/For a broader overview of various anti-spam techniques, see [[https://github.com/WebOfTrustInfo/rwot9-prague/blob/master/topics-and-advance-readings/ap-unwanted-messages.md][AP Unwanted Messages]], which in many ways informed this document but currently differs in some of its suggested implementation rollout. (These two documents may converge.)/
- Authentication is not specified. Authentication is important to
verify "did this entity really say this thing".[fn:did-you-say-it]
However, the community has mostly converged on using [[https://tools.ietf.org/html/draft-cavage-http-signatures-11][HTTP Signatures]]
to sign requests when delivering posts to other users.
The advantage of HTTP Signatures is that they are extremely simple
to implement and require no normalization of message structure;
simply sign the body (and some headers) as-you-are-sending-it.
(A minimal sketch of this signing flow appears after this list.)
The disadvantage of HTTP Signatures is that this signature does
not "stick" to the original post and so cannot be "carried around"
the network.
A minority of implementations have adopted early versions
of [[https://w3c-dvcg.github.io/ld-proofs/][Linked Data Proofs]] (formerly known as "Linked Data Signatures");
however, these require a normalization algorithm for which not
every implementer has a library in their language, so Linked Data
Proofs have not yet caught on as widely as HTTP Signatures.
- Authorization is also not specified. (Authentication and
authorization are frequently confused (especially because in
English, the words are so similar) but mean two very different
things: the former is checking who said/did a thing, the latter is
checking whether they are allowed to do a thing.) As of right now,
authorization tends to be extremely ad-hoc in ActivityPub systems,
sometimes as ad-hoc as unspecified heuristics based on tracking who
received a message previously, who sent it first, and so on.
The primary workaround, sadly, is that interactions which require
richer authorization simply have not been rolled out onto the
ActivityPub network.
Compounding this situation is the general confusion/belief that
authorization must stem from authentication.
This document aims to show that not only is this not true, it is also
a dangerous assumption with unintended consequences.
An alternative approach based on "object capabilities" is
demonstrated, showing that the actor model, taken in its purest
form, is already a sufficient authorization system.
(A toy illustration of this intuition also appears after this list.)
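To make the HTTP Signatures point above concrete, here is a minimal
sketch in Python of what signing an outgoing delivery in the
draft-cavage style looks like. It is a sketch under assumptions, not
a vetted implementation: it presumes an RSA key and the =cryptography=
library, and the key id, host, and path values are hypothetical
placeholders.

#+BEGIN_SRC python
# Minimal sketch of draft-cavage-style HTTP Signatures for an
# ActivityPub delivery. Assumes an RSA private key in PEM form.
import base64
import hashlib
from datetime import datetime, timezone
from email.utils import format_datetime

from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def sign_delivery(private_key_pem: bytes, key_id: str,
                  host: str, path: str, body: bytes) -> dict:
    """Build the headers for a signed POST to a remote actor's inbox."""
    digest = "SHA-256=" + base64.b64encode(
        hashlib.sha256(body).digest()).decode()
    date = format_datetime(datetime.now(timezone.utc), usegmt=True)
    # The signing string is just the headers as-you-are-sending-them,
    # newline separated; no normalization of the message body needed.
    signing_string = "\n".join([
        f"(request-target): post {path}",
        f"host: {host}",
        f"date: {date}",
        f"digest: {digest}",
    ]).encode("utf-8")
    key = serialization.load_pem_private_key(private_key_pem, password=None)
    signature = base64.b64encode(
        key.sign(signing_string, padding.PKCS1v15(), hashes.SHA256())).decode()
    return {
        "Host": host,
        "Date": date,
        "Digest": digest,
        "Signature": ('keyId="%s",algorithm="rsa-sha256",'
                      'headers="(request-target) host date digest",'
                      'signature="%s"' % (key_id, signature)),
    }
#+END_SRC

The receiving server reconstructs the same signing string from the
request it actually received and verifies it against the actor's
published public key. Notice that the signature covers only this one
HTTP exchange; nothing binds it to the post itself, which is exactly
why it cannot "stick" to the object as it travels the network.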
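Likewise, as a small foretaste of the object-capability approach,
here is a toy in-process sketch. All names are illustrative, and an
unguessable bearer token stands in for an unforgeable object
reference: the point is that authority flows from /holding/ the
capability, not from verifying who the caller is.

#+BEGIN_SRC python
# Toy illustration of the object-capability intuition.
import secrets

class Inbox:
    def __init__(self):
        self.messages = []
        # The capability itself: an unguessable bearer token.
        self.post_cap = secrets.token_urlsafe(32)

    def post(self, cap: str, message: str):
        # No authentication step; possession of the capability
        # is the entire authorization check.
        if cap != self.post_cap:
            raise PermissionError("caller does not hold a posting capability")
        self.messages.append(message)

alice = Inbox()
bob_cap = alice.post_cap          # Alice hands Bob the capability
alice.post(bob_cap, "hi Alice!")  # Bob may post
try:
    alice.post("guessed-token", "buy pills!")  # a spammer without the cap...
except PermissionError:
    pass                                       # ...is refused
#+END_SRC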
# - sharedInbox is a break from the actor model protocol and was a late
# addition
Unfortunately there is a complication.
At the last minute of ActivityPub's standardization, =sharedInbox= was
added as a form of mutated behavior from the previously described
=publicInbox= (which was a place for servers to share public content).
The motivation of =sharedInbox= is admirable: while ActivityPub is based
on explicit message sending to actors' =inbox= endpoints, if an actor
on server A needs to send a message to 1000 followers on server B,
why should server A make 1000 separate requests when it could do it
in one?
A good point, but the primary mistake lies in how this one request
is made: rather than sending one message listing all 1000 recipients
on that server (which would preserve the integrity of the actor
model), it was advocated that, since servers already track follower
information, the receiving server can decide whom to deliver the
message to.
Unfortunately this decision breaks the actor model, and with it our
suggested solution to authorization; see [[https://github.com/WebOfTrustInfo/rwot9-prague/blob/master/topics-and-advance-readings/ap-unwanted-messages.md#org7937fed][MultiBox]] for a suggestion on
how we might get this property back.
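To make the contrast concrete, here is a hedged sketch of the three
delivery strategies for one post with 1000 followers on server B.
The endpoint names and the explicit recipient listing in the last
variant are illustrative only; MultiBox is a suggestion, not a
finalized protocol.

#+BEGIN_SRC python
# Sketch of three ways server A might deliver one activity to
# 1000 followers on server B. Endpoint URLs are placeholders.
import requests

activity = {"type": "Create", "actor": "https://a.example/alice"}

def deliver_per_actor(follower_inboxes):
    # Pure actor model: one POST per recipient inbox (1000 requests).
    for inbox in follower_inboxes:
        requests.post(inbox, json=activity)

def deliver_shared_inbox():
    # sharedInbox: one POST, but the *receiving* server consults its
    # own follower records to decide who sees the message, breaking
    # explicit addressing.
    requests.post("https://b.example/sharedInbox", json=activity)

def deliver_with_explicit_recipients(follower_inboxes):
    # MultiBox-style idea: still one POST, but every recipient is
    # named explicitly, preserving actor-model integrity.
    requests.post("https://b.example/multibox",
                  json={"recipients": follower_inboxes,
                        "activity": activity})
#+END_SRC

The third variant is only a gesture at the MultiBox idea; its value
is in showing that batching and explicit addressing are not mutually
exclusive.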
#+BEGIN_QUOTE
[[https://www.vice.com/en_us/article/783akg/mastodon-is-like-twitter-without-nazis-so-why-are-we-not-using-it][Mastodon Is Like Twitter Without Nazis, So Why Are We Not Using It?]]
-- Article by Sarah Jeong, which drove much interest in
adoption of Mastodon and the surrounding "fediverse"
#+END_QUOTE
At the time this article was written about Mastodon (by far the most
popular implementation of ActivityPub, and also largely responsible
for driving interest in the protocol amongst other projects), its
premise was semi-true: it was not that there were no neo-nazis on
the fediverse, but the primary groups which had driven recent
adoption were themselves marginalized groups who felt betrayed by
the larger centralized social networks.
They decided it was time for them to make homes for themselves.
The article participated in an ongoing narrative that (from the
author's perspective) helped reinforce these community norms for the
better.
However, nothing about Mastodon or the fediverse at large
(including the core of ActivityPub) /specifically/ prevented nazis or
other entities conveying undesirable messages (including spam) from
entering the network; they simply weren't there, or were there in
small enough numbers that instance administrators could block them.
However, the fediverse no longer has the luxury of
[[https://www.vice.com/en_us/article/mb8y3x/the-nazi-free-alternative-to-twitter-is-now-home-to-the-biggest-far-right-social-network][claiming to be neo-nazi free]] (if it ever could).
People from marginalized groups, to whom the fediverse has appealed
in recent history, are now at risk of targeted harassment from these
groups.
Even untargeted messages, such as general hate speech, may have a
severe negative impact on one's well being.
Spam, likewise, is an increasing concern among administrators and
implementors (as it has historically been for other federated social
protocols, such as email/SMTP and OStatus during its heyday).
It appears that the very nature of decentralized social networks
that allows marginalized communities to make homes for themselves
also means that harassment, hate speech, and spam cannot be wholly
ejected from the system.
Must all good things come to an end?
** Unwanted messages, from spam to harassment
One thing that spam and harassment have in common is that they are the
delivery of messages that are not desired by their recipient.
However, it would be a mistake to claim that the impact of the two
is the same: spam is an annoyance, and mostly wastes time.
Harassment wastes time, but may also cause trauma.
Nonetheless, despite their very different impacts, the solutions to
spam and harassment are likely very similar.
Unwanted messages tend to come from unwanted social connections.
If the problem is users receiving unwanted messages, perhaps the
solution lies in making intentional social connections.