
OcapPub: Towards networks of consent

This paper is released under the Apache License, version 2.0; see LICENSE.txt for details.

For a broader overview of various anti-spam techniques, see AP Unwanted Messages, which in many ways informed this document but currently differs in some of its suggested implementation rollout. (These two documents may converge.)

Conceptual overview

ActivityPub

ActivityPub is a federated social network protocol. It is generally fairly easily understood by reading the Overview section of the standard. In short, just as anyone can host their own email server, anyone can host their own ActivityPub server, and yet different users on different servers can interact. At the time of writing, ActivityPub is seeing major uptake, with several thousand nodes and several million registered users (with the caveat that registered users is not the same as active users). The wider network of ActivityPub-using programs is often called "the fediverse" (though this term predates ActivityPub, and was also used to describe adoption of its predecessor, OStatus).

ActivityPub defines both a client-to-server and server-to-server protocol, but at this time the server-to-server protocol is what is most popular and is the primary concern of this article.

ActivityPub's core design is fairly clean, following the actor model. Different entities on the network can create other actors/objects (such as someone writing a note) and communicate via message passing. A core set of behaviors is defined in the spec for common message types, but the system is extensible so that implementations may define new terms with minimal ambiguity. If two instances both understand the same terms, they may be able to operate using behaviors not defined in the original protocol. This is called an "open world assumption" and is necessary for a protocol as general as ActivityPub; it would be extremely egotistical of the ActivityPub authors to assume that we could predict all future needs of users.1
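
As an illustration of this extensibility (a hedged sketch only: the second @context entry and the ex:mood term are invented for this example and are not part of any real vocabulary), an activity carrying an extension term might look like the following, expressed as a Python dictionary standing in for the JSON a server would send:

#+BEGIN_SRC python
# A hedged sketch: an ordinary ActivityStreams "Create" activity carrying a
# hypothetical extension term.  The second @context entry and "ex:mood" are
# invented; servers that understand the extra vocabulary can act on it,
# while others simply ignore it.
activity = {
    "@context": [
        "https://www.w3.org/ns/activitystreams",
        {"ex": "https://example.org/ns/illustrative#"},
    ],
    "type": "Create",
    "actor": "https://social.example/users/alyssa",
    "to": ["https://chatty.example/users/ben"],
    "object": {
        "type": "Note",
        "content": "Say, did you finish reading that book I lent you?",
        "ex:mood": "curious",  # extension term, not defined by ActivityPub itself
    },
}
#+END_SRC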

Unfortunately (mostly due to time constraints and lack of consensus), even though most of what is defined in ActivityPub is fairly clean/simple, ActivityPub needed to be released with "holes in the spec". Certain key aspects critical to a functioning ActivityPub server are not specified:

  • Authentication is not specified. Authentication is important to verify "did this entity really say this thing".2 However, the community has mostly converged on using HTTP Signatures to sign requests when delivering posts to other users. The advantage of HTTP Signatures is that they are extremely simple to implement and require no normalization of message structure; you simply sign the body (and some headers) as you are sending it (a rough sketch of this signing flow appears after this list). The disadvantage of HTTP Signatures is that the signature does not "stick" to the original post and so cannot be "carried around" the network. A minority of implementations have implemented early versions of Linked Data Proofs (formerly known as "Linked Data Signatures"), but these require a normalization algorithm for which not all implementers have a library in their language, so Linked Data Proofs have not yet caught on as widely as HTTP Signatures.
  • Authorization is also not specified. (Authentication and authorization are frequently confused, especially because in English the words are so similar, but they mean two very different things: the former is checking who said or did a thing, the latter is checking whether they are allowed to do a thing.) As of right now, authorization tends to be extremely ad hoc in ActivityPub systems, sometimes as ad hoc as unspecified heuristics built from tracking who received messages previously, who sent a message the first time, and so on. Sadly, the primary workaround has been that interactions which require richer authorization simply have not been rolled out onto the ActivityPub network.
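
To make the authentication hole concrete, here is a rough, hedged sketch of the HTTP Signatures flow mentioned above, using the Python cryptography library. The key handling, actor id, and inbox URL are illustrative only; real servers publish the actor's public key and agree on which headers are signed.

#+BEGIN_SRC python
# Rough sketch of HTTP Signatures as commonly used on the fediverse.
# Key material, actor id, and inbox URL are illustrative only.
import base64
import hashlib
from email.utils import formatdate

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

body = b'{"type": "Create", "object": {"type": "Note", "content": "hi"}}'
digest = "SHA-256=" + base64.b64encode(hashlib.sha256(body).digest()).decode()
date = formatdate(usegmt=True)

# The signature covers the request target plus a few headers, exactly as sent.
signing_string = "\n".join([
    "(request-target): post /inbox",
    "host: remote.example",
    "date: " + date,
    "digest: " + digest,
])
signature = base64.b64encode(
    private_key.sign(signing_string.encode(), padding.PKCS1v15(), hashes.SHA256())
).decode()

signature_header = (
    'keyId="https://social.example/users/alyssa#main-key",'
    'algorithm="rsa-sha256",'
    'headers="(request-target) host date digest",'
    'signature="' + signature + '"'
)
# The receiving server fetches the actor's public key and rebuilds the same
# signing string.  Note that the signature is over the HTTP request, not the
# activity itself, so it cannot travel with the post once delivered.
#+END_SRC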

Compounding this situation is the widespread confusion/belief that authorization must stem from authentication. This document aims to show that not only is this untrue, it is also a dangerous assumption with unintended consequences. An alternative approach based on "object capabilities" is demonstrated, showing that the actor model itself, taken in its purest form, is already a sufficient authorization system.

Unfortunately there is a complication. At the last minute of ActivityPub's standardization, sharedInbox was added as a mutation of the previously described publicInbox (which was a place for servers to share public content). The motivation for sharedInbox is admirable: while ActivityPub is based on explicit message sending to actors' inbox endpoints, if an actor on server A needs to send a message to 1000 followers on server B, why should server A make 1000 separate requests when it could make one? A good point, but the primary mistake is in how that one request is made; rather than sending one message with a listing of all 1000 recipients on that server (which would preserve the integrity of the actor model), it was argued that servers already track follower information, so the receiving server can decide whom to deliver the message to. Unfortunately this decision breaks the actor model, and also our suggested solution to authorization; see MultiBox for a suggestion on how we can solve this.
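
The contrast can be sketched as follows (all ids and endpoints are invented, and the batching endpoint shown in the first style is hypothetical). The first delivery keeps the recipient list inside the message itself, while the sharedInbox style leaves fan-out to the receiving server's own follower records:

#+BEGIN_SRC python
# Illustrative sketch only: all ids and endpoints below are invented.
# Style that preserves the actor model: one POST to server B, but the
# message itself still lists exactly who on server B should receive it.
explicit_batched_delivery = {
    "endpoint": "https://b.example/multibox",   # hypothetical batching endpoint
    "activity": {
        "type": "Create",
        "actor": "https://a.example/users/alyssa",
        "to": [
            "https://b.example/users/ben",
            "https://b.example/users/carol",
            # ...and the rest of the recipients on server B
        ],
        "object": {"type": "Note", "content": "Hello, everyone!"},
    },
}

# sharedInbox as specified: also one POST, but addressed to the followers
# collection; server B consults its own follower records to decide who
# actually receives the message, stepping outside pure message passing.
shared_inbox_delivery = {
    "endpoint": "https://b.example/sharedInbox",
    "activity": {
        "type": "Create",
        "actor": "https://a.example/users/alyssa",
        "to": ["https://a.example/users/alyssa/followers"],
        "object": {"type": "Note", "content": "Hello, everyone!"},
    },
}
#+END_SRC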

Despite these issues, ActivityPub has achieved major adoption. ActivityPub has the good fortune that its earliest adopters tended to be people who cared about human rights and the needs of marginalized groups, and spam has been relatively minimal.

The mess we're in

Mastodon Is Like Twitter Without Nazis, So Why Are We Not Using It? Article by Sarah Jeong, which drove much interest in adoption of Mastodon and the surrounding "fediverse"

At the time this article was written about Mastodon (by far the most popular implementation of ActivityPub, and also largely responsible for driving interest in the protocol amongst other projects), its premise was semi-true; it was not that there were no neo-nazis on the fediverse, but that the primary group which had driven recent adoption were themselves marginalized groups who felt betrayed by the larger centralized social networks. They decided it was time to make homes for themselves. The article participated in an ongoing narrative that (from the author's perspective) helped reinforce these community norms for the better.

However, there was nothing about Mastodon or the fediverse at large (including the core of ActivityPub) that specifically prevented nazis or other entities conveying undesirable messages (including spam) from entering the network; they just weren't there, or were in small enough numbers that instance administrators could block them. The fediverse no longer has the luxury of claiming to be neo-nazi free (if it ever could). People from marginalized groups, to whom the fediverse has in recent history appealed, are now at risk of targeted harassment from these groups. Even untargeted messages, such as general hate speech, may have a severe negative impact on one's well-being. Spam, likewise, is an increasing concern for administrators and implementers (as it has historically been for other federated social protocols, such as email/SMTP and OStatus during its heyday). It appears that the same properties of decentralized social networks that allow marginalized communities to make homes for themselves also mean that harassment, hate speech, and spam cannot be wholly ejected from the system.

Must all good things come to an end?

Unwanted messages, from spam to harassment

One thing that spam and harassment have in common is that they are the delivery of messages that are not desired by their recipient. However, it would be a mistake to claim that the impact of the two is the same: spam is an annoyance, and mostly wastes time; harassment wastes time, but may also cause trauma.

Nonetheless, despite the impact of spam and harassment being very different, the solutions are likely very similar. Unwanted messages tend to come from unwanted social connections. If the problem is users receiving unwanted messages, perhaps the solution comes in making intentional social connections. But how can we get from here to there?

Freedom of speech also means freedom to filter

As an intermediate step, we should throw out a source of confusion: what is "freedom of speech"? Does it mean that we have to listen to hate speech?

We can start by saying that freedom of speech and freedom of assembly are critical tools. Indeed, these are some of the few tools we have against totalitarian authorities, by which the world is increasingly threatened.

Nonetheless, we are under severe threat from neo-fascists. Neo-fascists play an interesting trick: they exercise their freedom of speech by espousing hate speech and, when people say they don't want to listen to them, say that this is censorship.

Except that freedom of speech merely means that you have the freedom to exercise your speech, somewhere. It does not mean that everyone has to listen to you. You also have the right to call someone an asshole, or to stop listening to them. There is no requirement to read every spam message that crosses your email inbox in order to preserve freedom of speech, nor is there any requirement to listen to someone who is being an asshole. The freedom to filter is the complement to freedom of speech. This applies both to individuals and to communities.

Indeed, the trick of neo-fascists ends in a particularly dangerous hook: they are not really interested in freedom of speech at all. They are interested in freedom of their speech, up until the point where they can gain enough power to prevent others from saying things they don't like. This is easily demonstrated; see how many people on the internet are willing to threaten women and minorities who exercise the smallest amount of autonomy, yet the moment that someone calls them out on their own bullshit, they cry censorship. Don't confuse an argument for "freeze peach" for an argument for "free speech".

Still, what can we do? Perhaps we cannot prevent assholes from joining the wider social network… but maybe we can develop a system where we don't have to hear them.

Did we borrow the wrong assumptions?

"What if we're making the wrong assumptions about our social networks? What if we're focusing on breadth, when we really should be focusing on depth?" from a conversation with Evan Prodromou, initial designer of both ActivityPub and OStatus' protcol designs

What is Evan trying to say here? Most contemporary social networks are run by surveillance capitalist organizations; in other words, their business model is based on capturing as much "attention" as they can to sell to advertisers. Whether or not capitalism is a problem is left as an exercise for the reader, but hopefully most readers will agree that a business model based on destroying privacy can lead to undesirable outcomes. One such undesirable outcome is that these companies subtly shape the way people interact with each other based not on what is healthiest for people and their social relationships, but on what will generate the most advertising revenue.

One egregious example of this is the prominence of the "follower count" in contemporary social networks, particularly Twitter. When visiting another user's profile, even someone who is aware of and dislikes its effect will have trouble not comparing follower counts and mentally using this as a value judgement, either about the other person or about themselves. Users are subconsciously tricked into playing a popularity contest, whether they want to play that game or not. Rather than being encouraged to develop a network of meaningful relationships with which they have meaningful communications, users face a subconscious pressure to tailor their messaging, and even who else they follow, to maximize their follower count.

So why on earth would we see follower counts also appear prominently on the federated social web, if these tools are generally built by teams that do not benefit from the same advertising structure? The answer is simple: it is what developers and users are both familiar with. This is not an accusation; in fact, it is a highly sympathetic position to take: the cost, for developers and users alike, of developing a system is lower by going with the familiar rather than researching the ideal. But the consequences may nonetheless be severe.

So it is, too, with how we build our notions of security and authorization, which developers tend to mimic from the systems they have already seen. Why wouldn't they? But it may be that these patterns are, in fact, anti-patterns. It may be time for some re-evaluation.

We must not pretend we can prevent what we can not

"By leading users and programmers to make decisions under a false sense of security about what others can be prevented from doing, ACLs seduce them into actions that compromise their own security." From an analysis from Mark S. Miller on whether preventing delegation is even possible

The object capability community has a phrase that is almost, but not entirely, right in my book: "Only prohibit what you can prevent". This seems almost right, except that there may be things that, within the bounds of a system, we cannot technically prevent, yet which we prohibit from occurring anyhow and may enforce at another abstraction layer, including social layers. So here is a slightly modified version of that phrase: "We must not pretend we can prevent what we can not."

This is important. There may be things which we strongly wish to prevent on a protocol level, but which are literally impossible to do on only that layer. If we misrepresent what we can and cannot prevent, we open our users to harm when those things that we actually knew we could not prevent come to pass.

A common example of something that cannot be prevented is the copying of information. Due to basic mathematical properties of the universe, it is literally impossible, on the data transmission layer alone, to prevent someone from copying information once they have it. This does not mean that there are no other layers at which we can prohibit such activity, but we shouldn't pretend we can prevent it at the protocol layer.

For example, Alice may converse with her therapist over the protocol of sound wave vibrations (i.e., simple human speech). Alice may be expressing information that is meant to be private, but there is nothing about speech traveling through the air that prevents the therapist from breaking confidence and gossiping about it to outside sources. Alice could take her therapist to court, and her therapist could lose her license, but that remedy does not live on the protocol layer of ordinary human speech itself. Similarly, we could add a "please keep this private" flag to ActivityPub messages so that Alice could ask Bob not to share her secrets. Bob, being a good friend, will probably comply, and maybe his client will help him cooperate by default. But "please" (or "request") really is key to this interface, since from a protocol perspective there is no guarantee that Bob will comply. This does not mean there are no consequences for Bob if he betrays Alice's trust: Alice may stop being his friend, or at least unfollow him.
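
Such a flag is purely hypothetical (the vocabulary and term below are invented), but sketching it makes the point clear: it is a request that the recipient's software can choose to honor, not something the protocol can enforce.

#+BEGIN_SRC python
# Hypothetical sketch: the second @context entry and "ex:pleaseKeepPrivate"
# are invented for illustration.
note = {
    "@context": [
        "https://www.w3.org/ns/activitystreams",
        {"ex": "https://example.org/ns/illustrative#"},
    ],
    "type": "Note",
    "attributedTo": "https://social.example/users/alice",
    "to": ["https://chatty.example/users/bob"],
    "content": "Please keep this between us.",
    # A polite request Bob's client can honor by default; nothing at the
    # protocol layer can force Bob to comply.
    "ex:pleaseKeepPrivate": True,
}
#+END_SRC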

Likewise, it is not possible to attach a "do not delegate" flag onto any form of authority, whether it be an ocap or an ACL. If Alice tells Bob that Bob, and Bob alone, has been granted access to this tool, we should realize that as long as Bob wants to cooperate with Mallet and has a communication channel to him, he can always set up a proxy that forwards Mallet's requests to Alice's tool as if they were Bob's. We are not endorsing this, but we are acknowledging it. Still, there is something we can do: we could wrap Bob's access to Alice's tool in such a way that every invocation is logged as coming through the capability Alice handed to Bob, and disable that access if it is misused… whether due to Bob's actions or Mallet's. In this way, even though Alice cannot prevent Bob from delegating authority, Alice can hold Bob accountable for the authority granted to him.
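
The wrapping Alice can do here is the classic object-capability "caretaker" pattern. The sketch below is illustrative Python under those assumptions, not part of any specified protocol:

#+BEGIN_SRC python
# Illustrative object-capability "caretaker" pattern: Alice wraps the
# capability she hands to Bob so every use is logged against Bob's grant,
# and so she can revoke it later.  Nothing here is protocol-specified.
from typing import Callable, List


def make_caretaker(target: Callable, label: str, log: List[str]):
    """Return (proxy, revoke).  The proxy forwards to `target` until revoked."""
    revoked = False

    def proxy(*args, **kwargs):
        if revoked:
            raise PermissionError(f"capability '{label}' has been revoked")
        log.append(f"capability '{label}' invoked")
        return target(*args, **kwargs)

    def revoke():
        nonlocal revoked
        revoked = True

    return proxy, revoke


# Usage: even if Bob proxies his access onward to Mallet, every invocation
# shows up as the "granted-to-bob" capability, and Alice can shut it off.
log: List[str] = []


def alices_tool(msg: str) -> str:
    return f"tool ran: {msg}"


bobs_access, revoke_bob = make_caretaker(alices_tool, "granted-to-bob", log)
print(bobs_access("hello"))   # works, and is logged against Bob's grant
revoke_bob()                  # Alice holds Bob accountable by cutting access
#+END_SRC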

If we do not take this approach, we expose our users to harm. Users may believe their privacy is intact and may be unprepared for the point at which it is violated, and so on and so on.

We must not pretend we can prevent what we can not. This will be a guiding principle for the rest of this document.

Anti-solutions

In this section we discuss "solutions" that are, at least on their own, an insufficient foundation to solve the pressing problems this paper is trying to resolve. Some of these might be useful complementary tools, but are structurally insufficient to be the foundation of our approach.

Blocklists, allow-lists, and perimeter security

Access Control Lists

Content-centric filtering

Reputation scoring

Going back to centralization

A way forward: networks of consent

Must we boil the ocean?

How to build it

Object capabilities (ocaps)

Ocaps meet ActivityPub objects/actors

True names, public profiles, private profiles

Accountability and revocation in an ocap system

Rights amplification and group-style permissions

MultiBox vs sharedInbox

Limitations

Future work

Petnames

Conclusions


1

The technology that ActivityPub uses to accomplish this is called json-ld, and it has admittedly been one of the most controversial decisions in the ActivityPub specification. Most of the objections have surrounded the unavailability of json-ld libraries in some languages, or the difficulty of mapping an open-world assumption onto strongly typed systems without an "other data" bucket. Since a project like ActivityPub must allow for the possibility of extensions, we cannot escape open-world assumptions. However, there may be things that can be done to improve happiness about which extension mechanism is used; those discussions are out of scope for this particular document.

2

Or more accurately, since users may appoint someone else to manage posting for them, "was this post really made by someone who is authorized to speak on behalf of this entity".