** Don't pretend we can prevent what we can't
#+BEGIN_QUOTE
"By leading users and programmers to make decisions under a false
sense of security about what others can be prevented from doing,
ACLs seduce them into actions that compromise their own security."
-- From an analysis by Mark S. Miller on
[[http://erights.org/elib/capability/delegations.html][whether preventing delegation is even possible]]
#+END_QUOTE

# - introduce ocap community phrase
# - introduce revised version

# - "the fediverse is not indexed"

The object capability community has a phrase that is almost, but not
entirely, right in my book: "Only prohibit what you can prevent".
This seems almost right, except that there may be things which, within
the bounds of a system, we cannot technically prevent, yet which we
prohibit anyway, enforcing the prohibition at another abstraction
layer, including social layers.
So here is a slightly modified version of that phrase: "Don't pretend
we can prevent what we can't."

This is important.
There may be things which we strongly wish to prevent at the protocol
level, but which are literally impossible to prevent at that layer
alone.
If we misrepresent what we can and cannot prevent, we open our users
to harm when the things we knew we could not prevent come to pass.

A common example of something that cannot be prevented is the copying
of information.
Due to basic mathematical properties of the universe, it is literally
impossible, at the data transmission layer alone, to prevent someone
from copying information once they have it.
This does not mean that there aren't other layers where we can
prohibit such activity, but we shouldn't pretend we can prevent it
at the protocol layer.

For example, Alice may converse with her therapist over the protocol
of sound wave vibrations (i.e., simple human speech).
Alice may be expressing information that is meant to be private, but
there is nothing about speech traveling through the air that prevents
the therapist from breaking confidence and gossiping about it to
outside sources.
Alice could take her therapist to court, and her therapist could lose
her license, but that recourse operates outside the protocol layer of
ordinary human speech itself.
Similarly, we could add a "please keep this private" flag to
ActivityPub messages so that Alice could tell Bob to please not
share her secrets.
Bob, being a good friend, will probably comply, and maybe his client
will help him cooperate by default.
But "please" or "request" is really key to our interface, since from
a protocol perspective, there is no guarantee that Bob will comply.
However, this does not mean there are no consequences for Bob if he
betrays Alice's trust: Alice may stop being his friend, or at least
unfollow him.

Likewise, it is not possible to attach a "do not delegate" flag onto
any form of authority, whether it be an ocap or an ACL.
If Alice tells Bob that Bob, and Bob alone, has been granted access to
this tool, we should realize that as long as Bob wants to cooperate
with Mallet and has communication access to him, he can always set up
a proxy that forwards requests to Alice's tool as if they were Bob's.
We are not endorsing this, but we are acknowledging it.
Still, there is something we can do: we could wrap Bob's access to
Alice's tool so that every invocation is logged as "the capability
Alice handed to Bob", and disable access if it is misused... whether
due to Bob's actions or Mallet's.
In this way, even though Alice cannot prevent Bob from delegating
authority, Alice can hold Bob accountable for the authority granted
to him.

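What such wrapping might look like in practice is the classic ocap
"caretaker" pattern.
Below is a minimal sketch (Python, with invented names throughout,
not code from this project): Alice hands Bob a wrapper rather than
the tool itself, every use is attributed to Bob's grant, and the
whole grant can be revoked.

#+BEGIN_SRC python
from datetime import datetime, timezone

class RevocableLogger:
    """The ocap "caretaker" pattern: wrap a capability (here, any
    callable) with per-grant logging and a kill switch."""

    def __init__(self, target, grantee, log):
        self._target = target
        self._grantee = grantee  # who Alice granted this wrapper to
        self._log = log
        self._revoked = False

    def __call__(self, *args):
        if self._revoked:
            raise PermissionError(f"grant to {self._grantee} was revoked")
        # Every use is attributed to Bob's grant, whether the request
        # originated with Bob or with someone Bob proxies for.
        self._log.append((datetime.now(timezone.utc), self._grantee, args))
        return self._target(*args)

    def revoke(self):
        self._revoked = True

# Alice's underlying tool (a stand-in for anything invocable):
def tool(task):
    return f"did {task}"

log = []
bobs_cap = RevocableLogger(tool, "bob", log)

bobs_cap("something benign")   # logged against Bob's grant
# Bob can always hand `bobs_cap` (or a proxy to it) to Mallet; Alice
# cannot prevent that, but any misuse still shows up under Bob's
# grant, and Alice can shut the whole grant off:
bobs_cap("something abusive")
bobs_cap.revoke()
#+END_SRC

Note that Alice never learns whether a given invocation "really" came
from Bob or from Mallet; what she gets is accountability for the
grant, not prevention of delegation.
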
If we do not take this approach, we expose our users to harm.
Users may believe their privacy is intact and so be unprepared for
the point at which it is violated.

We must not pretend we can prevent what we cannot.
This will be a guiding principle for the rest of this document.

** Anti-solutions