diff --git a/README.org b/README.org
index 232e1c5..ee476bc 100644
--- a/README.org
+++ b/README.org
@@ -590,7 +590,9 @@ To add to this problem, false-negatives from these systems can be
 disasterous.
 [[https://www.nytimes.com/2017/03/20/technology/youtube-lgbt-videos.html][YouTube has marked non-sexual LGBT+ videos as "sensitive"]],
 and many machine learning systems have been found to pick up
-[[https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing][racist assumptions]] from their surrounding environment.
+[[https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing][racist assumptions]] from their surrounding environment
+(and other forms of "ambient bigotry" from the source society's
+power dynamics as well, of course).
 This isn't to say that content filtering can't be a useful
 complement; if a user doesn't want to look at some content with
 certain words,