[en] 1/8 Last year, I created illustrations for the Center for Democracy and Technology (CDT). They published research on how end-to-end-encrypted messages could be moderated on platforms: https://cdt.org/insights/outside-looking-in-approaches-to-content-moderation-in-end-to-end-encrypted-systems/
Allow me to introduce my drawings to you!
[en] 2/8 Moderating content in end-to-end encrypted systems involves several phases: platform operators need to define the content to be moderated, detect that type of content, evaluate their findings, inform users about moderated content, allow users to appeal a decision, and educate them.
[en] 3/8 Detection phase of content moderation. Detection of content to be moderated takes place in a space of possibilities: it can happen before or after messages are sent. Content can be moderated actively or proactively. The detection of fraudulent messages can rely on content or on metadata.
[en] 4/8 1- User reporting. This technique is called message franking, or more precisely: asymmetric message franking. It also works on decentralized messaging systems. Users can report fraudulent messages to a moderator, who can verifiably check the reported message, the sender, and the receiver using a cryptographic signature and public-key cryptography. (Moderation space: active, after sending)
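The core idea of message franking can be sketched in a few lines. This is a deliberately simplified, symmetric variant (HMAC commitments rather than the asymmetric signatures mentioned above); all function names and the use of a per-message random key are illustrative assumptions, not the report's actual protocol.

```python
import hashlib
import hmac
import os

def frank_message(message: bytes) -> tuple[bytes, bytes]:
    """Sender side: commit to the plaintext with a one-time franking key.
    The commitment travels alongside the ciphertext; the key is encrypted
    inside it, so only the receiver can later open the commitment."""
    franking_key = os.urandom(32)
    commitment = hmac.new(franking_key, message, hashlib.sha256).digest()
    return franking_key, commitment

def verify_report(message: bytes, franking_key: bytes, commitment: bytes) -> bool:
    """Moderator side: a reporting receiver reveals the message and key;
    the moderator checks that they match the commitment the platform saw
    when the message was sent -- without ever decrypting other traffic."""
    expected = hmac.new(franking_key, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, commitment)

key, tag = frank_message(b"suspicious message")
genuine = verify_report(b"suspicious message", key, tag)  # True: report checks out
forged = verify_report(b"forged message", key, tag)       # False: forgery rejected
```

The point of the commitment is deniable accountability: a receiver can prove what was sent, but cannot fabricate a report that verifies.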
[en] 5/8 2- Metadata analysis. Even encrypted messages leave traces. The metadata associated with a message is generally not encrypted and can be used to detect fraudulent messages. #Metadata is "data about data". (Moderation space: active or proactive, after sending)
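A minimal sketch of what metadata-only detection can look like: flagging senders who message unusually many distinct recipients, a classic spam signal that never touches message content. The `MessageMetadata` fields and the `max_recipients` threshold are made-up examples, not values from the report.

```python
from dataclasses import dataclass

@dataclass
class MessageMetadata:
    """The unencrypted envelope of a message: who, to whom, how big, when."""
    sender: str
    recipient: str
    size_bytes: int
    timestamp: float

def flag_bulk_senders(log: list[MessageMetadata], max_recipients: int = 50) -> set[str]:
    """Flag senders who contact more distinct recipients than the threshold.
    Works purely on metadata -- the encrypted payload is never read."""
    recipients_per_sender: dict[str, set[str]] = {}
    for m in log:
        recipients_per_sender.setdefault(m.sender, set()).add(m.recipient)
    return {s for s, rs in recipients_per_sender.items() if len(rs) > max_recipients}

# A bulk sender blasting 60 accounts vs. one normal conversation:
log = [MessageMetadata("spammer", f"user{i}", 64, 0.0) for i in range(60)]
log.append(MessageMetadata("alice", "bob", 128, 0.0))
flagged = flag_bulk_senders(log)  # {"spammer"}
```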
[en] 6/8 3- Traceability. Content moderation systems can also rely on storing information about sent messages. We can imagine this storage as a huge file cabinet. When a fraudulent message is reported, it can then be compared against the stored messages. This technique can be added on top of message franking, and can thus reveal how, for example, fake news spreads. Facebook's first implementation of message franking was based on traceability. (Moderation space: active, after sending)
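The "file cabinet" can be pictured as a table mapping message fingerprints to forwarding hops: on a report, every hop that carried the same content is returned, revealing how it spread. The class and method names below are illustrative; a real system would store only hashes or encrypted trace tags, not plaintext.

```python
import hashlib
from collections import defaultdict

class TraceStore:
    """A 'file cabinet' mapping message fingerprints to the hops that carried them."""

    def __init__(self) -> None:
        self._hops: dict[str, list[tuple[str, str]]] = defaultdict(list)

    def record(self, message: bytes, sender: str, recipient: str) -> None:
        """Store a (sender, recipient) hop under the message's fingerprint."""
        digest = hashlib.sha256(message).hexdigest()
        self._hops[digest].append((sender, recipient))

    def trace(self, reported_message: bytes) -> list[tuple[str, str]]:
        """On a report, return every recorded hop with a matching fingerprint."""
        digest = hashlib.sha256(reported_message).hexdigest()
        return list(self._hops.get(digest, []))

store = TraceStore()
store.record(b"viral rumor", "alice", "bob")
store.record(b"viral rumor", "bob", "carol")
hops = store.trace(b"viral rumor")  # [("alice", "bob"), ("bob", "carol")]
```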
[en] 7/8 4- Perceptual hashing. This technique consists of storing known fraudulent content, for example illegal pornography, in a database, using a unique hash for each piece of content. Outgoing content can then be compared against the database entries that were previously identified as problematic. This is the technique Apple wanted to introduce for iCloud, but later abandoned. The #EU wants to use this technique for #chatcontrol. (Moderation space: proactive, before sending)
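Unlike cryptographic hashes, perceptual hashes are designed so that similar images give similar hashes. Here is a toy "average hash" over a tiny grayscale pixel grid, assuming a plain list-of-lists image representation; real systems (e.g. Apple's NeuralHash) are far more sophisticated.

```python
def average_hash(pixels: list[list[int]]) -> int:
    """Tiny perceptual hash: one bit per pixel, set if the pixel is
    brighter than the image's mean. Small edits flip few bits."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# Database of hashes of previously flagged content:
known_bad = {average_hash([[0, 255], [255, 0]])}

# A slightly altered copy still matches within a small Hamming distance:
candidate = average_hash([[10, 250], [240, 5]])
is_match = any(hamming(candidate, h) <= 1 for h in known_bad)  # True
```

The Hamming-distance threshold is what lets the system catch re-encoded or lightly edited copies, which an exact cryptographic hash would miss.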
[en] 8/8 5- Predictive modeling. This technique consists of training an algorithm on what fraudulent messages or images look like. The algorithm can then predict whether future messages or images resemble them and flag them for moderation. (Moderation space: proactive or active, before or after sending)
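In the simplest form, "teaching an algorithm what flagged messages look like" can be a word-frequency classifier: count words in known-bad and known-good examples, then score new messages by which vocabulary they resemble. This toy scorer is a stand-in for the real machine-learning models the report discusses; the training examples and threshold are invented.

```python
from collections import Counter

def train(flagged: list[str], benign: list[str]) -> tuple[Counter, Counter]:
    """'Teach' the model: count word frequencies in each class of examples."""
    flagged_words = Counter(w for msg in flagged for w in msg.lower().split())
    benign_words = Counter(w for msg in benign for w in msg.lower().split())
    return flagged_words, benign_words

def score(message: str, flagged_words: Counter, benign_words: Counter) -> float:
    """Positive score -> the message resembles the flagged training examples."""
    return float(sum(flagged_words[w] - benign_words[w]
                     for w in message.lower().split()))

flagged_words, benign_words = train(
    flagged=["win free money now", "free prize claim now"],
    benign=["meeting at noon", "see you at lunch"],
)
suspicious = score("claim your free money", flagged_words, benign_words) > 0  # True
```

Deployed before sending (on-device) this is proactive moderation; applied to reported messages after sending, the same model works actively.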