Facebook is considering using on-device AI algorithms to scan and moderate content in its WhatsApp messaging service to enforce its acceptable speech policy.
If implemented, the app itself would automatically scan messages before they are encrypted and sent.
Experts warn, however, that such a setup would require WhatsApp to transmit prohibited messages back to Facebook in order to improve the AI’s training.
Furthermore, concerns have been raised that the development could pave the way for governments to force social media firms to spy on user messages for them.
Facebook revealed its plans to transfer content moderation from human-staffed data centres to on-device, AI-powered systems in a presentation at the firm’s F8 annual developer conference on May 1, 2019.
Under the proposal, content moderation would be performed on user messages directly within WhatsApp, prior to their encryption — with the filtering algorithms themselves being regularly updated from a central source.
In this way, Facebook would be able to prevent users from sharing content that violates the firm’s acceptable speech guidelines, ostensibly without compromising the application of end-to-end encryption within the messaging service.
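The pipeline described above — a local filter running over the plaintext before end-to-end encryption is applied — can be sketched as follows. This is a purely illustrative mock-up, not Facebook's actual design: the function names, the keyword blocklist standing in for an on-device AI model, and the pass-through `encrypt` callback are all hypothetical.

```python
# Illustrative sketch of client-side scanning before encryption.
# A keyword blocklist stands in for the on-device AI classifier,
# which in the described design would be updated from a central source.
BLOCKLIST = {"prohibited_term"}  # hypothetical stand-in for an ML model


def passes_filter(plaintext: str) -> bool:
    """Placeholder for the on-device moderation model."""
    return not any(term in plaintext.lower() for term in BLOCKLIST)


def send_message(plaintext: str, encrypt):
    """Scan locally first; only messages that pass are encrypted and sent."""
    if not passes_filter(plaintext):
        return None  # blocked on-device; nothing is transmitted
    return encrypt(plaintext)  # end-to-end encryption runs only after the scan
```

The key property critics highlight is visible here: the filter sees the plaintext, so the encryption itself is unchanged while the content is nonetheless inspected before it is ever protected.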
However, security experts have warned that introducing in-app content moderation would be tantamount to creating a backdoor on the device.
According to Forbes, the ongoing development and training of such content moderating algorithms would necessitate the app transmitting samples of prohibited, unencrypted messages back to Facebook for analysis.