Cyan CEO Ian Stevenson discusses whether a solution is possible in balancing the needs for user privacy, safety and free speech online.
Online safety and privacy have driven debate in the safety tech sector throughout 2021. In August, the question of what measures are appropriate in building safer online experiences was re-energised following Apple’s announcement about child safety and Child Sexual Abuse Material (CSAM) scanning on its user devices in the US.
Perhaps unsurprisingly, there was swift backlash. Privacy advocates and groups responded to the potential rollout by arguing that this form of surveillance has the potential to be compromised or misused, and that it would be a risky step towards impeding people’s right to privacy. Meanwhile, child safety organisations and charities argued the measures do not go far enough to help protect those most vulnerable online. By early September, Apple had backtracked.
Polarising debates on preserving or censoring free speech online have been in the headlines this year, too. During the delayed Euro 2020 tournament in July, several high-profile Black footballers fell victim to racist abuse on social media. Almost a dozen arrests followed, and widespread condemnation included criticism of social media platforms for not being transparent enough about what they deem abusive or racist content, and for not being firm or quick enough in removing posts or banning those responsible from their platforms.
These examples illuminate the ‘triangular battleground’ we face online, where there are no right answers, but plenty of wrong ones. Neither safety, privacy nor free speech can or should take absolute supremacy over any of the others. But defining how they should be balanced is extraordinarily challenging.
So, this raises the question: is a middle-ground solution even possible?
Striking a balance to protect people from forms of harm is not a new societal challenge. As technology has advanced, so too has its impact on individual, personal choice. There are already numerous examples of harms that are mitigated by balancing personal privacy and safety, and most of them allow for an element of personal choice and are either required or limited by legislation (or both).
We already require young people to show ID to buy alcohol, and there are identity and security checks before we can board an aeroplane. These measures are generally accepted because they feel proportionate to the risks involved.
In the debate about online safety, this need for proportionality is equally important, and the parallels with things we already do in our everyday lives are often missed.
The need for solutions is not lost at a political level. This year we have seen Bills centred on managing online harms progress through the legislatures of the UK, the US and Australia. There will, undoubtedly, be many left unsatisfied by the detail and content of these policies, but they at least open new channels for collaboration and debate.
The rise of end-to-end encryption (E2EE) in messaging creates particular challenges, with some opponents arguing it simply gives those seeking to cause harm an additional level of protection. E2EE, however, is not going to, and should not, go away. This position was recently reinforced by the Information Commissioner’s Office (ICO), which confirmed that E2EE should not be weakened with backdoors in the UK, as the technology remains one of the most reliable approaches to data protection.
The tension between safety, privacy and free speech can make it seem like there can be no winners.
But such defeatist attitudes are in danger of becoming self-fulfilling prophecies, and dramatically underestimate human ingenuity. In our own work at Cyan and in our conversations with peers in the safety tech sector, we see great imagination and creativity being applied to creating practical and proportionate solutions using new technology.
To enable our policymakers to make good decisions around this issue, we must help them understand these new technologies – what they are, how they work, and what they can accomplish.
Many new safety technologies, for example, are screening technologies that carry no privacy cost, or virtually none, unless or until a harm is identified. Good safety technology is far less intrusive than review by human moderators. It can even help detect and prevent harm in end-to-end encrypted messaging environments, where users’ expectation of privacy is extremely high, without damaging user privacy or requiring the creation of backdoors.
The challenges in this debate are not new, and they are not fundamentally technological in nature. The challenge is to collectively establish as a society how we can achieve the safety we all demand in an appropriate and proportionate way that fully respects rights of privacy and freedom.
This article first appeared on DIGIT.fy on 7th December 2021.