If Nick Clegg wants to fix Meta, he must tackle his human rights problem | Frederique Kaltheuner

When former UK Deputy Prime Minister Nick Clegg joined Facebook in 2018, the company was embroiled in several scandals. Cambridge Analytica had harvested personal data from Facebook profiles. UN human rights experts said the platform had played a role in facilitating the ethnic cleansing of the Rohingya in Myanmar. The company’s conduct during the 2016 US presidential election had been widely criticized. Now Clegg has taken on an expanded leadership role as the company’s president of global affairs. Will he be able to tackle the seemingly endless problems with the way Facebook – which was recently rebranded as Meta – works?

For better or for worse, Meta and Google have become the infrastructure of the digital public sphere. On Facebook, people access news, social movements grow, human rights abuses are documented, and politicians engage with their constituents. This is where the problem lies. No company should wield so much power over the global public sphere.

Meta’s business model relies on pervasive surveillance, which is fundamentally incompatible with human rights. Tracking and profiling users intrudes on their privacy and fuels algorithms that promote and amplify sensationalist, divisive content. Studies show that such content drives more engagement and, in turn, generates greater profits. The harms and risks posed by Facebook are unevenly distributed: online harassment can affect anyone, but research shows it disproportionately affects people who are marginalized because of their gender, race, ethnicity, religion or identity. And while disinformation is a global phenomenon, its effects are particularly severe in fragile democracies.

Despite his new title, Clegg won’t be able to solve these problems on his own. But there are several things he could do to protect the human rights of Meta’s users. For starters, he should listen to human rights advocates. For years, they have urged Facebook to conduct human rights due diligence before expanding into new countries, introducing new products or making changes to its services. They have also urged the company to invest more in content moderation so that it can effectively address human rights risks wherever people use its platforms.

The likelihood of online discourse causing harm, as it did in Myanmar, is inextricably linked to the inequality and discrimination that exist in a society. Meta needs to invest significantly in local expertise that can shed light on these dynamics. Over the past decade, Facebook raced to conquer markets without fully understanding the societies and political environments in which it operates. It targeted countries in Africa, Asia and Latin America, promoting a Facebook-centric version of the internet: it partnered with telecommunications companies to provide free access to Facebook and a limited number of approved websites, and it bought up competitors such as WhatsApp and Instagram. This strategy allowed Facebook to become the dominant player in many information ecosystems, with devastating consequences.

It’s also essential that Meta be more consistent, transparent and accountable in how it moderates content. Here, there is precedent: the Santa Clara Principles on Transparency and Accountability in Content Moderation, developed by civil society and endorsed (but not implemented) by Facebook, set out standards to guide these efforts. Among other things, they call for understandable rules and policies that are accessible to people around the world in the languages they speak, and for the ability to meaningfully appeal decisions to remove or leave up content.

Meta should also be more transparent about the algorithms that shape what people see on its platforms. The company needs to address the role these algorithms play in steering users toward harmful misinformation, and give users more agency to shape their online experiences. Facebook’s XCheck program has exempted celebrities, politicians and other high-profile users from the rules that apply to everyone else. Instead of setting different rules for powerful actors, social media platforms should prioritize the rights of ordinary people – especially the most vulnerable among us.

As Meta tries to build the “metaverse,” these issues will only become more acute. Digital environments that rely on extended reality (XR) technologies, such as virtual and augmented reality, are still at an early stage of development. But there are already signs that many of the same problems will carry over. VR headsets can harvest vast amounts of user data, and some VR users have already reported widespread harassment and abuse in these environments.

So far, Meta has not placed the rights of its users at the center of its business model. Doing so would mean reckoning with its surveillance-based practices and radically increasing the resources it devotes to respecting the rights of its users around the world. Rather than rebranding and pivoting to XR, where the potential for harm is likely to grow exponentially, Meta should press pause and redirect its focus to solving the very tangible problems it is creating in our current reality. The time to start is now.
