U.S. lawmakers left Facebook in no doubt this week that revelations about the impact of its Instagram app on adolescent mental health have further damaged the company’s reputation.
Democratic Senator Richard Blumenthal said the social network was “untenably delinquent” in its behavior and had “chosen growth over children’s mental health,” after the Wall Street Journal (WSJ) reported that Facebook’s internal research had raised concerns that its photo-sharing app was harming the well-being of young users.
Pressure on Facebook is likely to increase on Sunday when a whistleblower appears on American television to claim that the company is lying to the public and investors about the effectiveness of its attempts to eliminate hate, violence and disinformation from its platforms.
The whistleblower, who has submitted thousands of internal documents to the US financial regulator, will then appear at a Senate hearing on Tuesday.
The WSJ report and the whistleblower’s appearance take place against a backdrop of active attempts to curb the power of Facebook and other tech companies. Here are some of the proposals being considered to regulate Facebook.
The US competition watchdog, the Federal Trade Commission, has filed a lawsuit demanding that Facebook sell Instagram and its WhatsApp messaging app. “After failing to compete with new innovators, Facebook illegally bought or buried them when their popularity became an existential threat,” said Holly Vedova, head of the FTC’s Bureau of Competition.
An earlier version of the lawsuit was dismissed by a US judge, but even if this one proceeds, it will be a years-long legal battle. And if Facebook is forced to sell Instagram and WhatsApp, it remains an open question whether that would reduce disinformation, hate speech or harm to well-being on those platforms.
One idea floated in Social Warming, a book by the former Guardian reporter Charles Arthur, is to divide Facebook into separate geographic entities, allowing each of the new Facebook companies to focus on moderating a smaller network.
Mark Zuckerberg, founder and CEO of Facebook, has argued that only companies as large as Facebook have the resources to tackle disinformation, election interference and harmful content.
The Center for Countering Digital Hate, a campaign group based in the US and UK, says that demanding more transparency from Facebook on several fronts (for example, its lobbying, its enforcement of its own guidelines and its advertising system) would make a positive difference. Imran Ahmed, the chief executive of the CCDH, argues that Facebook also needs to be more transparent about how its algorithms can spread disinformation and create discord.
“If users knew for sure what the algorithm was doing, that there was transparency, and that governments, regulators and watchdogs could independently confirm whether Facebook’s algorithms lead to disinformation, social media companies would find it impossible to continue doing business the way they do,” Ahmed said.
Asked about transparency during Thursday’s hearing, Facebook’s global head of security Antigone Davis said the creation of bodies such as Facebook’s supervisory board underscored the company’s commitment to transparency.
Copy the Online Safety Bill worldwide
In the UK, the Online Safety Bill is a landmark piece of legislation that imposes a duty of care on social media companies to protect users from harmful content. Social media companies are also required under the bill to submit a risk assessment for content that harms users to Ofcom, the communications watchdog.
According to the Conservative chairman of a Westminster committee reviewing the bill, Damian Collins, failing to declare the Instagram research in a risk assessment would expose Facebook to substantial fines under the terms of the bill. The legislation also gives Ofcom the power to scrutinize algorithms, which tailor the content a user consumes and are the subject of much debate among politicians on both sides of the Atlantic. Facebook says it shares the UK government’s goal of “making the internet safer while maintaining the vast social and economic benefits it brings.”
Reform Section 230
Section 230 of the United States Communications Decency Act is considered a foundational text for social media networks because, in broad terms, it means that internet companies cannot be sued over what users post to their platforms, but nor can they be sued if they decide to take something down. Democratic Senator Amy Klobuchar is trying to change Section 230 so that social media companies are liable for health misinformation posted on their platforms. Along with her fellow Democratic senators Mark Warner and Mazie Hirono, she also supports broader proposals to change the law (Donald Trump called for the total repeal of Section 230), and there are other proposals as well. It is a delicate question, even before the First Amendment comes into play.
Give users more power over their data
Facebook’s all-important advertising system relies on user data, and regulators are wondering if users should have more control over that data. For example, users might have the power to withhold data if they believe a service does not meet their standards, which in turn could force social media companies to behave more responsibly.
Make sure the metaverse is properly regulated
Facebook’s next big strategic push is the metaverse, where people lead their personal and professional lives online, whether through virtual reality headsets or Pokémon Go-style augmented reality (think of a much more advanced version of the smart glasses recently launched by Facebook). There are obvious privacy implications in a virtual world hosted by Facebook, Google or Apple (Facebook’s chief policy officer, Nick Clegg, talks of several interconnected metaverses) that regulators will need to consider, although Facebook says a fully fledged metaverse is many years away. Last month Facebook launched a $50m (£37m) fund to help find solutions to these problems and said it would work with policymakers and experts.