The appearance of Adam Mosseri, head of Instagram, before a US Senate subcommittee to respond to the social network's alleged toxic effect on the behavior and psychology of minors came as a cold shower for anyone expecting even a minimal, tacit assumption of responsibility by the company, which belongs to Meta (formerly Facebook). In a written statement submitted ahead of his appearance on Wednesday, Mosseri, one of founder Mark Zuckerberg's lieutenants, shifted to "the industry" the urgent task of creating a body to address "how to verify age, design appropriate experiences and create parental controls." Passing the buck, the executive added that Instagram "would adhere to these standards" in order to benefit from the so-called Section 230, a key US internet law that shields technology platforms from liability for content posted by users.
Before the members of the subcommittee, Mosseri, with Facebook since 2008, went so far as to defend the app's advantages, stating that it has a positive impact on that age group and flatly rejecting the idea that its use could be harmful to the youngest users. "The information published about our internal investigations [the so-called Facebook Papers] has been misinterpreted," the executive told the senators. Instagram, he added, "helps in moments as sensitive as adolescence."
The hearing held additional interest: Mosseri is the first executive to appear before Congress since the September publication of the first Facebook Papers, the revelations by former employee Frances Haugen about Facebook's safety failings and malpractice, which triggered the company's worst reputational crisis. Mosseri's aim was to plug some of the scandal's leaks, but legislators, Democrats and Republicans alike, did not make it easy for him, citing data showing that Instagram, with its models of physical perfection and social success, has harmed the mental health of one in three adolescents in the US, one of Haugen's most explosive disclosures, based on internal company documents.
At its fifth hearing (TikTok and YouTube officials appeared in previous sessions), the Subcommittee on Consumer Protection and Data Protection pressed Mosseri over the additional damage caused by algorithms, which recommend to minors, based on their interests and previous searches, websites and applications harmful to their physical and psychological health, as well as over the lack of controls to keep young people off the site. In September, Instagram shelved its project to create a version for children. Weeks later, Haugen revealed the results of internal research on the platform's pernicious influence on the behavior of the youngest users, especially girls.
One of those investigations, carried out in 2019, showed that Instagram projected a negative self-image onto a third of users under 20. Another report, from 2020, revealed that 32% of adolescents believed that using the photo-sharing network had worsened their perception of their own bodies, with which they were already dissatisfied. Facebook has defended itself by insisting that the data was taken out of context and by downplaying the investigations. In successive appearances before the US Congress, the British Parliament and the European Parliament, Haugen has underlined her concern about the network's perverse effect on children.
In another example of the company's reactive policy of trying to mitigate damage, or the spreading stain of scandal, only after it has been exposed, Instagram announced on Tuesday, on the eve of the Mosseri hearing, new safety tools for minors, such as an automatic pause function to limit time spent online; the announcement was received with suspicion by the senators. Likewise, one day after Rohingya refugees filed a coordinated lawsuit against the company in the United States and the United Kingdom for promoting the spread of hateful messages, something the company acknowledged in 2018, the firm blocked profiles linked to the army of Myanmar (formerly Burma), the main promoter of the attempted genocide of that Muslim minority.
More than the distant Myanmar crisis, or the nearly forgotten Cambridge Analytica data leak scandal, it is Haugen's revelations that reverberate most in Facebook's battered image, describing the moral bankruptcy of a company that tolerated violent content, such as hate speech in countries at war, for the sake of profit, or that turned a deaf ear for years to its worst effects on teenagers. The reaction to Haugen's leaks once again confirmed to the US Congress the need, even the urgency, of regulating giants like Facebook, and not only through antitrust law, the Administration's other open battlefront against Big Tech.
The network's alleged toxicity for minors is the company's main liability, though the scandals have not hurt it economically, and it has decisively bet on the metaverse to weather the storm. To justify summoning Mosseri, Democrat Richard Blumenthal, who chairs the Senate subcommittee, cited hundreds of calls and emails from parents about their children's negative experiences on Instagram. One parent recounted how his daughter's interest in physical exercise led the app to recommend accounts promoting draconian diets, eating disorders, and self-harm. "Something has gone terribly wrong, and what surprises me most is the lack of action" in this regard, Blumenthal said at the start of the Senate session.
"It is inexcusable that Facebook, knowing the damage that Instagram was causing, took a decade to start taking measures in this regard," Blumenthal told Mosseri. The Meta executive defended himself by stating that this is a problem that goes far beyond the photo-sharing platform: "That is why it is decisive that we address the issue of their online safety as a challenge for the entire industry and that we adopt solutions and standards across the sector." The subcommittee chairman accused Instagram and the other social networks of "exacerbating and fueling the mental health crisis" the US is experiencing.
The subcommittee pointed directly to the algorithms that push recommendations harmful to minors' development. The hearing's purpose was "to hear directly from the company's leadership why it uses algorithms that present harmful content to children, leading them to dark places, and what [Meta] will do to make its platform safer," Blumenthal said before the session. The Senate body has proposed stricter data privacy controls to protect children, greater enforcement of age restrictions, and the ability for young users to delete their information online.
In mid-November, several US states announced an investigation to determine whether Meta had deliberately allowed children and adolescents to use Instagram even though it knew the platform could affect their physical and mental health.