17 June 2022

Lawmakers Want Social Media Companies to Stop Getting Kids Hooked


ALEXIS TAPIA OPENS TikTok every morning when she wakes up and every night before she goes to bed. The 16-year-old from Tucson, Arizona, says she has a complicated relationship with the social media app. Most of what flashes across her screen makes her smile, like funny videos that poke fun at the weirdness of puberty. She truly enjoys the app—until she has trouble putting it down. “There are millions of videos that pop up,” she says, describing the #ForYou page, the endless stream of content that acts as TikTok's home screen. “That makes it really hard to get off. I say I’m going to stop, but I don’t.”

Scrutiny of the relationship between screens and kids, particularly teens, has intensified in recent months. Last fall, former Facebook product manager turned whistleblower Frances Haugen told a US Senate subcommittee that the company’s own research showed that some teens reported negative, addiction-like experiences on its photo-sharing service, Instagram. The damage was most pronounced among teenage girls. “We need to protect the kids,” said Haugen in her testimony.

Proposals to “protect the kids” have sprung up across the US, attempting to curb social media’s habit-forming allure on its youngest users. A bill in Minnesota would prevent platforms from using recommendation algorithms for children. In California, a proposal would allow parents to sue social media companies for addicting their kids. And in the US Senate, a sweeping bill called the Kids Online Safety Act would require social media companies, among other things, to create tools that allow parents to monitor screen time or turn off attention-sucking features like autoplay.

Social media’s negative impact on children and teens has worried parents, researchers, and lawmakers for years. But this latest surge in public interest seems to have been ignited in the peculiar crucible of the Covid-19 pandemic: Parents who were able to shelter at home watched as their children’s social and school lives became entirely mediated by technology, raising concerns about time spent on screens. The fear and isolation of the past two years hit teens hard and exacerbated what the US surgeon general recently called “devastating” mental health challenges facing adolescents.

Safety ’Net

Supporters of the new legislation have likened the mental health harms Big Tech inflicts on kids to the dangers of cigarettes. “We’re at a place with social media companies and teenagers not unlike where we were with tobacco companies, where they were marketing products to kids and not being straightforward with the public,” says Jordan Cunningham, the California Assembly member spearheading AB 2408, along with Assembly member Buffy Wicks. The bill would allow parents to sue platforms like Instagram, TikTok, and Snap if their child is harmed by a social media addiction. Social media companies aren’t financially incentivized to slow kids’ scroll, and “public shame only gets you so far,” Cunningham says.

But unlike the physical damage of tobacco, the exact relationship between social media use and kids’ mental health remains disputed. One high-profile study that tracked increases in rates of teenage depression, self-harm, and suicide in the US since 2012 proposed “heavy digital media use” as a contributing factor. But other research has found that frequent social media use is not a strong risk factor for depression. Even the internal documents revealed by Haugen resist any simple interpretation: Facebook’s study had a sample size of only 40 teens, over half of whom reported that Instagram also helped counter feelings of loneliness. It’s also difficult to untangle the mental health harms of social media from other psychological harms in a child’s life, like health fears during an ongoing pandemic or the threat of school shootings, which takes a lasting psychological toll on students.

There isn’t a scientific consensus on what a social media addiction is, either. “I am concerned that the medical and psychological communities are still figuring out what defines a digital behavioral ‘addiction’ versus other terms like problematic media use,” says Jenny Radesky, who researches children, parenting, and digital media use at the University of Michigan C. S. Mott Children’s Hospital. In addition to her research, Radesky helps shape the American Academy of Pediatrics’ policy agenda on kids and technology. She also works with Designed With Kids in Mind, a campaign to raise awareness of how design techniques shape children’s online experiences.

Radesky advocates for a more nuanced interpretation of the relationship between social media and young people’s mental health. “People who are trying to ‘protect kids’ within digital spaces often are a bit paternalistic about it,” she says. Well-intentioned adults often regard kids as objects to be protected, not subjects of their own experience. Instead of focusing on minutes spent on screens, she suggests, it’s worth asking how kids build norms around technology. How are they integrating it with the rest of their lives and relationships? How can parents, policymakers, and voters take that into account?

But not every parent is in a position to engage in a real dialogue with their kids about screen time. This poses an equity issue: Those who work multiple jobs, for example, may not be able to provide guardrails on screen time, and their children may be more prone to overuse than the children of affluent parents.

Radesky says this is where legislation plays a key role. She testified in support of one proposal, the California Age-Appropriate Design Code. The bill, introduced by Wicks and Cunningham, would require platforms to create features in a way “that prioritizes the privacy, safety, and well-being of children.” The bill focuses on shoring up privacy protections for kids, like requiring high privacy settings and limiting data collection by default for kids. It would also prohibit the use of dark patterns and other design techniques that could compel a user to weaken a privacy setting.

The proposal has international precedent. It’s modeled on the Age-Appropriate Design Code that passed in the UK in 2020. According to the 5Rights Foundation, the privacy nonprofit that supported the UK code and is also backing the bill in California, several big tech companies have already altered their features for kids: YouTube turned off autoplay for kids by default, and TikTok no longer sends late-night push notifications to teens.

Eye Contract

Legislation on kids and social media, however, can also present privacy and enforcement challenges. Laws that require companies to identify which users are children incentivize businesses to set up age verification systems, whether in-house or through a third-party identification company. The unintended result of that is more corporate surveillance across the board.

“If you do it wrong, you end up collecting more information on everyone,” says Jason Kelley, associate director of digital strategy at the Electronic Frontier Foundation. It’s a flaw that the EFF finds in the California Age-Appropriate Design Code Act, as well as the federal Kids Online Safety Act, or KOSA, in the US Senate. KOSA would impose on platforms a “duty to act in the best interests” of children who use their services, including greater privacy protections and the requirement to allow parents and kids to turn off features like autoplay.

KOSA raises another legislative challenge: parental controls. Ideally, parental controls would be used to help a child manage screen time, a springboard for thoughtful, collaborative family discussions about their relationship to technology. But if a law demands controls that are overly broad, it puts the children of abusive parents in greater danger since it makes it easier for those parents to spy on their kids’ activities. (EFF opposes KOSA; Designed With Kids in Mind supports it.)

And then there are potential entanglements with Section 230. Any attempt to regulate social media has to reckon with the federal law that protects online platforms (including social media companies) from being held responsible for their users’ posts. While state-level legislation, like the proposals in Minnesota and California, may target retention snares like recommendation algorithms and notifications, EFF would argue that those features are a means of distributing speech, inextricable from user-generated content—and protected.

Effective legislation for kids and social media, Kelley says, would be privacy-protective for all users, regardless of age. It should recognize that “children” aren’t a monolith, either. Laws should account for different privacy and autonomy needs across ages; the needs of a 10-year-old user are different from those of a 17-year-old.

Both the Age-Appropriate Design Code and the Social Media Duty Not to Addict Act have progressed to the California State Senate after passing through the Assembly with unanimous votes.

Design Rethinking

Crucially, social media addiction bills put public pressure on companies to radically retool their design processes. The engagement-inducing design mechanisms that keep kids strung along on a platform are probably familiar to late-scrolling grown-ups too: There are the notifications that rope you back onto an app after you’ve closed it. There’s autoplay, the cascade of new and dazzling dopamine hits. There are the “live” functions that fabricate a sense of don’t-miss-this urgency, gamification mechanisms like streaks, and nudges to share. All of them lead kids (and grown-ups) deeper into an app, a sort of digital Pied Piper effect.

Tech companies “are barely scratching the surface” of what they might do to help support young users, says Munmun De Choudhury, who studies the intersection of social media and mental health and founded the Social Dynamics and Wellbeing Lab at Georgia Tech. Apps like TikTok and Instagram can be resources for teens to explore their identities, form communities, and learn about mental health. Instead of banning social media outright, she says, legislation should push companies to understand young people and to rethink the mechanisms that keep kids scrolling past their own comfort level, without restricting the ways the platforms can be helpful.

Seventeen-year-olds Saanvi Shetty and Shreya Karnik have a list of demands for legislators and tech companies. While Shetty and Karnik regularly outwit the algorithm as content creators (they run Voices of Gen Z, a youth-focused publication), they say that social media “absolutely” still damages their mental health. They want a label that notes when an Instagram photo has been edited, they want companies to crack down on misinformation, and they want to be able to curate their feeds—so they can cut out content about, say, eating disorders and only see what they actually enjoy.

When reached for comment, a Meta representative referred WIRED to a statement clarifying its internal findings about teens’ use of Instagram. TikTok spokesperson Brooke Oberwetter pointed out safety features like screen time management settings that, by default, silence notifications at a certain hour for teens. When asked if TikTok is working on more features specifically designed for the safety and well-being of younger users, she said, “We’ll be looking at bringing on more features like this in the future.” On June 9, TikTok announced a new feature that will prompt users to take a break after a certain amount of time has passed.

Tapia, the teen from Tucson, wants more opportunities to pause. It would give her more time to reflect on whether she actually wants to keep scrolling, or is just being strung along on an app. It would have been helpful, she said, one night when she was scrolling on TikTok in her room, and her mom asked if she wanted to watch a movie together. Tapia said no. Later, she went to the kitchen for a glass of water and saw her mom, her dad, and her two younger brothers snuggled together in front of the TV. Oh my God, she remembers thinking. I just chose TikTok over my family. She closed the app and joined them on the couch.
