An AI-powered tool designed to target trademark violations on social media was used to silence critics of SXSW, the massive annual tech, music and film conference in Austin, Texas.
Each year in March, SXSW takes over Austin. This year, thanks to the demolition of the city’s aging convention center, events sprawled to more locations than usual, from hotel ballrooms to vacant lots. But the character of SXSW has changed, growing more corporate and less accessible since its relatively humble origins in 1987, and today it has numerous detractors. This year some of those dissenting voices found themselves targeted by BrandShield, a “digital risk protection” service that claims to use artificial intelligence to automate the process of identifying and removing social posts that misuse trademarks.
Among the groups to receive a social media takedown notice was Vocal Texas, a nonprofit dedicated to ending homelessness, HIV, poverty and the war on drugs. On March 12, members of the group set up a mock encampment in downtown Austin, to draw attention to the possessions that unhoused people can lose during “sweeps,” when police and city officials clear out and destroy or confiscate their tents and other lifesaving supplies.
An example of an image deleted by Instagram
An Instagram post by Vocal Texas read, “SXSW means unhoused Austinites in downtown face encampment sweeps, tickets and arrests while the City makes room for billionaires and corporations to rake in profits.” The accompanying image promoted an art installation called “Sweep the Billionaires” and did not use SXSW’s logos.
Even so, the mere mention of SXSW was apparently enough to trip BrandShield’s trademark detection service, resulting in the post’s fully automated removal from Instagram. Cara Gagliano, a senior staff attorney who specializes in trademark and intellectual property law at the Electronic Frontier Foundation, said that posts like these do not violate SXSW’s trademark.
“You’re allowed to use a company’s name to talk about the company, right?” Gagliano told 404 Media. “How else are you going to do it?”
Gagliano noted that trademark law has specific carveouts for exactly this kind of critical speech. “Examples like that, where it's not (for example) advertising a concert with a name similar to South by Southwest ... are pretty clearly over-enforcement,” she said.
The EFF interceded in March 2024, when the Austin for Palestine coalition received a cease-and-desist letter from SXSW accusing them of infringing on the conference’s trademark and copyright. The coalition, which was involved with organizing successful protests against the festival’s sponsorship by the U.S. military, had made social media posts featuring SXSW’s trademarked arrow logo reimagined with bloodstains, fighter jets, and other warlike imagery. The EFF wrote a letter on the coalition’s behalf, and the group never heard from SXSW again.
But Gagliano explained that this situation is different from the takedown notices sent by BrandShield. “When it's a threat sent to ... the person who made the allegedly infringing use, them going away is a victory for the client because nothing bad happens to them, but when you have these takedowns ... [while] it's good that they didn't go even further and file a lawsuit, they also don't have any incentive to retract the complaint, and so the content stays down.”
This year, many of the protests and “counter events” were organized by a very loosely associated coalition of groups called Smash By Smash West, which included Vocal Texas along with many others, from musicians and independent movie directors to event venues.
404 Media reached a representative of Smash By Smash West via Signal who used the name “Burnice.” We agreed to protect their anonymity, but verified that they were involved with the organizing of Smash By events. Operating since 2024, Smash By has no leaders and essentially anyone can organize an event under its umbrella. This year, there were over 100 events, according to Burnice. “It is a decentralized call to action and a platform that enables promotion and connecting together all of these different events.”
Smash By Smash West provided us with dozens of screenshots of Instagram takedown notices as well as many of the posts which had been removed.
BrandShield’s software enables mass reporting of potentially infringing content, with reports in turn evaluated by Instagram’s automated moderation systems. Despite the obviously automated nature of these takedowns, BrandShield claims to use a “dedicated enforcement team of IP lawyers” to ensure that takedowns are “timely, targeted and fully compliant.”
The BrandShield website reads, “Whether it's a distorted logo, a counterfeit image, or a cloned storefront, our proprietary image recognition technology scans marketplaces, social media, paid media, and mobile environments to catch threats at the source.”
However, despite these assurances, it seems clear that BrandShield’s trademark enforcement paints with a very broad brush, and seems incapable of distinguishing between trademark violations and protected free speech. Although BrandShield initially connected us with their public relations department, they did not respond to repeated requests for comment, including an emailed list of inquiries.
Instagram’s automatically generated takedown notices include the sentence, “If you think this content shouldn’t have been removed from Instagram, you can contact the complaining party directly to resolve your issue.” There is, however, a link allowing the recipient to appeal the takedown; whether the post returns is then left to Instagram moderators’ discretion.
Gagliano explained that this is a crucial area where trademark differs from copyright law. Thanks to the Digital Millennium Copyright Act (DMCA), there’s a clear (though often arduous) path to contesting false claims of copyright violations which allows content creators to get their posts put back. There’s no similar, mandatory pathway written into trademark law. “There's no counter notice process where they say, ‘Okay, you told us this is fair use, so we'll put it back up.’ And that's a really frustrating thing,” Gagliano said.
Mathew Zuniga, who does most of the booking for Tiny Sounds Collective, an organization that throws free DIY music shows and publishes zines, said he struggled with the process offered by Instagram after a post about a Tiny Sounds Smash By concert was taken down.
“I tried to do it,” he said. “It didn't really go through.”
When he reposted the same image and text, but without tagging Smash By Smash West’s Instagram account as a collaborator, the post remained online.
“I think it’s silly, as if these DIY shows in a bookstore are pulling anyone away from South By,” Zuniga said. “I think it was more of a deliberate attempt to take down anti-South By Southwest rhetoric online.”
When reached for comment, SXSW’s PR team sent back a prepared statement, noting that the law requires them to “take reasonable steps” to enforce their trademarks.
“SXSW’s efforts are not intended to limit commentary, criticism, or independent reporting, and we respect the importance of free expression,” the spokesperson’s statement continued. “We use third-party services, including BrandShield, to help identify potential issues at scale, and we recognize that errors can occur.”
By contrast, Burnice explained that, rather than trying to steal SXSW’s trademark, Smash By Smash West makes it a condition that participants can’t describe their events as free or alternative SXSW events. “Smash By ... was an attempt to politicize the DIY scene, the ‘unofficial’ South By shows, and make them explicitly anti-South By.”
Smash By provides alternative logos, some of which are wholly unique but others based on parodies or “détournements” of the SXSW logo, similar to what the Austin for Palestine coalition did in 2024. Burnice expressed frustration with how automated this year’s quashing of dissent was.
“All of that is actually just happening by robots talking to robots,” they said. “It's an AI system that mass reports these accounts, and then, you know, probably an AI system at Instagram that just sorts through, and approves or rejects.”
For her part, Gagliano expressed skepticism over whether artificial intelligence plays a major or important role at companies like BrandShield beyond its current popularity as a tech buzzword. “I haven't seen any kind of change in the volume of requests for help that we're getting, and this is one thing where I'm a little skeptical that it's really made much difference, because they were already using automated tools before, and I think in any instance, the tools are not going to be able to reliably determine what's actually infringement.”
Armin Himmelrath at Der Spiegel writes up German publisher Kohl-Verlag’s wonderful new line of textbooks for kids with learning disabilities! Kohl’s been promoting these just in the last month or so. [Spiegel, in German, archive]
The authors and illustrators don’t seem to exist. One author photo turned out to be a stock image.
The books seem to have passed through editors who don’t exist either — because the books are AI slop, with really obvious AI-style errors that would have been spotted instantly if a single human had looked.
One picture has a friendly teacher in a classroom. She’s got six fingers on one hand. How long is it since we saw an AI picture with six fingers in the wild? The picture also has a child’s head on a bookshelf.
The kids the books were for spotted the errors straight away. “Eww, there’s a head on the shelf!”
There’s an AI hallucination zoo with baby elephants without trunks. And some weird animal-thing that might be the world’s most messed-up capybara.
The worksheets feature confusing or impossible problems. There’s one picture that shows the kids how to add small numbers by counting dots. The text “5+2=” has an image showing four dots and two dots.
One page has the heading “We count to 10”, and you’re supposed to count the objects. There are 10 of none of the objects. There’s 23½ of the candies. Yeah, they just left the half a candy there.
Kohl also did an AI slop textbook on World War II for ages 8 to 11. There’s a great picture with Adolf Hitler glaring out of the image. He’s holding a pen and apparently writing a book which says “MEIN KAMPF” in big letters on the page, written upside down from his perspective. Also, the book has two spines. Also, there’s a map behind Hitler that says western Europe is Russia.
The author of the history textbook got in touch with Der Spiegel. He says he did write the text without AI. And he did not do the AI slop illustrations. Those were picked by the publisher.
Kohl has since removed the books from sale. So that’s nice. They’ll also examine their editorial procedures. For instance, they might put some into place. [WDR, in German]
A year and a half ago, we teamed up with Lennon Torres, senior campaign manager at The Heat Initiative and LGBTQ+ advocate, to write an article for The Atlantic, “Social-Media Companies’ Worst Argument” (reposted here with no paywall). Together, we refuted the social media companies’ claims that using these platforms is net-positive for teens in historically disadvantaged communities and that regulation would do more harm than good for adolescents in these groups.
Since then, however, these claims have continued to surface as an argument against regulation. In the piece below, originally published by The Hill, Lennon draws on her own experience as a trans woman who grew up sharing her life on social media. She argues that the social media companies use LGBTQ+ kids as an excuse to avoid accountability and reminds the public that despite what the companies claim, “queer people are the ones these platforms fail first and protect last.”
Thank you to Lennon and The Hill for allowing us to share this piece directly with After Babel’s readers. We hope you’ll read it and share it widely.
– Jon & Zach
Credit: Iv-olga/Shutterstock.com
Don’t Let Big Tech Hide Behind a Rainbow Flag
With Big Tech companies recently losing two key lawsuits over the harm they do to youth — both in rulings they have promised to appeal — a false narrative has begun to re-circulate. The claim is that requirements making digital communities safer for young people will somehow undermine queer expression.
Here is my message, coming from a transgender woman who grew up with and was badly harmed by exploitative social media: Do not let Big Tech hide itself behind a rainbow flag. The truth is, queer people are the ones these platforms fail first and protect last.
Many gay, transgender and queer kids lack supportive families and affirming schools. To them, digital spaces may seem like a lifeline — a place where they can be themselves. Unfortunately, those digital spaces are often built on the same logic that once targeted kids with cigarettes: Maximize use, minimize accountability and monetize vulnerability. These platforms were designed not to empower us but to get and keep us hooked.
In the social media addiction trial that recently wrapped up in Los Angeles, plaintiff attorney Mark Lanier asked Meta whistleblower Arturo Béjar how Facebook’s leadership dealt with the issue of “addiction.” Béjar replied: “They changed the name of it” — specifically, they stopped calling it “addiction” and called it “problematic use” instead. He added, “You couldn’t talk about it.”
I joined social media at age 13, just as the iPhone became the center of adolescent life. I was attending a performing arts school after five years at a public school where I was teased for being too feminine. I turned to Instagram, Facebook, Snapchat, and YouTube — platforms that gave me access to a community I had never had. But this came with life‑threatening side effects I couldn’t yet see clearly.
Online, I found attention — first from classmates, then from strangers. When I started working professionally as a dancer, hundreds of thousands of followers watched my every move. What felt at first like affirmation quickly became the only place I thought I had value. I got so consumed with how I was being perceived that authenticity didn’t stand a chance.
At some point, it stopped mattering whether the comments were praise or cruelty — what mattered was the hit. I began refreshing comments in bathroom stalls between classes and rehearsals, scrolling before bed and learning how to curate myself for algorithms I didn’t understand. The behavior was compulsive. I didn’t know to call it “addictive design” — I just knew I couldn’t stop scrolling.
Chasing the algorithm for validation wasn’t the only risk. The real danger often arrived in my private messages. Adults I didn’t know approached me with explicit messages and nude images. I was only 13, and I did not yet understand what grooming was. I did not have the language for it — I only knew that the attention I could not find offline seemed to appear online.
I know now that the platforms and their algorithms were delivering me up to these predatory strangers, serving them my profile as engagement bait.
The Los Angeles lawsuit pointed to internal Meta documents showing that Instagram’s “Accounts You May Follow” feature actively connects predatory adults to minors: “In 2023, this tool recommended to adult groomers ‘nearly 2 million minors in the last 3 months’ — and ‘22 percent of those recommendations resulted in a follow request.’”
Employees warned leadership. Leadership rejected fixing the system, maintaining a 17-strike policy for predators — including sex-traffickers — before suspending the offenders’ accounts.
The architecture of these platforms placed me in the path of adults who saw opportunity in a lonely queer kid. Because queer kids come to online spaces for identity and survival, we are the ideal product: highly engaged, highly vulnerable and highly profitable.
Big Tech claims to defend queer kids’ rights by opposing regulations like requiring age-appropriate design and limits on addictive features. In reality, they are using us as a shield to avoid accountability. They weaponize our dependence on online connection to argue that any safety guardrail is “anti‑LGBTQ.” They warn lawmakers that protecting kids will erase queer expression. This is a lie, and a strategic one.
In reality, features that harm young people — endless scroll, autoplay, compulsive engagement loops, recommendation pipelines driven by surveillance data, settings that expose kids to ill-intentioned adult strangers — do not create queer communities. They create dependency. They bury our identity in algorithms optimized for outrage, objectification and profit.
Queer kids do not need online platforms that claim to celebrate us in Pride campaigns while exploiting and exposing us to harassment at disproportionate rates. We need them to prioritize our safety and mental health.
I know this because I lived it. Only after a decade of anxiety, addictive patterns, algorithmic harm, grooming, and harassment could I finally withdraw from exploitative social media. Even then, the choice felt impossible. Most of my childhood had unfolded online. The most intimate parts of my life — my gender transition, top surgery, and coming out — became content opportunities to me. That is the cruelty of these platforms: They teach you to equate visibility with safety, engagement with belonging, and exploitation with connection.
Regulation is not a threat to queer expression but a prerequisite for queer safety. It won’t solve every problem, but it will do the first and most important thing: force the companies profiting from our attention to finally take responsibility for the harm they have caused.
Two weeks ago, on March 25, a jury in Los Angeles found Meta and Google liable in a landmark case. The jurors determined that the parent companies of Instagram and YouTube had acted with “malice, oppression, or fraud,” addicting and harming the young plaintiff, known as KGM.
Just one day prior, a jury in New Mexico found Meta liable for “misleading consumers about the safety of its platforms and endangering children.”
Many kinds of evidence were presented to the juries, from internal documents and research done by the companies themselves to testimony from experts and former employees. The evidence revealed that the companies had intentionally designed their products in ways they knew would harm children.
The companies used a two-pronged defense strategy. First, they blamed others: It was KGM’s fault for opening accounts before she was 13. It was her parents’ fault that she got addicted and depressed. Whatever harm happened, we’re just a neutral platform! The jury did not respond well to this strategy.
Second, they claimed that there is no scientific evidence that their platforms cause harm to adolescent mental health. Mark Zuckerberg has repeatedly asserted that the academic evidence is merely correlational. He grants that heavy users are more depressed, but notes that correlational evidence cannot prove that social media caused their depression.
There are thousands of similar cases coming, and we can be confident that the companies will lean hard into this strategy: denying any scientific evidence of causation. When making such claims, defenders of social media usually refer to an essay in Nature that made similar assertions. But as we showed in The Anxious Generation, and in our academic articles and many posts here on After Babel, there is abundant scientific evidence of causation. We are writing this post to make it easier for everyone to learn about that evidence.
The editors of The World Happiness Report (WHR) recently asked us to put all of the evidence together. The annual report shows how countries vary on measures of well-being. Each year there is a special topic or focus, and for the 2026 report, the focus was on social media’s effects on well-being. We wrote the target essay laying out the case for harm, and other authors brought a variety of perspectives.
Knowing that thousands of jury trials were on the horizon, we laid out our argument like a hypothetical civil trial, asking our imagined jury this question: Are social media platforms dangerous consumer products whose design has led to a variety of harms to young people? We call this the Product Safety Question. We present seven lines of converging evidence showing that these platforms are causing harm.
At the end of our chapter, we show that the levels of harm uncovered while answering the Product Safety Question are so high that we can also answer a different but related question: Are social media platforms causing harm to entire populations? We call this the Population Harm Question, and it’s at the center of some states’ and school districts’ cases.
Taking the Companies to Trial
In our hypothetical case against the companies — particularly Instagram, TikTok, and Snapchat — we begin with the apparent victims, the people who allege harm: Gen Z, the cohort born roughly between 1996 and 2011. They were the first generation to go through puberty with social media in their pockets, accessible at all times through smartphones beginning in the early 2010s. They have the clearest view of what happened to them and their peers.
We then turn to those who spend the most time with young people — parents, educators, and clinicians. They also witnessed the effects of social media across many young people, over many years.
If we could call all of these groups to the stand, what would they say? We offer a brief synopsis of each line of evidence below. You will find far more detail in our WHR chapter.
Line 1. What the Victims Say
Across surveys in multiple countries, many young people report that social media has harmed them directly and indirectly. They describe widespread experiences of cyberbullying, sexual exploitation, sleep disruption, lower confidence, and worse mental health. They also express strikingly high levels of regret toward the major platforms they have used for years. In a Harris Poll survey of members of Gen Z, nearly half reported that they wish that TikTok, X (Twitter), and Snapchat were never invented — despite using those platforms for several hours a day.
Figure 1. Nearly half of Gen Z young adults wish that X, TikTok, and Snapchat were never invented. Source: Harris Poll, via The New York Times
Internal surveys conducted by Meta found similar results. In their own research, they found that “teens blame Instagram for increases in the rates of anxiety and depression among teens.” One in three teen girls said Instagram made their body-image issues worse (20% said it made it better); and 13% of adolescents reported unwanted sexual advances on Instagram in the previous seven days.
In a courtroom, it is powerful when a victim points to the defendant and says “he did it.” In survey after survey, and in open-ended interviews, Gen Z points to social media platforms as the culprit.
Of course, the victims in a court case could be mistaken or could be lying, so direct positive identification is strengthened when corroborated by eyewitness testimony. The same logic applies here, so let’s move to our second line of evidence and call a variety of witnesses to the stand.
Line 2. What the Eyewitnesses Say
We next turn to the adults who spend the most time with young people. Parents describe changes in their children’s mood, sleep, self-esteem, and friendships; teachers report worsening distraction, attention, and academic performance; and clinicians say social media is exacerbating anxiety, depression, and addiction-like behavior in their young clients.
A 2025 Pew survey of U.S. teens and their parents found that 44% of parents identified social media as the single most negative influence on teen mental health, ahead of “technology generally.” Similarly, the 2025 UK survey by More in Common asked parents to identify what most negatively affects their own children’s mental health. The top response was “social media use/excessive screen time,” followed by concerns closely linked to digital technology, including exposure to harmful online content, bullying, low self-esteem, and lack of sleep.
In our own Harris Poll survey, majorities of parents said that, when thinking about their own children, they wished the major social media platforms had never been invented. And according to findings disclosed in litigation, Meta’s own research found that large majorities of clinicians believed social media worsens anxiety and depression in adolescents.
Figure 2. 1,013 U.S. parents were asked to reflect on the role of various products in their children’s lives by considering the sentence: “When I think about my child’s experience growing up, I wish ____ had never been invented.” A majority of parents said they wished social media had never been created. For TikTok and X, 62% of parents expressed regret — higher than for alcohol and equal to guns. Source: Harris Poll.
Line 3. What Company Insiders Say
The attorney for the plaintiff might then call the defendant to the stand and turn to the direct evidence. Suppose, for example, that the attorney had obtained, through pre-trial discovery, a series of text messages from the defendant describing what he was planning on doing, and then, afterward, talking about what he had done.
In our case against the social media companies, we have the equivalent of hundreds of such text messages in the form of internal company emails, messages, memos, documents, presentations, and more.
Here are just a few of the quotations from internal documents revealing what company insiders — employees as well as external consultants hired to offer advice — believed.
“Oh my gosh yall IG is a drug […] We’re basically pushers […] We are causing Reward Deficit Disorder bc people are binging on IG so much they can’t feel reward anymore […] like their reward tolerance is so high […] I know Adam [Mosseri] doesn’t want to hear it — he freaked out when I talked about dopamine in my teen fundamentals leads review but its undeniable! Its biological and psychological […] the top down directives drive it all towards making sure people keep coming back for more. That would be fine if its productive but most of the time it isn’t […] the majority is just mindless scrolling and ads.”
One senior data scientist at Meta noted that “There are reasons to worry about self-control and use of our products,” presenting a “quick rundown of evidence” – including “[a]n experiment [which] found that a 1-month break from Facebook improved self-reported wellbeing.” In response, another senior data scientist at Meta (who also holds a PhD in neuroscience, and taught a university course on addiction) warned: “It seems clear from what’s presented here that some of our users are addicted to our products. And I worry that driving sessions incentivizes us to make our product more addictive, without providing much more value. How to keep someone returning over and over to the same behavior each day? Intermittent rewards are most effective (think slot machines) reinforcing behaviors that become especially hard to extinguish – even when they provide little reward, or cease providing reward at all.”
Another document acknowledged “[a]round 10,000 user reports of sextortion each month,” and “that 10k monthly reports likely represents a small fraction of this abuse as this is an embarrassing issue that is not easy to categorize in reporting.”
Yet another document stated that “Compulsive usage correlates with a slew of negative mental health effects like loss of analytical skills, memory formation, contextual thinking, conversational depth, empathy, and increased anxiety,” in addition to “interfer[ing] with essential personal responsibilities like sufficient sleep, work/school responsibilities, and connecting with loved ones.”
These quotes barely scratch the surface of what the internal documents reveal, and we cover more from this line of evidence in our WHR chapter. You can also find a large selection of disturbing quotations at TechOversight.org, and you can find our compilation of 35 studies carried out by Meta at MetasInternalResearch.org.
The evidence is clear: The companies and their leaders knew from their own research that they were harming millions of children and adolescents. As former Facebook president Sean Parker said, they knew what they were doing, and they did it anyway.
These three lines of evidence taken together, we believe, answer the Product Safety Question and demonstrate that these products are not safe for minors. Few parents who knew about the above evidence would want their children to continue using these products. That may be why many tech executives do not let their children use their own products: they know. But there’s no need to stop here; the forensic evidence further strengthens our case.
In Lines 4 through 7 of the evidence, we focus on the heart of the academic debate over social media’s effects: whether heavy social media use (~5 or more hours per day) is causing internalizing disorders (such as anxiety and depression) among adolescents (especially girls).1 There is wide agreement among academic researchers that heavy users of social media are more likely to be depressed and anxious than light users, but does that mean that social media causes those outcomes, or is it merely correlated with them? The claim that it is mere correlation is at the heart of the social media companies’ legal defense strategy.
To address that question, we examine the four major bodies of academic research in turn: cross-sectional studies, longitudinal studies, randomized controlled trials of social media time reduction, and natural experiments.
At this point in our case, we are calling on the forensic experts to give their scientific analysis and opinions of the evidence, which can help connect the defendant to the alleged harm. In a criminal trial, this might be a ballistics or DNA expert; in our case, we’re calling the academic researchers to the stand. They’ve studied social media and internalizing disorders in teens for more than a decade, and though their access to data is more limited than that of the companies, their expert analysis consistently links the defendants to the alleged harm.
Line 4. Cross-sectional Studies
The largest body of academic evidence is cross-sectional, which means that data is collected at a single time (as with a survey), with no experimental manipulation. While these studies cannot establish causation on their own, they are an important starting point: they ask whether heavy users of social media are in worse mental health than light users or non-users. Across hundreds of studies, the answer is generally yes. The main point of contention, however, is not whether an association exists, but how seriously to take it.
In one of the most informative studies, Kelly et al. (2019) analyzed data from 10,904 14-year-olds in the UK Millennium Cohort Study and found that adolescents who spent five or more hours a day on social media were about twice as likely to meet criteria for depression as those who used it for less than one hour a day. Among girls, the relative risk was even higher at 2.65 — comparable to sleep deprivation and online harassment, and larger than the risk elevation associated with poverty.
Figure 3. Adolescents who spent five or more hours per day on social media were about two times more likely to meet criteria for depression than those who used it for less than one hour per day. Source: Kelly et al. (2019)
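To make the relative risk figure concrete: it is simply the rate of depression among heavy users divided by the rate among light users. Here is a minimal sketch in Python, using made-up counts chosen to reproduce the 2.65 figure, not Kelly et al.’s actual data:

```python
# Illustrative relative-risk calculation. The counts below are
# placeholders chosen to produce RR = 2.65; they are not the actual
# figures from Kelly et al. (2019).

def relative_risk(exposed_cases: int, exposed_total: int,
                  unexposed_cases: int, unexposed_total: int) -> float:
    """Risk among the exposed group divided by risk among the unexposed."""
    return (exposed_cases / exposed_total) / (unexposed_cases / unexposed_total)

# Hypothetical: 265 of 1,000 girls using social media 5+ hours/day meet
# criteria for depression, versus 100 of 1,000 using it under an hour/day.
rr = relative_risk(265, 1000, 100, 1000)
print(f"Relative risk: {rr:.2f}")  # -> Relative risk: 2.65
```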
Additional studies reinforce this conclusion. These elevated risk findings were central to the U.S. Surgeon General’s warnings in 2023 and 2024.
Even the studies that our critics cite as finding “no association” between social media use and internalizing disorders in teens look much more concerning when the data is analyzed more carefully, as we show in Exhibit J of our WHR essay. In many cases, researchers blend together variables — for example, different technologies (e.g., email and social media), different outcomes (e.g., general feelings of wellbeing and anxiety), or different populations (e.g., adults 18+ and teen girls) — in ways that dilute the relationship at the center of the debate: heavy social media use associated with internalizing disorders, especially among adolescent girls. Analyses that unblend these categories almost always reveal that heavy teen social media users — and especially girls — are at substantially elevated risk for depression and anxiety. (See Haidt & Rausch, preprint for a deeper examination of blending).
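The dilution effect of blending is easy to demonstrate with a small simulation. In the sketch below (all numbers are synthetic, for illustration only), a strong association within one subgroup, standing in for teen girls, shrinks once that subgroup is pooled with another, standing in for adults, where no association exists:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Subgroup A (stand-in for teen girls): hours on social media genuinely
# predict a depression score. All values are synthetic.
hours_a = rng.uniform(0, 8, n)
dep_a = 0.5 * hours_a + rng.normal(0, 1, n)

# Subgroup B (stand-in for adults): same mean depression, no association.
hours_b = rng.uniform(0, 8, n)
dep_b = rng.normal(2, 1, n)

print(np.corrcoef(hours_a, dep_a)[0, 1])      # within-group: roughly 0.75

pooled_hours = np.concatenate([hours_a, hours_b])
pooled_dep = np.concatenate([dep_a, dep_b])
print(np.corrcoef(pooled_hours, pooled_dep)[0, 1])  # pooled: roughly 0.45
```

The within-group correlation is strong, but the pooled estimate is substantially diluted, which is the pattern the blending critique describes.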
Cross-sectional studies consistently show that heavy adolescent social media users are at substantially elevated risk for depression and anxiety. Next, we turn to the longitudinal studies, which help address the question of temporal order.
Line 5. Longitudinal Studies
The longitudinal literature on social media and mental health allows researchers to follow individuals over time and can help clarify whether social media use predicts subsequent changes in mental health, whether poor mental health predicts subsequent social media use, or some combination of the two. The available longitudinal studies present clear and consistent evidence that social media use predicts later depression.
The strongest evidence comes from recent large-scale studies. An analysis of a sample of 6,595 U.S. adolescents, ages 12–15, found that heavy social media use predicted later increases in internalizing symptoms. Another study, using the longitudinal Adolescent Brain Cognitive Development (ABCD) dataset, showed that increases in social media use predicted subsequent increases in depression. Meanwhile, other researchers using the ABCD dataset showed that earlier internalizing disorders failed to predict subsequent social media use.
Some studies also find bidirectional relationships (i.e., higher social media use today predicts worse mental health a year from now, and worse mental health today predicts higher social media use a year from now), and within those studies, the forward relationship from social media use to later depression remains robust.2
In other words, this second line of forensic evidence shows that not only are heavy users of social media doing worse, at any given time (that’s the cross-sectional finding); it’s also the case that those who use more social media at one point in time are generally found to be worse off at later times.
Line 6. Randomized Controlled Trials of Time Reduction
The most powerful tool for measuring causation directly is an experiment that randomly assigns participants to either an intervention or to a control condition and then compares the outcomes. While researchers do not, for ethical reasons, ask one group of kids to start using social media at age 10 and another to stay off it until age 16, there are numerous experiments where young adult participants have been asked to either reduce their social media use (intervention) or continue their use as usual (control condition).
A recent meta-analysis by Burnell et al. (2025) of 32 such experiments has shown that reductions of social media use caused substantial declines in symptoms of internalizing disorders like depression and anxiety — even though most of these studies lasted only a week or two.3
The experimental results are all the more remarkable given that these studies are not designed to measure impacts that could be produced by entire communities reducing their use of social media. For example, if all students in a given school district ceased to use social media, that would leave more overall time for in-person interactions with peers and therefore the beneficial impacts on mental health could be even stronger, including for students with low levels of social media use. Furthermore, kids who do not use social media would cease to be penalized for their inability to socialize with their peers on these platforms, which in turn might help improve their mental health.
Even Meta’s own internal research found evidence of benefits from reducing social media use. In a 2020 Facebook deactivation experiment, code-named Project Mercury, Meta found that users who stopped using Facebook or Instagram for just one week reported lower feelings of depression, anxiety, loneliness, and social comparison. One internal researcher warned that keeping such findings secret would resemble the refusal by tobacco companies to admit that their own research revealed severe harms of cigarette consumption.
This sixth line of evidence is arguably the most damning: experiments using random assignment provide consistent causal evidence that when users reduce the amount of time they spend on social media, their mental health improves. The defendants themselves found this in their own internal experiments, and they tried to bury it.
Line 7. Natural Experiments
Our final line of evidence comes from natural experiments. High-speed internet made social media much more appealing (photos and videos would load faster), so when some regions of a country got broadband connections a year or two before other areas, researchers can compare: did the mental health of young people in those early-adopter regions change before that of young people in the later regions? These studies are especially valuable because they offer population-level evidence that is not available from short-term laboratory experiments.
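The comparison described above is, in essence, a difference-in-differences design: subtract the change observed in late-broadband regions (the background trend) from the change observed in early-broadband regions. A minimal sketch, where every figure is an invented placeholder rather than a result from the studies discussed below:

```python
# Difference-in-differences sketch of the natural-experiment logic.
# Every figure here is an invented placeholder, not a result from the
# German, Italian, Spanish, or U.S. studies discussed in the text.

early_before, early_after = 10.0, 14.0  # % reporting poor mental health,
                                        # early-broadband regions
late_before, late_after = 10.5, 11.0    # same calendar periods, regions
                                        # still waiting for broadband

change_early = early_after - early_before  # 4.0 points: broadband + trend
change_late = late_after - late_before     # 0.5 points: background trend only

effect = change_early - change_late        # 3.5 points attributed to broadband
print(f"Estimated broadband effect: {effect:.1f} percentage points")
```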
Across the major natural experiments we reviewed — in Germany, Italy, Spain, and the United States — the evidence indicates that the spread of high-speed internet worsened mental health, with the harms falling most heavily on young people, especially women and adolescent girls. Documented effects include declines in self-reported mental health, increases in hospital-diagnosed mental disorders, and rising suicide rates. Additional natural experiments point in the same direction.4
This final line of forensic evidence may be the most policy-relevant of all, because it allows us to examine what happened as these technologies actually spread through entire populations. It comes closest to the ideal experiment of having one group of adolescents gain access to always-available social media while another does not. And the results are again clear: as high-speed internet spread — and with it, ever-present social media — mental health outcomes worsened, especially for young people and especially for girls.
Our seven lines of evidence make the answer to the Product Safety Question clear: social media platforms are not safe for young people. These consumer products were designed — intentionally — to maximize the number of children and adolescents who would be drawn to them and the amount of time that each would spend on them. The leaders and researchers at these companies know that heavy users of social media suffer many indirect harms (mental health problems, body image issues, addiction), and that even light users are often exposed to dangerous direct harms (such as sextortion, or death from purchasing fentanyl-laced drugs or performing a dangerous challenge).
The Population Harm Question is a different one. It is quite possible for a consumer product to be extremely dangerous and yet have no effect on the aggregate statistics of a nation. That would be the case for any product that is used by only a tiny portion of the population. But social media platforms are arguably the most widely used products among young people in the developed world, used regularly by a large majority of adolescents in the United States. In fact, a third of American adolescents say that they are on one of the major platforms “almost constantly.” So if several of the product safety concerns we have documented are affecting more than 20% of all users (as with self-reports of sleep deprivation and mental health damage), that quickly adds up to a population-level effect.
When the documented direct and indirect harms are scaled to the number of young people actually using these products, the number of adolescents harmed each year likely reaches into the millions in the U.S. alone. Arturo Béjar’s internal Instagram research found that 13% of users ages 13–15 reported receiving unwanted sexual advances in the previous week — which, if the U.S. is similar to the global average, would imply that about 5.7 million adolescents experience this in any given week. This same research also found that 10.8% of Instagram users ages 13–15 reported being cyber-bullied in the previous week. The number of adolescents experiencing direct harms from social media likely exceeds 10 million each year in the United States alone. (See the subsections in our WHR chapter “Direct harm to millions” and “Indirect harm to millions” for more extensive examples and estimates).5
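The scaling step itself is one multiplication: weekly prevalence times the number of users in the age band. A sketch, with the user-base figure treated as an explicit assumption, since the chapter’s exact input numbers are not reproduced here:

```python
# Back-of-the-envelope population scaling. The user-base number is an
# assumption for illustration (chosen so that 13% yields roughly the
# 5.7 million cited above); it is not a figure reported by Meta.

weekly_prevalence = 0.13          # share of 13-15-year-old users reporting
                                  # unwanted advances in the prior week
assumed_users_13_15 = 44_000_000  # hypothetical count of users in that age band

harmed_per_week = weekly_prevalence * assumed_users_13_15
print(f"~{harmed_per_week / 1e6:.1f} million adolescents per week")  # ~5.7 million
```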
In other words: the answer to the Population Harm Question is very likely to be “yes.”
The evidence we have presented does not prove that any particular plaintiff is correct, and it does not mean that evidence does not exist on the other side. We have been engaged in a debate with other researchers for seven years now, and you should read their arguments to hear the other side. Scientific debates are never closed; there is always the possibility of new evidence, or of discovering new complications and interactions.
But the next time you hear Mark Zuckerberg or anyone else say that there is “no evidence” of harm, or that the evidence is merely “correlational,” send them a link to this essay, or to our full WHR chapter. There is now a great deal of evidence, from many sources (including Meta’s internal research), using many methods.
Social media companies have been harming millions of children and adolescents for many years now. Until very recently, they faced no liability for these harms, and they never faced a jury. But now the courtroom doors are finally open and the evidence is being seen — by juries and the world. As the punitive damages increase, there will be design changes to the platforms. And there will be justice.
We focus here and in the WHR essay on internalizing disorders in adolescents, specifically depression and anxiety. There are, of course, many other important questions that deserve attention, including social media’s effects on cognition, attention, sleep, and social skills. But the central and most heated debate among academic researchers since Jean Twenge’s 2017 article in The Atlantic has been whether and how social media use is linked to depression and anxiety among adolescents, especially girls.
The above facts contradict one of the most influential opponents of social media concerns, Candice Odgers, who has repeatedly asserted that social media use does not predict mental health in longitudinal studies. Odgers also asserted that when there is any temporal relationship revealed by longitudinal studies, it is that of mental health problems predicting later social media use, therefore suggesting reverse causality. Statistician Alec McClean and Jakey Lebwohl showed that the studies Odgers cites actually provide little if any evidence in her support (see “Does Social Media Use at One Time Predict Teen Depression at a Later Time?”). Furthermore, they point out that Grund & Luciana 2025 revealed that internalizing psychopathology was not associated with later social media use. Note that Nagata, as well as Grund & Luciana, analyzed the high-quality Adolescent Brain Cognitive Development (ABCD) data sets. ABCD is a long-term U.S. cohort study tracking more than 10,000 children beginning in 2015–2016, when participants were ages 9–10 (it is still ongoing).
It is important to note that, on their own, longitudinal studies do not measure causality. One may ask, however, if the data is compatible with assumptions about causality; and one can use results from longitudinal studies in more general arguments about causality (such as using the Bradford Hill criteria). To the best of our understanding of current literature, most longitudinal studies are consistent with, and provide support for, theories of harmful social media use among children and adolescents.
In the Appendix for our WHR chapter, we argue the results of the Burnell meta-analysis may plausibly translate to declines of internalizing disorders by roughly one-third in the intervention groups. Since the requirements for participation in these experiments were typically just one to two hours of daily social media use, these mental health improvements could apply to nearly the entire population of teens (in view of their reported usage of social media). We note that these effect sizes are similar to those found in estimation of childhood maltreatment effects on depression and anxiety (see the Appendix for details).
We found only one study suggesting an overall positive effect of broadband expansion in the United States from 2000 to 2008. But even that study’s authors attributed the gains primarily to improved local economic conditions — such as lower unemployment, less poverty, and greater business activity — rather than to internet or social media use itself.
Even these estimates may understate the true burden. Many teens are stuck in a collective action trap: once nearly everyone is on the platforms, young people cannot simply leave without losing social connection, thus the cost of leaving increases even though it would otherwise be beneficial. We also argue that the harms of social media appear to be especially severe and long lasting when they occur during puberty, a time when adolescents are particularly sensitive to social comparison and peer belonging.
Arizona State University rolled out a platform called Atomic that creates AI-generated modules from lectures taken from ASU faculty, cutting long videos down to very short clips and then generating text and sections based on those clips.
Faculty and scholars I spoke to whose lectures are included in Atomic are disturbed by their lectures being used in this way—as out-of-context, extremely short clips in some cases—and several said they felt blindsided or angered by the launch. Most say they weren’t notified by the school and found out through word of mouth. And the testing I and others did on Atomic showed academically weak and even inaccurate content. Not only did ASU allegedly not communicate to its academic community that their lectures would be spliced up and cannibalized by an AI platform, but the resulting modules are just bad.
Do you know anything else about ASU Atomic specifically, or how AI is being implemented at your own school? I would love to hear from you. Using a non-work device, you can message me securely on Signal at sam.404. Otherwise, send me an email at sam@404media.co.
AI in schools has been highly controversial, with experiments like the “AI-powered private school” Alpha School and AI agents that offer to live the life of a student for them, no learning required. In this case, the AI tool in question is created directly by a university, using the labor of its faculty—but without consulting that faculty.
“We are testing an early version of ASU Atomic to learn what works, and what doesn't, to further improve the learner experience before a full release,” the Atomic FAQ page says. “Once you start your subscription, you may generate unlimited, custom built learning modules tailored specifically to your learning goals and schedule.”
The FAQ notes that ASU alumni and those who “previously expressed interest in ASU's learning initiatives or participated in research that helped shape ASU Atomic” were invited to test the beta. But on Monday morning, I signed up for a free 12-day trial of the Atomic platform with my personal email address — no ASU affiliation required. I first learned about the platform after seeing ASU Professor of US Literature Chris Hanlon post about it on Bluesky.
“When I looked at it, I was really surprised to see my own face, and the faces of people I know, and others that I don't know” in module materials generated by Atomic, Hanlon said. It had clipped a one-minute snippet from a 12-minute video he’d done as part of a lecture mentioning the literary critic Cleanth Brooks, which the AI transcribed as “Client” Brooks. “What was in that video did not strike me as something anyone would understand without a lot more context,” Hanlon said. When he contacted his colleagues whose lecture videos were also in that module, they were all just as shocked and alarmed, he said. “I mean, it happens to all of us in certain ways all the time, but have your institution do it—to have the university you work for use your image and your lectures and your materials without your permission, to chop them up in a way that might not reflect the kind of teacher you really are... Let alone serve that to an actual student in the real world.”
The videos appear to be scraped from Canvas, ASU’s learning management system where lecture materials and class discussions are made available to students. Canvas is owned by Instructure, and is one of the most popular learning management systems in the country, used by many universities. “ASU Atomic currently draws from ASU Online's full library of course content across subjects including business, finance, technology, leadership, history, and more. If ASU teaches it, Atom—your AI learning partner—can build a hyper-personalized learning module around it,” the Atomic FAQ page says.
As of Monday afternoon, after I reached out to the ASU Atomic email address for comment, signups on Atomic were closed. I could still make new modules using my existing login, however.
In my own test, I went through a series of prompts with a chatbot that determined what I wanted my custom module to be. I told it I was interested in learning about ethics in artificial intelligence at a moderate-beginner level, with a goal of learning as fast as possible.
Atomic generated a seven-section learning module, with sections that repeated titles (“Ethics and Responsibility in AI” and “AI Ethics: From Theory to Practice”). The first clip in the first section is a two-minute video taken from a lecture by Euvin Naidoo, Thunderbird School of Management's Distinguished Professor of Practice for Accounting, Risk and Agility. In it, Naidoo talks about “x-riskers,” who he defines as “a community that believes that the progress and movement and acceleration in AI is something we should be cautious about.” Atomic’s AI transcribes this as “X-Riscus,” and transfers that error throughout the module, referring to “X-Riscus” over and over in the section and the quiz at the end.
The next section jumps directly into the middle of a lecture where a professor is talking about a study about AI in healthcare, with no context about why it’s showing this.
In a later section, film studies professor and Associate Director of ASU’s Lincoln Center for Applied Ethics, Sarah Florini, appears in a minute-long clip from a completely unrelated lecture where she briefly defines artificial intelligence and machine learning. But the content of what she’s saying is irrelevant to the module because it came from a completely unrelated class and is taken out of context.
“This was a video from one of the courses in our online Film and Media Studies Masters of Advanced Study. The class is FMS 598 Digital Media Studies. It is not a course about AI at all,” Florini told me. “It is an introduction to key concepts used to study digital media in the field of media studies.” She recorded it in 2020, before generative AI was widely used. “That slide and those remarks were just in there to get students to think of AI as a sub-category of machine learning before I talked about machine learning in depth. That is not at all how I would talk about AI today or in a class that focused more on machine learning and AI technologies,” she said. “It’s really a great example of how problematic it is to take snippets of people teaching and decontextualize them in this way.”
Florini told me she wasn’t aware of the existence of the Atomic platform until Friday. “I was not notified in any way. To the best of my knowledge no faculty were notified. And there was no option to opt in or out of this project,” she said.
Another ASU scholar I contacted whose lecture was included in the module Atomic generated for me (and who requested anonymity to speak about this topic) said they’d only just learned about the existence of Atomic from my email. They searched their inbox for mentions of it from the administration or anyone else, in case they missed an announcement about it, but found nothing. Their lecture snippet presented by Atomic was extremely short and attempted to unpack a very complex topic.
“I don't love the idea of my lectures being taken out of the context of my overall course, and of the readings for that module, and then just presented as saying something,” they told me. “It makes me feel like somebody that's less knowledgeable about me, they're going to be naive about these positions, and they're going to think either that an ‘expert’ said it so therefore it must be true... Or they're gonna think, that's obviously fucking stupid, this ‘expert’ must be dumb. But I could have been presenting a foil!” The clips are so short, it's impossible in some cases to discern context at all.
That lecturer told me the idea of their work being chopped up and used in this way was less a matter of concern for their ownership of the material, and more distressing that someone might come away from these modules with half-baked or wrong conclusions about the topics at hand. “All of the complexity of the topic is being flattened, as though it's really simple,” they said of the snippet Atomic made of their lecture. When they assign this topic to students, it comes with dozens of pages of peer reviewed academic papers, they said. Atomic provides none of that. The module Atomic produced in my test provided zero source links, zero outside readings for further study, no specific citations for where it was getting this information whatsoever, and no mention of who was even in the videos it presented, unless a Zoom name or other name card was visible in the videos.
“I would really like to know, how did this particular thing happen? How did this actually end up on the asu.edu website?” Hanlon said. “It is such a clunky thing. It is so far removed from what I think the typical educational experience at ASU is. Who decided this would represent us?”
ASU Atomic, the ASU president’s office, and media relations did not immediately respond to my requests for comment, but I’ll update if I hear back.
Howard County, Maryland, 1920. "Herald tour to Annapolis (Ellicott City railroad station)." One in a series of photos documenting the Washington Herald's "pathfinding tour" by car from one capital to another, back when inter-city automobile travel, still something of a novelty, could be a real adventure. 4x5 inch glass negative, National Photo Company.