Stop Big Tech From Exploiting Our Kids | Opinion

At the risk of understatement, the recent revelations by whistleblower Frances Haugen on how Facebook and Instagram treat children were shocking. In her testimony, she claimed that Facebook's own internal research showed that the social media platform "has a serious negative harm on a significant portion of teenagers and younger children." These services expose children to online predators, and have increased the incidence of eating disorders, body image issues and suicidal thoughts among teenagers.

The government has so far failed to hold Silicon Valley accountable for the dangers social media poses to children. But while existing penalties are proving insufficient, legal principles that already apply to other hazards can inspire a new legislative approach.

Fines have been federal agencies' go-to method of punishing Big Tech for profiting off child users. In early 2019 the Federal Trade Commission fined Musical.ly (now TikTok) $5.7 million for illegally collecting personal information from children. Later that year, the commission leveled the same charge against Google and YouTube with a $170 million penalty. To put those numbers in context, TikTok's market cap is estimated to exceed $140 billion, and Google's annual revenues exceed $180 billion. To Big Tech companies, fines are just a slap on the wrist.

Federal law, as it now stands, is of little help. Congress's primary venture into protecting kids online—the Children's Online Privacy Protection Act, passed in 1998—focuses on protecting the personal information of children under 13. It requires disclaimers, limits targeted advertising and pushes websites to get parental consent before kids can sign up.

The problem is that kids can't be expected to read disclaimers (and few teens or adults do in any case). Kids can still see advertising (even if it's not targeted to them individually). And parents don't get the option to consent in part—it's all or nothing for most sites, so if a school assigns watching an online video as homework, its students will have access to almost everything else as well.

It's time for Congress to step in and update children's online protections for the 21st century.

Photo caption: Former Facebook employee and whistleblower Frances Haugen testifies during a Senate Committee on Commerce, Science, and Transportation hearing entitled "Protecting Kids Online: Testimony from a Facebook Whistleblower" on Capitol Hill, October 5, 2021, in Washington, D.C. Jabin Botsford-Pool/Getty Images

It could start by requiring Big Tech companies like Facebook, TikTok and Google to put reasonable safeguards in place to protect children if they know (or should realize) that children use their services. Such a law needn't be complicated. It could draw on a century of court-developed doctrine requiring landowners to address an "attractive nuisance" on their property.

Generally, landowners need not fix every hazardous object or condition on their land—a warning (the ubiquitous "No Trespassing" sign) will do. But when a dangerous nuisance on the land would likely attract children, the calculation changes. Some kids can't read the signs. And even those who can won't likely appreciate the danger. That is why landowners with swimming pools, trampolines and even abandoned refrigerators on their land must usually do more than post a sign—they've got to build a fence or recycle the old fridge. In other words, if a landowner has an attractive nuisance on his property, the law requires him to take reasonable measures to eliminate the danger for children.

It's clear that some in Congress consider Big Tech platforms an attractive nuisance on the virtual terrain—one for which the companies should be held accountable. Senator Marsha Blackburn (R-Tenn.) has argued Facebook is already targeting young children with an "addictive" product. And Senator Richard Blumenthal (D-Conn.) believes that "Facebook exploited teens using powerful algorithms that amplified their insecurities."

The attractive nuisance doctrine can compel these companies to enforce their own policies intended to protect children. For example, Facebook claims that it limits certain ads, such as those promoting tobacco products, to protect children. Yet Facebook not only makes these products accessible on its platform, but also promotes their pages. If the attractive nuisance doctrine were applied to social media, a court could find that when Facebook flagrantly promotes products that clearly harm children without putting reasonable safeguards in place, it is liable.

Congress could also place a transparency requirement on social media platforms. Tech companies often operate in extreme opacity, so shedding some light on their practices could help promote child safety online. Congress could mandate that tech companies publish their risk assessments publicly so that parents can better assess the dangers social media might pose to their children. Alternatively, Congress could require tech companies to share their full research on children's mental health and well-being with independent researchers, civil society organizations and regulators to develop best practices. If a company failed to follow through with its transparency obligation, it would be liable for not placing reasonable safeguards to protect children.

Lawmakers from both parties have shown a new willingness to entrust the U.S. Department of Justice and our state attorneys general with new authority to take on Big Tech. Senator Amy Klobuchar (D-Minn.) and Senator Chuck Grassley (R-Iowa), for example, recently joined nine other senators to empower these governmental actors to rein in Big Tech companies that act anticompetitively. Let's empower law enforcement agencies to protect our children from everything we know—and don't know—about the services available online.

Joel Thayer is President of the Digital Progress Institute and an attorney based in Washington, D.C. The Digital Progress Institute is a D.C. non-profit seeking to bridge the policy divide between telecom and tech through bipartisan consensus.

The views expressed in this article are the writer's own.