If the New York Times or the Las Vegas Review-Journal or Scientific American publishes a false statement that hurts someone's reputation, that person can sue the publication. If the same defamation appears on Facebook or Twitter, however, the injured party cannot sue the platform. The reason: Section 230 of the federal Communications Decency Act. Signed into law in 1996, it states that online platforms—a category that includes enormously rich and powerful tech companies such as Facebook and Google, as well as smaller and less influential blog networks, forums and social media start-ups—are not considered “publishers.” You can sue the person who created the video or post or tweet but not the company that hosted it.

The law was designed to protect Internet companies, many in their infancy at the time, from legal actions that could have held back their ability to innovate and grow. But today that immunity from consequences has also allowed hate speech, harassment and misinformation to flourish. In a belated effort to deal with those problems, the biggest platforms now attempt to flag or ban user-generated content they deem objectionable. This often infuriates those whose posts or tweets have been singled out, who complain that their freedom of speech is being suppressed.

Last October the U.S. Senate Committee on Commerce, Science and Transportation held a hearing about how to modify the law. But heavy-handed changes often fail to consider all the consequences—and that can lead to real danger. Prime examples are 2018's Fight Online Sex Trafficking and Stop Enabling Sex Traffickers Acts (FOSTA-SESTA). These laws removed Section 230's protections for content that advertises prostitution, in an effort to stop victims of sex trafficking from being bought and sold online. The idea was to use potential legal liability to force platforms to remove content that encouraged such crimes. In practice, sites that lacked the resources to patrol users' activity ended up banning legitimate pages where illegal content had appeared in the past, deleting large swaths of material or shutting down entirely. The purge kicked consensual sex workers out of online spaces they had used to find clients and assess the risk of harm before agreeing to in-person meetings. Without the ability to screen clients online, sex work becomes far more dangerous; one 2017 paper, updated in 2019, suggests that in cities where the online classified ad service Craigslist allowed erotic listings, the overall female homicide rate dropped by 10 to 17 percent. Although other researchers have contested the link between online advertising and greater safety, consensual sex workers have reported negative effects as a result of FOSTA-SESTA.

Joe Biden and Donald Trump have both called for outright repeal of Section 230. Members of Congress, meanwhile, are proposing less radical changes, offering bills such as the Platform Accountability and Consumer Transparency (PACT) Act, which would require social media companies to disclose their moderation practices to show they are not arbitrary and to promptly take down content that a court deems illegal. The stricter takedown standard would favor wealthy companies such as Facebook, which can afford to employ armies of moderators and lawyers, and disfavor start-ups—just the problem Section 230 was meant to prevent. In addition, just as they did in response to FOSTA-SESTA, smaller platforms would most likely resort to overly broad censorship of users to avoid legal challenges.

As digital-rights group the Electronic Frontier Foundation (EFF) points out, hobbling Section 230 could have a chilling effect on free speech online and make it much more difficult for new competitors to challenge the dominance of big tech. The EFF is not the only voice picking holes in legislation like the PACT Act: academics and technology advocacy groups have offered measured critiques of the bill and proposed their own targeted changes to Section 230. One suggestion is to ensure the bill applies only to platforms that host users directly—not to the companies providing background support for functions such as Internet access and payment processing—so that the larger infrastructure of the Internet is protected from legal liability. Another is to improve users' ability to flag problematic content by working with legal authorities to develop a standardized reporting process that any platform could apply.

Input from experts like these—not just from billionaire CEOs such as Facebook's Mark Zuckerberg and Twitter's Jack Dorsey, the usual suspects when hearings are convened on Capitol Hill—is crucial to crafting nuanced legislation that gives online platforms incentives to protect users from harassment and to suppress malicious content without unduly compromising free speech. If that happens, we might get Internet regulation right.