SAN FRANCISCO — The U.S. election and its dramatic aftermath have elevated the debate about how to deal with online mis- and disinformation, lies and extremism.

We saw social media companies permanently kick the president, some of his allies and conspiracy groups off their platforms for election misinformation, raising eyebrows around the world and leading to accusations that those removed were being robbed of their First Amendment rights. At the same time, people used social media to communicate plans to commit violence at the Capitol, drawing complaints that platforms don't do enough to curb extremism.

This has intensified calls by politicians and others to regulate online speech by imposing rules on Facebook, Twitter and other social media platforms. Lawmakers are backing various wrongheaded proposals to do so.

One would change the law to hold tech companies legally liable for the speech they host, by amending Section 230 of the Communications Decency Act — the thought being that platforms will remove harmful speech to avoid multiple lawsuits. Another would give state legislatures power to regulate internet speech.

Last but not least, now-former President Donald Trump issued an executive order in May that would essentially insert the federal government into private internet speech, letting government agencies adjudicate platforms' decisions to remove a post or kick someone off.

The Biden administration can rescind the order — but so far it has not.

It is important to note that the law as it currently stands gives platforms both the right to curate their content as they see fit (thanks to the First Amendment) and protection from liability for the choices they make about what to remove or leave up (thanks to Section 230). Without these protections, it is unlikely that we would have seen the growth of these platforms in the first place, and we are unlikely to see further flourishing of competition in the space.

The purported remedies under consideration by lawmakers are highly and dangerously flawed and flout First Amendment speech protections. They would foster state censorship antithetical to democracy.

Under these proposals, big tech companies would have even more control over online speech than they already do, because only they can afford the legal fights that would scare off new entrants to the market. What's more, the proposals would push legal, protected speech offline and silence the voices of marginalized and less powerful people who rely on the internet to speak out: a diverse set of people that includes activists, journalists, LGBTQ individuals and many more.

Instead, users should have more power to control what they see on their feeds. They should be able to move freely with their data from one platform to another when they don’t like what they see.

There should be more competition and more choice of platforms so users can seek out the one that works for them. Mergers and acquisitions among social media companies should be more closely scrutinized, and our antitrust laws better enforced to foster competition. Instead of having one giant platform gobbling up its competitors, as Facebook did with Instagram and WhatsApp, we need multiple, diverse platforms for people to choose from.

Facebook, Twitter and Google have far too much control over public discourse and do a mostly horrendous job of moderating speech on their platforms. Their decisions to take down posts or close accounts are inconsistent, vague and opaque.

That needs to change. Platforms should adopt standards like the Santa Clara Principles on Transparency and Accountability in Content Moderation (developed by civil society and endorsed by numerous companies), which frame content moderation practices around human rights considerations, including the right to appeal takedown decisions and to have humans, not algorithms, review removals.

Tech companies have a First Amendment right to edit and curate the content on their platforms, free of government interference. The government cannot force sites to display or promote speech they don’t want to display or remove speech they don’t want to remove.

We support this right. The government shouldn’t have the power to dictate what people can or cannot say online.

But we will continue to see misguided calls for the government to step in and regulate online speech until platforms embrace fairness, consistency and transparency in their editing practices, give users more power over their social media accounts and adopt interoperability so users won't lose their data if they decide to switch platforms, and until policymakers find ways to foster competition.

Jillian C. York is director of International Freedom of Expression at the Electronic Frontier Foundation and author of “Silicon Values: The Future of Free Speech Under Surveillance Capitalism.” Karen Gullo is an analyst and senior media relations specialist at EFF. They wrote this for InsideSources.com.
