Views expressed in opinion columns are the author’s own.

In the latest of a long line of attacks against Twitter and efforts to dispute claims of election fraud, President Donald Trump called for the immediate termination of Section 230. This provision of the Communications Decency Act protects platforms, such as social media companies, from being held liable for content published on their sites by third parties. Though long a relatively obscure provision, it has recently become the target of partisan attacks. 

Attempts to repeal or reform this provision from both the right and left are rooted in the concern that social media platforms have gained too much power in shaping public discourse. To address this, a growing number of politicians want to remove protections under Section 230 or adjust them to limit the broad liability shields currently protecting companies.

But reforms like these would be a misguided effort to regulate how we communicate and express ourselves on the internet. And while Section 230 is by no means perfect, repealing it would be a short-sighted and potentially disastrous attempt at solving a problem that ultimately should be out of the government’s control. 

Democratic and Republican legislators have their own distinct reasons for wanting to either repeal or reform Section 230. Among Democrats, the movement to change the law has focused on addressing the threat of misinformation and hateful content spread on social media. President-elect Joe Biden called for it to be revoked while on the campaign trail, accusing companies like Facebook of being responsible for the rapid proliferation of misinformation. There is some truth to this, but only to an extent: Removing the protections of Section 230 would not be an easy fix because there is no universal standard for what constitutes misinformation.

Multiple Republican senators, in addition to the president, have argued that social media platforms unfairly censor conservatives and that these platforms must have viewpoint-neutral policies if they expect legal protection. Their position is that platforms that moderate user-posted content inherently become publishers of all of it. However, this is not how social media is traditionally understood; we tend to see these sites as mostly open platforms with certain restrictions on what users can post. 

Those who want to see Section 230 removed as a way of creating more open platforms are faced with a “moderator’s dilemma.” With Section 230, platforms can establish appropriate rules without worrying about litigation arising as a result of a user’s posts. But without it, companies may be forced to become more vigilant in regulating content, and they may implement even more restrictions on what users can do. Concerns about overly restricted internet forums were the reason Section 230 was implemented in the first place. 

Alternatively, without Section 230 protecting them, social media platforms could adopt less restrictive policies on what third parties can post, in an effort to avoid being treated as publishers and thereby shield themselves from litigation and responsibility. But this means websites such as Twitter and Instagram may no longer restrict content like spam or harassing posts.

A more sensible approach would recognize that the protections Section 230 provides are incredibly important for tech companies and platform users alike. Social media companies need to be able to remove content they deem unfit for their platforms without being responsible for every single post. Changes to these platforms' terms of service can also be accomplished internally, with pressure from employees and advertisers to address content moderation issues. This seems like the strategy we should be taking, rather than giving the government more control over how we use the internet. 

Evan Crum is a junior government and politics and psychology major. He can be reached at ecrum42@umd.edu.