Views expressed in opinion columns are the author’s own.

Shortly after taking the helm at Twitter, Elon Musk began massive layoffs, cutting employees responsible for U.S. elections and content moderation. His timing, just days before the midterm elections, could not have been worse, casting immense doubt on the platform's ability to suppress content created by malicious actors.

During last week’s midterm elections, Twitter faced a proliferation of election-related misinformation and delivered a woefully inadequate response.

Robust content moderation is critical to impeding election interference and manipulation, both foreign and domestic. And maintaining that moderation depends on mitigating the negative effects of Section 230 of the Communications Decency Act, the law that shields technology companies from liability for illegal user-generated content and, in doing so, permits such negligence.

While desires for immediate reform exist on both sides of the aisle, the federal government should start by mandating that companies in charge of social media platforms share certain data with researchers, or face losing their legal protections, to reduce room for carelessness.

Repealing Section 230 outright could create a host of problems, and many reasonable critiques of previously proposed reforms exist. Instead, more controlled changes to federal regulations would ensure that companies cannot risk disastrous outcomes in a hastily constructed attempt to maximize profits. Exploring new legislation is also necessary as the Supreme Court considers challenges that could upend Section 230 as we know it.

A content moderation team with limited capacity is less able to locate and label content that falsifies or misrepresents the stances or character of candidates, an area where Twitter is already struggling. The 15 people who still have access to the platform’s moderation tools cannot reasonably be expected to review the hundreds of millions of tweets sent every day.

Intelligence agencies and technology companies have detected Russian efforts, both in 2020 and as recently as this month, that closely resemble the information warfare waged through social media platforms, including Twitter, during the 2016 presidential election.

And election deniers increasingly use social media platforms to spread conspiracy theories, sow doubt about legitimate electoral processes and push narratives of stolen elections.

Rogue campaigns have also been known to spread false information about voting through channels that are harder to track, such as robocalls and text messages. Musk continues to run the serious risk of letting Twitter become another compromised sphere of influence.

Twitter’s imperative to ensure rigorous content moderation extends beyond electoral integrity; moderation is also how platforms prevent and remove hate speech and other illegal or disturbing content.

Following Election Day, Twitter also released a paid verification service, only to pause it a day later because it made it harder for users to discern the authenticity of the sources they rely on. If Twitter resumes the service without earnestly re-evaluating its approach, the same problem will plague political information, with candidates, elected officials and journalists impersonated at critical moments in elections.

While Musk maintains that widely available verification brings “power to the people,” it will hurt average users more than influential ones.

Verification offers few tangible benefits to the bearer. It does, however, help unverified users navigate the flood of information they see on Twitter by assuring them that accounts truly represent the entities they claim to.

Taking away such certainty will cause profound damage to the electoral process. For example, a user posing as a news organization could post false election calls, which could dampen turnout later in the day if voters believe the outcome of a race is set.

What Twitter’s leadership must grapple with is that open communication is not the service social media platforms provide — curated communication is.

Creating a social media platform where users can exchange ideas and express their sentiments on those ideas is an elementary task. Making sure that platform is authentic and cannot be abused constitutes the true challenge.

Technology companies and policymakers share responsibility for facilitating free speech that is truly free: speech that protects the uninhibited exercise of the right to vote and keeps all users safe. Data transparency is the first step toward enabling legislators to tackle the unintended consequences of Section 230.

Dhruvak Mirani is a freshman computer science and government and politics major. Mirani can be reached at .