Views expressed in opinion columns are the author’s own.

YouTube, one of the most popular platforms on the internet, recently implemented new policies targeting hate speech. Under the new guidelines, YouTube will remove videos that allege one group’s superiority or deny the existence of well-documented violent events, such as those advocating white supremacist ideologies or Holocaust denial.

While these new policies highlight the company’s commitment to creating a progressive space for creators and audiences alike, the emphasis on hate speech is somewhat misdirected. Videos that promote blatantly violent or extremist views, such as calls for the murder of a specific group of people, are repulsive but unlikely to appeal to the general public.

YouTube should dedicate more resources to censoring videos that promote conspiracy theories and inundate its viewers with false information on sensitive topics. These types of videos arguably inflict the most harm by leading susceptible audiences to develop potentially dangerous beliefs.

A few months ago, YouTube introduced a primitive fact-checking feature intended to curb the spread of misinformation. When users search for videos related to topics susceptible to misinformation, YouTube displays a warning notification: a text box that informs users the topic is surrounded by false information and briefly fact-checks common false claims related to the search. At the time of its introduction, the feature was only available to some users in India, but the company plans to expand the program throughout the year.

However, based on its initial design, it’s difficult to fathom what this tool could possibly achieve. By posting a preliminary warning instead of explicitly labeling individual videos, YouTube is still effectively leaving viewers to their own judgment.

Being aware of the existence of fake news is not the problem. The term has become ubiquitous in recent years, partly due to President Trump’s rhetoric, and in one study, 50 percent of Americans surveyed said made-up news and information is a “very big problem in the country today.” For context, only 46 percent said the same about climate change.

The real issue arises when viewers must navigate the seemingly endless stream of media at their disposal and differentiate between what’s true and what’s false. In recent years, it’s become apparent that the general public is still incapable of consistently identifying fake news.

In the final three months of the 2016 presidential campaign, fake stories such as the “Pizzagate” conspiracy theory generated more engagement on Facebook than real news stories. According to an Ipsos poll for BuzzFeed News, American adults believed fake news headlines about 75 percent of the time.

While these incidents may have been just a side effect of the 2016 presidential election, a campaign in which personal attacks often took center stage, it’s difficult to deny that fake news continues to fool many people. Digital technology has advanced to the point that a cursory analysis is no longer sufficient to distinguish the real from the fake.

For example, at the end of May, a video of House Speaker Nancy Pelosi appearing to drunkenly slur her words circulated around the internet. The video was found to be doctored: slowed down and pitched up to emulate a drunken voice. The altered video’s disturbing plausibility shows just how convincing fake news has become, and many viewers initially fell for the ruse.

Other companies have gone further than YouTube’s warning labels to address misinformation. Shortly after the 2016 election, Facebook launched a partnership with reputable organizations such as the Associated Press, FactCheck.org and PolitiFact to counter the spread of fake news. The program has had some success, and criticism has largely centered on disputes over Facebook’s community standards and concerns that it could be a publicity stunt. According to PolitiFact, the program’s prospects remain promising as long as Facebook continues to cooperate.

In an era when information is overwhelmingly accessible, it’s nearly impossible for internet users to independently fact-check everything. As a result, it’s the responsibility of media corporations to lend a hand in regulating and verifying their content. A video proposing a seemingly plausible link between vaccines and autism, for instance, can persuade parents to forgo vaccination, a decision that can cost a child’s life. YouTube needs to adopt a more rigorous fact-checking approach to ensure the quality of its content and minimize the potential harm that can result from misinformation.

Kevin Hu is a rising sophomore physiology and neurobiology major. He can be reached at kevxhu@gmail.com.