Zoom is a part of our daily lives, but is the company acting in our best interest?
Many classes this semester are taking place over Zoom. (Julia Nikhinson/The Diamondback)
Views expressed in opinion columns are the author’s own.
Now that social distancing forces us to rely on the web to work, play and keep ourselves sane, video conferencing applications have become one of the pillars of our new lives. In particular, Zoom has taken off, now boasting more than 20 times the number of users it had in December. Its new place in so many lives requires a certain amount of public trust, but Zoom has a worrying track record.
We’re already seeing new forms of exploits and harassment over the app, including the epidemic of Zoombombing that has prompted FBI attention. But Zoombombing feels like an inevitable consequence of Zoom’s neglectful security infrastructure. In the past, there have been major vulnerabilities risking users’ data, exacerbated by the app’s own “malware-like” practices, such as installing a hidden web server that made the app difficult to fully uninstall and lying about using end-to-end encryption. Foundationally, Zoom seems to have a weak commitment to security, and while it has been quick to fix these issues once users have raised concerns, it is starting from an unexpectedly insecure baseline.
Zoom also shares user information with advertisers and other third parties, a practice it did not disclose anywhere; Zoom initially claimed not to have known it was happening. While this is standard across many web-based services, it’s frustrating that to use this service for free — as University of Maryland students do through the school’s license — our data must be sold.
This lack of transparency and accountability is concerning. Why should it take outside inquiry to ensure privacy and code compliance? The burden of fortifying this app seems to fall on the user community more than on the developer.
Additionally, educational technology has a history of creeping into copware, adding unnecessary surveillance features so administrators can keep an eye on their audience. Zoom is no different. User data given to administrators includes how and where users are attending meetings, the contents of their recorded calls and rankings of people who’ve spent the most time in meetings.
Until last week, Zoom also included a feature that notified the host when attendees clicked away from the meeting window for more than 30 seconds. The inherent assumption underlying the use of all these analytics is that if you’re not pulling up the meeting window, you’re slacking — but why should you be criticized for taking notes in a separate application, multitasking, taking time off or simply deeming a meeting nonessential?
While controversy led to the feature finally being pulled, there’s still a culture of surveillance associated with this shift to video conferencing. Memes about setting your Zoom background to a loop of yourself are an unfortunate reflection of our rigid cultural image of productivity. In this time of incredible stress, it can feel like there’s a constant eye on us. Instead of being able to have a healthier workday punctuated by personal breaks, we may be even more confined than in the classroom or office as we perform our presence for the webcam. The assumption that time on-site or time in a meeting translates to commitment to your work has always been harmful, and it’s now embedded — literally or metaphorically — in these virtual applications.
In this isolating time, applications such as Zoom are essential tools in maintaining the social connections that drive our lives. But if software is going to integrate with our day-to-day, it needs to have higher standards for privacy and security out of the gate. Zoom is likely not the only dubious actor we have installed on our devices, but we need to learn from its highly visible mistakes what kinds of threats we carry with us.
Hadron Chaudhary, opinion editor, is a senior English and geology major. They can be reached at firstname.lastname@example.org.