Views expressed in opinion columns are the author’s own.

A few weeks ago, students in Britain received their A-level results. Since in-person testing was canceled, the scores, which are extremely important in the college admissions process, were predicted using a computer algorithm accounting for factors such as the historical performance of the school and teacher expectations.

And for a staggering 40 percent of students, anticipation ultimately gave way to disappointment and horror as scores came back much lower than their past academic performance would have suggested. With so much riding on these exams, some students' futures were left hanging in the balance, their college admission offers revoked. The system responsible for these scores has been criticized as classist and discriminatory against disadvantaged students.

How did the British government screw up so badly? Given the situation, it seems like nothing short of egregious negligence bordering on malice could've led to this disaster. The truth, however, is a bit more nuanced than that. The British government committed a fatal (but somewhat understandable) mistake: It placed too much faith in the capabilities of new technology it didn't fully understand, and it put too much stock in the idea that technology can solve everything.

Even if we aren’t aware of it, we’ve all internalized this mantra, simply by existing in the digital age. We’re constantly exposed to innovations in the classroom and workplace — and mass media isn’t shy when it comes to praising the achievements of our ever-evolving technological landscape. 

For now, the largest obstacle preventing automation from completely taking over our world is the black box problem. As the name suggests, a black box is something you cannot see inside. In the context of computing, this means the inner components and processes that lead to an output are not fully understood. The partially unknown nature of a black box, especially when artificial intelligence is involved, should instill instinctual fear and wariness. That's a good thing: Ideally, it forces us to exercise at least some caution.
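
To make that concept concrete, here is a minimal sketch in Python, assuming the widely used scikit-learn and NumPy libraries and entirely invented data; it does not represent the A-levels algorithm or any real grading system. Even this simple trained model is a black box in practice: You can ask it for a prediction, but its "reasoning" is scattered across hundreds of machine-grown decision trees that no human ever wrote down.

    # A hypothetical illustration of the black box problem, not the A-levels model.
    # All features and data below are invented purely for demonstration.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)

    # 1,000 synthetic "students" with two made-up features each
    # (say, prior coursework average and a mock-exam score).
    X = rng.normal(size=(1000, 2))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

    # Train an ensemble of 300 decision trees on the synthetic data.
    model = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)

    # The model happily returns an answer for a new student...
    new_student = np.array([[0.2, -1.3]])
    print("Predicted outcome:", model.predict(new_student)[0])

    # ...but that answer is the aggregate vote of 300 machine-grown trees.
    # Explaining why this particular student got this particular outcome means
    # tracing a path through every one of them; no single human-readable rule exists.
    print("Decision trees inside the model:", len(model.estimators_))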

However, as with the A-level results, this isn't always enough to deter individuals or institutions from overenthusiastically embracing new technologies. While technological innovations have vastly improved our quality of life, it's important to be cognizant of the potential dangers of blindly trusting automated processes.

Automation isn't used only to improve the efficiency and accuracy of clear-cut, objective processes; everything from patient diagnoses to criminal sentencing is slowly starting to rely on computerized aid. Proponents tend to argue that, by using a machine or algorithm, we can eliminate bias and reach a more objective conclusion. This notion is a myth, at least for processes like these. In particular, automated processes used for criminal sentencing have been shown to discriminate against Black defendants.

The impulse to digitize everything can drastically transform modes of governance across the world. Over the last few years, Britain has evolved into a digital welfare state, in which social services are increasingly automated. The most notable example is Universal Credit, a digital system meant to streamline the delivery of social security benefits. In reality, the system ironically made benefits more difficult to claim, due in part to a general lack of digital literacy among beneficiaries.

Our society is obsessed with the idea of a technological fix for most, if not all, of our problems. While advances in technology have benefited our world and continue to do so, it's crucial that we critically assess both the capabilities and limitations of novel technologies. If we fail to do so, a technological dystopia may not be so far off.

Kevin Hu is a rising junior physiology and neurobiology major. He can be reached at kevxhu@gmail.com.