Sophomore physiology and neurobiology major

On Saturday, I put on my Captain America T-shirt and finally went to see Avengers: Age of Ultron. I was absolutely flabbergasted by the pure awesomeness of the movie, but even more so, I was surprised by how strangely plausible its plot could be, assuming you ignore the superhumans and the Norse god of thunder.

While purely fictitious (hopefully this next part doesn’t ruin the movie for anyone), the film revolves around an artificial intelligence program named Ultron. The program goes rogue and tries to destroy humanity after deciding that the only way to save humans is to eradicate them entirely. Much like other popular “machines gone evil” films, such as The Matrix or the upcoming Terminator sequel, Avengers: Age of Ultron examines what may happen when technology becomes a tool that humans can no longer wield. The concept of artificial intelligence is intriguing in that an AI is essentially an intelligent software program capable of feeling, learning, thinking and reasoning, much as a human would. We already have some experience with advances toward granting computers the power of reason, such as facial-recognition technologies and certain computer programs whose ultimate purpose is to help solve everyday problems.

And while AI is certainly a noble scientific pursuit, some of our world’s intellectual heroes (albeit not superhuman), such as Stephen Hawking and Bill Gates, have called for increased attention to the risks of artificial intelligence over the last couple of months. When asked about AI development, Hawking, one of the world’s leading theoretical physicists, commented, “Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded.” By his estimation, “Computers will overtake humans with AI at some point within the next 100 years.”

AI or no AI, it is indisputable that the next couple of decades will usher in a nearly inexorable age of technological innovation. Already, we are approaching a point where our neuroscience and neuroimaging technologies may allow us to predict certain criminal behaviors through the analysis of brainwaves. And while this certainly raises ethical issues regarding cognitive liberty, there does seem to be an alarming trend: improved technology often comes at the cost of individual privacy. Technology dominates more and more of our lives. For the sake of convenient communication through social media and better security, we already give up some of our individual rights. But can we risk successfully developing artificial intelligence? What if we are unable to control these computers’ development, and they become even more intelligent via runaway feedback loops? Can our human growth match their software and mechanical growth?

Overall, AI and rampant technological growth around the world are becoming more pressing concerns every day, but hey, on the bright side, at least we don’t have to worry about a Chitauri invasion, am I right?