
The Possibility of Being ‘Plugged In’

The hit 1999 film The Matrix captured the imagination of science fiction viewers by presenting a world in which most of humanity has been enslaved by a race of machines that live off the body heat and electrochemical energy of human bodies. The majority of the human race have their minds imprisoned within an artificial reality known as the Matrix, a computer-generated dream world originally designed as a perfect place where no one suffered. In his struggle to free his mind from the system, Neo, the lead protagonist, learns that most people are not ready to be unplugged; many are so dependent on the system that they will fight to protect it.

“What is real? How do you define ‘real’? If you’re talking about what you can feel, what you can smell, what you can taste and see, then ‘real’ is simply electrical signals interpreted by your brain.” ~ The Matrix, 1999

Living in an era dominated by technology, we can easily draw parallels between the themes of the film and society today. Research in artificial intelligence (AI) is at the forefront of many discussions within the scientific and ethics communities. With respected figures such as Dr. Stephen Hawking expressing concern about the possible dangers of developing AI, is the possibility of being “plugged in” closer than we believe?

AI is defined as the capability of a machine to imitate intelligent behaviour: computing machinery capable of reproducing human cognitive states. While science fiction often portrays AI as robots with human-like characteristics, AI actually encompasses everything from Google’s search algorithms to autonomous weapons. The long-term goal of many researchers is to advance from the narrow AI available today to artificial general intelligence (AGI), which could outperform humans at nearly every cognitive task. While such superintelligence might help us eradicate war, disease and poverty, experts caution that the technology should not be deployed until we learn to align an AI’s goals with our own.

(Image Source: Forbes)

How can AI be dangerous? Most researchers agree that AGI systems are unlikely to exhibit human emotions, so they do not expect AGI to become intentionally benevolent or malevolent; still, there are scenarios in which AGI could become a risk. In the first, fully autonomous weapons programmed to cause mass casualties could trigger an AI war in which simply flipping the off switch is no longer an option. In the second, an AI programmed to do something beneficial develops a destructive method for achieving its goal. Aligning an AI’s goals with our own is difficult because the system may do not what you wanted, but what you literally asked for. For example, if you asked an AI-driven car to take you to the hospital as fast as possible, it might get you there while running red lights and committing every traffic and safety violation along the way, as the toy sketch below illustrates.
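To make the misspecified-objective idea concrete, here is a minimal, purely illustrative Python sketch. The routes, numbers and penalty are invented for this example and stand in for what would, in a real system, be a far more complex planning problem.

# Toy illustration of objective misspecification: an agent told only to
# "minimize travel time" happily picks an unsafe plan, because safety
# was never part of its objective. All routes and numbers are invented.

routes = [
    {"name": "run red lights", "minutes": 8,  "safety_violations": 12},
    {"name": "main roads",     "minutes": 14, "safety_violations": 0},
    {"name": "side streets",   "minutes": 19, "safety_violations": 1},
]

def fastest(route):
    # The literal request: "as fast as possible" -- nothing else counts.
    return route["minutes"]

def fastest_and_safe(route, penalty=60):
    # A (still crude) aligned objective: each violation costs an hour.
    return route["minutes"] + penalty * route["safety_violations"]

print(min(routes, key=fastest)["name"])           # -> "run red lights"
print(min(routes, key=fastest_and_safe)["name"])  # -> "main roads"

The point of the toy example is that the “fastest” objective is satisfied perfectly while violating everything we left unsaid; only once safety enters the objective does the agent’s choice match our actual intent.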

A key goal of AI safety research is to ensure that no system ever places humanity in danger in pursuit of a goal. While we are thankfully still far from the technology depicted in the dystopian society of The Matrix, it is worth reflecting on the complexity and computational abilities of our own brains. Delving into the brain’s circuitry may offer clues as to how learning and higher-order cognition are encoded, and how they might be reproduced with AI. This has the potential to open new doors in how we understand the brain and how complex behaviours are expressed. Hopefully, the integration of AI and neuroscience will also lead to advances in treatments for brain-related disorders.

Author

  • Abeir Wasim

    Hi! I'm a 4th year student at UofT pursuing a specialist in Neuroscience. Outside of school, I love videography and playing with my cat : )
