Artificial intelligence has been pondered and feared for decades, and it is now advanced enough to drive unmanned vehicles, converse with people and make decisions once thought impossible for anything but a human.
This kind of technology may seem fun and exciting now, but what will be the outcome of further advancement in the future?
At its most rudimentary level, artificial intelligence, or A.I., is defined as something manmade that can make decisions without the instruction of a user. This can be seen in everyday gadgets, such as a phone that suggests spelling corrections or a car with voice-commanded GPS.
CHS computer science teacher Tom Clifford says he uses different forms of A.I. in every class, including the Scratch games and the various programs he has his AP Computer Science students create. With Scratch, students write code that makes objects react to different, uncontrollable obstacles. When a student programs an object to react in certain ways without being told to do so, that is A.I.; this is what makes up a Scratch computer game.
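The reactive behavior described above can be sketched in a few lines of Python (Scratch itself is a visual language, so this is only an illustrative analogy; the `react` function and the obstacle setup are hypothetical, not from Clifford's class):

```python
# A minimal sketch of a Scratch-style reactive object: it moves on its
# own and responds to obstacles without any further input from the user.

def react(position, obstacles, step=1):
    """Move one step forward, bouncing back if an obstacle is hit."""
    next_pos = position + step
    if next_pos in obstacles:   # obstacle detected: react automatically
        return position - step  # bounce away, no user input required
    return next_pos             # otherwise keep moving forward

pos = 0
obstacles = {3}                 # an uncontrollable obstacle at position 3
path = []
for _ in range(5):              # the "game loop" runs with no user commands
    pos = react(pos, obstacles)
    path.append(pos)

print(path)  # the object bounces off the obstacle on its own: [1, 2, 1, 2, 1]
```

The point of the sketch is that once the rule is written, the object handles every obstacle by itself, which is the sense in which even a simple student game counts as A.I.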
Many think that A.I. is growing at an alarming rate and that soon humans could be replaced by robots. Among these believers is English theoretical physicist Stephen Hawking, who is deeply concerned that robots are going to be able to outsmart humanity in the near future.
“The development of full artificial intelligence could spell the end of the human race,” Hawking said during a 2014 BBC interview.
Professor Hawking says that artificial intelligence has been useful so far, but huge consequences may fall upon mankind if it continues to advance.
This does not mean that the world as we know it will turn into the plot of “The Terminator,” but this science-fictional issue may present itself in some form far later in the future.
Along these same lines, David Steinberg, a local software developer, believes that A.I. is beneficial now and will pose little threat later on. When this form of intelligence takes on physical roles that involve everyday people, such as the self-driving car, Steinberg says it will bring immediate improvements to daily life.
“Self-driving cars will change what we think of personal transportation and, considering how often folks text and drive, can’t come soon enough,” Steinberg says of the technology.
Also on board with self-driving cars is CUSD webmaster Colin Matheson, as long as, he says, the cars have an override feature that lets someone take control whenever needed. As for future technology, Matheson speculates about what may become of modern facial recognition software.
“Even right now I am sure it is possible to build a quadcopter drone with weapons and facial recognition software and program it to kill particular people without the need for a human to give the OK,” Matheson says about the dangers of facial recognition in the military.
Though self-flying drones may appear to be a viable innovation for the military, the idea of a machine deciding on its own to kill a human is already widely considered immoral and senseless.
With further development of cars and weapons like these, along with smaller innovations such as Siri, the growing reliance on artificial intelligence could very well be for the best, but it could also go in the opposite direction.