Artificial Intelligence: the Upward Downfall | The Odyssey Online

Artificial Intelligence: the Upward Downfall

We are walking a dangerous road with AI


“Alexa, I want the truth.”

The AI era has arrived.

For those who don’t know who Alexa is, she is the voice of Amazon’s Echo, a hands-free speaker you control with your voice, and she is getting smarter every day. Some will tell you this marks the onset of an AI [artificial intelligence]-led revolution. The demand for engineers and coders has reached an all-time high as companies look for new ways to innovate in the realm of artificial intelligence.

In 1965, Gordon Moore introduced a hypothesis that would set the tempo for our modern digital revolution. From careful observation of a developing trend, Moore theorized that computational power would increase exponentially while its relative cost simultaneously decreased. In other words, Moore’s law claims that the overall processing power of computers doubles every two years. His observation became the golden rule for the tech industry and a starting point for innovation within the applied sciences. For fifty years the industry followed his rule; in the past five years or so, however, the tables have turned, and we are starting to witness a change in the landscape of the digital environment.
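Moore’s doubling rule is easy to express as a formula. The sketch below projects transistor counts under a strict two-year doubling, starting from the Intel 4004’s roughly 2,300 transistors in 1971, a commonly cited historical figure used here purely for illustration:

```python
# Toy illustration of Moore's law: capability doubling every two years.
# Starting point (Intel 4004, ~2,300 transistors in 1971) is a commonly
# cited historical data point, used here only for illustration.

def projected_transistors(start_count, start_year, year, doubling_period=2):
    """Project a transistor count assuming a doubling every `doubling_period` years."""
    doublings = (year - start_year) / doubling_period
    return start_count * 2 ** doublings

# Two years out, the count has doubled once:
print(projected_transistors(2300, 1971, 1973))  # 4600.0
# Nearly five decades of doublings compound into the billions:
print(round(projected_transistors(2300, 1971, 2017)))
```

The striking part is not the individual doubling but the compounding: twenty-three doublings turn a few thousand transistors into billions, which is why the trend set the tempo for the whole industry.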

Technological giants such as Amazon, Facebook, Google, and Microsoft are the forerunners of a cyber revolution marked by rapid technological growth. For instance, just last week, Google opened up about its Tensor Processing Unit (or TPU), the server chip it uses in-house to perform AI computing workloads more efficiently, advancing machine-learning capability by roughly three chip generations. That is equivalent to fast-forwarding Moore’s law, and with it the technology, about seven years into the future.

Further, we find ourselves in the age of self-driving cars. Issues during test runs of these vehicles have drawn attention to the need for self-correction mechanisms. In 2015, Google’s driverless car ran into a surprising safety predicament: humans. Human error on the road is tough to deal with when the car has been programmed to be a stickler for the rules. For instance, in 2009, the car was unable to make it through a four-way stop because its sensors kept waiting for the other cars to stop completely before letting it proceed through the intersection. The human drivers, however, typically failed to come to a complete stop, immobilizing the Google vehicle. Some have called the car ‘too safe’, a case accentuating the incompatibility between humans and machines.

Thus, these situations exposed an inherent need for self-correction mechanisms. Driverless cars need to rely on feedback loops, adapting to road conditions and learning the unique behavior of the drivers around them. A video released by MIT earlier this year shows how self-correction technology is coming to life. An industrial robot picks up either spools of wire or cans of spray paint and drops each item into the corresponding bucket. When the robot is about to make the wrong distinction, it pauses and self-corrects, dropping the item into the appropriate container. The corrections are triggered by an observer wearing an EEG cap who merely notices that something is off.

The EEG cap measures forty-eight different signals from the human brain, many of which are noisy and difficult to interpret. However, one signal, known as the ‘error potential’, is relatively easy to detect: it produces a strong reaction in the brain when the user notices that something is wrong. Incorporating the error potential is quite different from today’s usual paradigm, in which the human must program the machine in the machine’s own language. What we see here are programmers getting robots to adhere to human signals rather than having humans conform to the robot’s language. A robot performs a basic task under human supervision, and the supervisor can override the task without writing corrective code or stopping the machine through physical means.
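That human-in-the-loop control flow can be sketched in a few lines. Everything here, the threshold, the signal value, and the bucket names, is invented for illustration; a real system would run a trained classifier over live EEG data rather than compare a single number:

```python
# A minimal sketch of error-potential-driven correction, with invented
# values. The "observer_errp" number stands in for a real EEG detector's
# confidence that the observer's brain just flagged a mistake.

ERRP_THRESHOLD = 0.7  # hypothetical detector confidence cutoff

def sort_item(item, observer_errp):
    """Pick a bucket, then flip the choice if the observer's brain signals an error."""
    choice = "wire_bucket" if item == "spool" else "paint_bucket"
    if observer_errp > ERRP_THRESHOLD:  # observer noticed something is off
        # Self-correct: switch to the other bucket without any reprogramming.
        choice = "paint_bucket" if choice == "wire_bucket" else "wire_bucket"
    return choice

print(sort_item("spool", observer_errp=0.2))  # no error signal: wire_bucket
print(sort_item("spool", observer_errp=0.9))  # error detected: paint_bucket
```

The design point is exactly what the MIT demo illustrates: the human never writes a correction or touches the robot; noticing the mistake is itself the override.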

An exciting period it may be, but hidden behind all the technological buzz is a hefty elephant in the room that is publicly neglected. When machines respond to us through a feedback loop of self-correction, rather than us responding to them, we give up some of our control over them. This is what feeds the fears of Elon Musk, Stephen Hawking, and Bill Gates, who have all voiced concerns about artificial intelligence technology.

In 2015, the three signed an open letter calling for research on the societal impacts of AI. While society can reap great rewards from AI, it must be careful to avoid potential downfalls, such as creating something unmanageable. The letter, titled Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter, details research priorities for AI and documents its inherent vulnerabilities.

By 2014, physicist Stephen Hawking and magnate Elon Musk had publicly voiced the opinion that superhuman artificial intelligence could provide innumerable benefits, but could also bring about the demise of the human race if handled irresponsibly. Both Hawking and Musk sit on the scientific advisory board of the Future of Life Institute, an organization committed to mitigating existential threats facing humanity. The institute prepared a letter addressed to the greater AI research community, circulated it among scientists in early 2015, and soon after released it to the public.

The purpose of the letter was to identify the positive and negative impacts of AI research and development. The challenges that arise are separated into verification [“Did I build the system right?”], validation [“Did I build the right system?”], security, and control [“I built the wrong system; can I fix it?”]. The concerns raised by the advancement of AI technology can also be classified as short-term or long-term.

Short-term concerns center on autonomous vehicles, including civilian drones and driverless cars. For instance, during an emergency, a self-driving car may have to decide between a small probability of a serious accident and a large probability of a minor one. Other concerns relate to lethal autonomous weapons, and extend to privacy as AI becomes increasingly able to interpret large bodies of surveillance data. Another hot topic of discussion is how best to manage the economic impact of jobs displaced by AI.

Long-term concerns echo the words of Microsoft’s research director, Eric Horvitz, and may be referred to as the ‘control problem’: the possibility of the emergence of a hazardous superintelligence and the occurrence of an ‘intelligence explosion’.

I know what you’re thinking: that this sounds like something out of a Star Wars movie. That it’s all just hypotheticals, with as much chance of happening as aliens taking over or the Earth crashing into the sun. Sure, it may just be hypothetical, but let’s break it down a little further to get a better understanding of what all this sci-fi-sounding nonsense really means.

It all boils down to what is known as the technological singularity. The technological singularity is the hypothesis that the creation of artificial superintelligence will suddenly set off runaway technological growth, resulting in unfathomable changes to human civilization. According to this hypothesis (and Wikipedia), an upgradable intelligent agent, such as a computer running software based on artificial general intelligence, would enter a runaway reaction of self-improvement cycles, with each new and more intelligent generation appearing more and more rapidly. The outcome is an intelligence explosion: a powerful superintelligence that surpasses all human intelligence.

This manifests primarily in two ways. The first is superintelligence, defined as a hypothetical agent whose intellect far surpasses that of the brightest and most gifted human minds in practically every field, from scientific creativity to general wisdom and social skills. The second is an intelligence explosion, which is a function of superintelligence. An intelligence explosion is the possible outcome of humanity building artificial general intelligence (AGI) capable of recursive self-improvement, leading to the rapid emergence of artificial superintelligence (ASI), the limits of which are unknown. Eventually, the recursion of self-improvement cycles would spawn a machine intelligence better at developing its own internal functions than any human programmer; it could then rewrite or modify itself by changing its own coding instructions.
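The mechanism behind the intelligence explosion can be captured in a toy model. The starting level and the improvement factor below are invented numbers; the only point is that when each generation makes the next one better by a multiplicative factor, capability grows exponentially rather than linearly:

```python
# A toy model of recursive self-improvement. The numbers are invented;
# the takeaway is the shape of the curve: each generation's gain is
# multiplied into every generation after it.

def intelligence_explosion(start=1.0, improvement_factor=1.5, generations=10):
    """Each generation rebuilds itself `improvement_factor` times more capable."""
    level = start
    history = [level]
    for _ in range(generations):
        level *= improvement_factor  # the smarter system builds the next one
        history.append(level)
    return history

levels = intelligence_explosion()
print(f"after 10 generations: {levels[-1]:.1f}x the starting level")
```

Even a modest 50 percent gain per generation compounds to nearly sixty times the starting capability after ten cycles, which is why the scenario is described as an explosion rather than steady progress.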

Algorithms that improve themselves in this way fall under the umbrella of ‘machine learning’, which we discussed earlier with Google’s TPU server chip. Since the TPU represents a fast-forward in technology of roughly three generations, it also qualifies as rapid technological advancement. It is this kind of technology that meets the criteria for an intelligence explosion, paving the way for the inadvertent emergence of artificial superintelligence. We are walking a dangerous road with AI, and it is important that we understand the risks involved.

So go ahead: if you happen to have an Amazon Echo, tell Alexa you want the truth and see what she has to say. Keep in mind, she’s smarter than she looks, and she’s getting smarter every day.

“Alexa, I want the truth.”

“You can’t handle the truth.”
