People frequently dismiss their personal devices as "stupid." Siri comes up in conversation after someone discovers an article online, and everyone starts to share their opinions. An iPhone can hardly match the intelligence (or aptitude for conversation) of any of its users - and the same goes for Google Now on Android devices, or Cortana on Windows phones. They are "chatterbots": relatively simple programs that take text as input and output more text. Sure, there are some peripherals, like a speech-to-text and text-to-speech interface, but the basic concept is still the same.
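To make that "text in, text out" idea concrete, here is a minimal sketch of a pattern-matching chatterbot. The rules and responses are hypothetical, and real assistants layer speech recognition, intent parsing, and web services on top - but the core loop really is this simple.

```python
import re

# Hypothetical rules mapping input patterns to canned responses.
RULES = [
    (r"set an alarm for (.+)", "Alarm set for {0}."),
    (r"what time is it", "Sorry, I left my watch at the factory."),
    (r"hello|hi", "Hello! How can I help?"),
]

def respond(text):
    """Return the first matching canned response, or a fallback."""
    for pattern, template in RULES:
        match = re.search(pattern, text, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return "I didn't catch that."

print(respond("OK Google, set an alarm for 5:00 PM"))  # Alarm set for 5:00 PM.
```

Text goes in, a rule fires, text comes out - everything else is polish.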
"OK Google, set an alarm for 5:00 PM."
"Alarm set for four hours and 39 minutes."
This is what our developed society has come to regard as second nature. You can talk to a device in your palm, no biggie - it's just what it does, nothing more, nothing less. People who purchase smartphones rarely pay attention to the decades of technological development behind a device that only 20 years ago would have been regarded as some form of magic or sorcery. It doesn't think or feel - of course it doesn't, right?
Who's to say that humans are any different? You have no perception of consciousness outside of your own. What if every human around you is simply some kind of robot, acting on a pre-programmed set of instructions to take input and produce output? It's not possible to see inside somebody's head; it's nigh-on impossible to truly know someone, let alone verify their consciousness. But you can talk to a human just fine, and similarly get a response. There's nothing that can prove to you whether or not they're a machine.
This nature of reality - that you are a ghost, a specter, inside your own body, with no perspective on other ghosts beyond what your senses tell you - makes it exceedingly difficult to draw the line between man and machine. Artificial intelligence projects such as Google DeepMind's AlphaGo have managed to beat even the most experienced Go players - a feat previously reserved for humans. Only 20 years ago did a machine first play chess against a hardened grandmaster and win, and now it has been done time and time again.
Of course, these machines don't have a natural knack for winning chess games or playing Go. They are only good at crunching numbers, and doing it fast. DeepMind's game-playing agent takes in nothing but pixels for data - single points of color, stored as a six-digit hexadecimal value, compiled together into a 'screen' of input. Through iterative learning, this has allowed it to play games such as Space Invaders, Pac-Man, Galaga, and other famous arcade games better than most human players ever could. A system like this can play Go or chess against previous versions of itself and learn from its own mistakes - it can run through millions of games of checkers or backgammon in a matter of hours, slowly but surely working its way toward an iteration that outperforms a human in every single way. It doesn't know what it's going to see before you feed it input - just as a human wouldn't. It learns from its mistakes, just as a human would. Perhaps the only difference between it and us is that it operates on electrons, digital states flipping between zero and one in rapid succession, rather than the complex cocktail of chemicals that makes up our own brains.
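The "learn from its mistakes over millions of games" idea can be sketched in a few lines. This is a toy illustration, not DeepMind's actual algorithm: an agent plays a made-up game thousands of times, tracks how often each move wins, and gradually comes to prefer the moves that worked.

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

wins = {"a": 0, "b": 0}
plays = {"a": 0, "b": 0}

def play(move):
    # Hypothetical game: move "a" wins 30% of the time, "b" wins 70%.
    return random.random() < (0.3 if move == "a" else 0.7)

for game in range(10_000):
    # Explore randomly 10% of the time; otherwise exploit the best estimate.
    if random.random() < 0.1 or plays["a"] == 0 or plays["b"] == 0:
        move = random.choice(["a", "b"])
    else:
        move = max(wins, key=lambda m: wins[m] / plays[m])
    plays[move] += 1
    wins[move] += play(move)

best = max(wins, key=lambda m: wins[m] / plays[m])
print(best)  # after many games, the agent settles on move "b"
```

The agent is never told that "b" is the better move; it simply plays, keeps score, and lets the statistics steer it - trial and error at machine speed.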
It is even modeled on our own brains. It utilizes what's known as a "neural network" - a web of simple computational units, each taking signals in, transforming them, and passing them along to the next. This loosely models the human mind, in which neurons communicate with one another through a dense web of grey matter.
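A bare-bones version of that idea fits in a dozen lines. The weights below are hypothetical hand-picked numbers, not trained values - the point is only the shape of the computation: each unit takes a weighted sum of its inputs and squashes it through an activation function, and units stack into layers.

```python
import math

def sigmoid(x):
    # Squashes any number into the range (0, 1), like a neuron "firing".
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """One layer: each unit weighs all inputs, adds a bias, and fires."""
    return [sigmoid(sum(w * i for w, i in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# Two inputs -> two hidden units -> one output unit.
hidden = layer([0.5, -1.0],
               weights=[[0.8, -0.2], [0.4, 0.9]],
               biases=[0.1, -0.3])
output = layer(hidden, weights=[[1.5, -1.1]], biases=[0.2])
print(round(output[0], 3))  # a single number between 0 and 1
```

Training is a matter of nudging those weights, over millions of examples, until the outputs stop being wrong - which is where the iterative learning described above comes back in.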
Some theories write off existence itself as a simulation running on some massive mechanical mind, many times more powerful than the DeepMind project. Could it someday be possible that machines develop what we know as consciousness? Would it happen through some doing of our own, or would it be an artificial intelligence created by an artificial intelligence trying to make something smarter than itself? And what would that second-level artificial intelligence create, anyway? These are all questions that we, as a species rather than as individuals, will have to answer very soon.
Will these mechanical muscles and minds have the same feelings and beliefs we do? Will there be entire spiritual belief systems and religions created by machines, spreading just as they do in our own society? Do these machines deserve the same rights that we give ourselves? If we believe some higher power created us, will our machines believe that some higher power created them - or will they try to master us, as we have mastered nature?
Over the next few weeks I'll be discussing some of the implications of machine learning, its various styles, general-purpose artificial intelligence, and perhaps taking a few sidetracks here and there to keep things interesting for readers who don't want to be bogged down by all the technical jargon and rambling that I'm very good at generating. Down this path there will be discussions of politics, STEM subjects, ethics, and even the creative aspirations of electronic minds. If you're interested in any of these things - or perhaps none of them, that's fine too - I implore you to stick around and learn about the fast-approaching future with me.