Imagine this: a person you could talk to about anything, at any time, someone who understands exactly how you feel. This person would also remember everything you ever told him or her and would remind you whenever you needed to do something.
That's the dream of many engineers and scientists today: to create a fully functional Artificial Intelligence, or AI for short. People have been working toward this dream, in many different forms, since the invention of the computer. If you have an iPhone, you may occasionally use the built-in AI named Siri; if you have an Android phone, Google Now is your AI. Heck, even Windows Phones have one, Cortana, though it uses Bing.
I personally think these personal AIs are really cool, and I use Google Now daily. They're extremely helpful for answering questions about things I don't know, or just setting a reminder for a couple of days in the future. AI is a big part of technology, and it seems to be shaping up as the trend of the year, and maybe the next couple of years too.
The dream of a fully sentient AI, though, gives me pause. If we created something like that, could we actually control it? Scientists are unsure, and so am I. There are countless ways an AI could go rogue, like in the Terminator movies.
I believe we should take a firm stance on AI: continue its development, but keep ourselves grounded and not pretend that we can control everything we create.