The short answer would be yes. But let me start out by asking you a question. Have you ever seen Terminator? It doesn’t have to be all of them, just one of the movies. The thought might have struck you: “Will robots conquer the world?” While AI and today's technology can be controversial topics, we need to do more than just talk about them… We need facts, easy-to-understand facts that can be explained by and to every “non-tech” person.
Should we be scared of AI development? Well… This is where organizations like the non-profit OpenAI come in: an organization founded to ensure the safe development of artificial intelligence, which people like Elon Musk and Jessica Livingston have helped sponsor. Jessica Livingston's company Y Combinator is also sponsoring OpenAI.
"One step towards building safe AI systems is to remove the need for humans to write goal functions, since using a simple proxy for a complex goal, or getting the complex goal a bit wrong, can lead to undesirable and even dangerous behavior."
-from https://blog.openai.com, Learning from Human Preferences
Elon Musk has said himself that we need to be careful with the development of AI systems, which are advancing rapidly. Nevertheless, his company Neuralink is trying to develop “brain-machine” interfaces that would allow humans to establish a direct connection with computers. But hey, you have to trust him; the man is, after all, the current savior of the world. A solid example of AI development can easily be found at Tesla. “It’s just a self-driving car, what’s so special about that?” I don’t think any of you actually thought that, but just in case you did: Tesla uses self-learning AI to make Autopilot safer. A few years ago Tesla cars relied mostly on the optical cameras built into the car. But since Version 8.0 of the software, they have been collecting data on how drivers behave at certain GPS locations and using that information for future decisions.
Tesla Model S
The cars are of course made so that if they detect an obstacle, it’s gonna say “Oh hey, I'm braking, that is dangerous, I just saved you, no reward? Okay.” It doesn’t actually say that. The cars can, however, be fooled by a low bridge or an overhead sign, and might even hit the brakes if they’re on Autopilot. That's why Tesla created this list called the “geocoded whitelist”: they gather information from other Tesla cars and see how people behave at that spot… whether they brake or don’t brake. If drivers start slowing down hard as if it was Fast and Furious, it’s gonna take that data into account; if people don’t slow down and just continue safely, it’s gonna take the location and add it to the “geocoded whitelist” thingy.
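To make the idea concrete, here is a toy sketch in Python of how such a whitelist could work: the fleet records whether drivers brake at a given GPS cell, and a location is whitelisted once enough cars have passed through without braking. Everything here (class name, thresholds, data layout) is my own hypothetical illustration; Tesla's actual implementation is not public.

```python
# Toy sketch of a "geocoded whitelist": locations where fleet data shows
# drivers consistently pass through without braking, so the autopilot
# should not emergency-brake there (e.g. under a harmless overhead sign).
# All names and thresholds are hypothetical, not Tesla's real code.

class GeocodedWhitelist:
    def __init__(self, min_passes=100, max_brake_rate=0.01):
        self.passes = {}    # location -> number of recorded pass-throughs
        self.brakes = {}    # location -> number of times drivers braked
        self.min_passes = min_passes
        self.max_brake_rate = max_brake_rate

    def record(self, location, driver_braked):
        """Log one fleet observation at a GPS location (rounded to a grid cell)."""
        self.passes[location] = self.passes.get(location, 0) + 1
        if driver_braked:
            self.brakes[location] = self.brakes.get(location, 0) + 1

    def is_whitelisted(self, location):
        """True once enough drivers passed here and almost none braked."""
        n = self.passes.get(location, 0)
        if n < self.min_passes:
            return False  # not enough data yet: stay cautious
        brake_rate = self.brakes.get(location, 0) / n
        return brake_rate <= self.max_brake_rate


wl = GeocodedWhitelist(min_passes=3, max_brake_rate=0.0)
cell = (59.3293, 18.0686)  # a made-up GPS grid cell
for _ in range(3):
    wl.record(cell, driver_braked=False)
print(wl.is_whitelisted(cell))  # True: drivers never brake here
```

The key design point is that the car stays cautious by default and only suppresses a braking decision once the fleet has provided enough evidence that the location is safe.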
I was watching a clip of a Tesla car predicting a crash before it even happened, but never mind that: I just found a whole compilation of crashes that Tesla cars avoided.
Tesla Autopilot Predicts Crash Compilation [Not Graphic]
We really need to go back to the main question...
AI development can be dangerous if it's in the wrong hands! I heard a great example of what could happen with self-learning, superintelligent AI if we didn't control it. I believe it went something like this:
"When you receive spam e-mail, the most common thing to do is to delete it or block it. The same thing will happen with AI if it starts seeing us as a problem."
To sum it all up, AI development could become a real danger in the future, so we should try to find a stable and safe way to develop superintelligent AI.
With that, I leave you with one last comment... The Terminator movies are amazing!