DeepMind is a company devoted to creating artificial intelligence based on neural networks, software systems loosely modelled on the networks of neurons in the human brain.
DeepMind's aim is to create an AI whose thought process is a little closer to a human's, so it applied these neural networks in the hope that the AI could work similarly to a human brain.
"We’re on a scientific mission to push the boundaries of AI, developing programs that can learn to solve any complex problem without needing to be taught how." (DeepMind website, 'About us' page)
The company was established in 2010 and was acquired in 2014 by Google (now part of the conglomerate Alphabet Inc.).
The company has come far in its quest to create such an AI, testing it in many game environments to see how it learns to play and how it behaves in those environments.
"By implementing our research in the field of games, a useful training ground, we were able to create a single program that taught itself how to play and win at 49 completely different Atari titles." (DeepMind website, 'About us' page)
These game tests show that the DeepMind AI can teach itself to play these virtual games, much as many of us learned to play games in early childhood.
We all remember first playing video games on our old PlayStation, Sega, or Atari, or whichever console we had first; we learned by trying and failing, and we enjoyed every moment. That an AI can now do the same is an amazing advancement in technology, and perhaps even a step towards understanding how we ourselves behave and learn. It can also be alarming: we have all watched too many science-fiction movies not to know that an intelligent machine isn't always a good thing for humanity.
DeepMind's AI wasn't tested only on these old video games; it was also put to the test in games created specifically to examine whether its way of thinking is similar to a human's.
The first test was a simple scenario: collecting apples from a pile.
The researchers created two versions of the AI, blue and red, to compete against each other and see which would collect the most "apples" (green pixels in the video). It wasn't a simple collection race, however, because that would merely test which was fastest. The real test came in when each version was given a 'laser gun': if one of the two was tagged twice in a row, it was temporarily removed from the game, giving the attacker more time to collect apples. This variable adds a new dimension to the test, because now the thought process and "humanity" of the AI could be examined.
By running the test thousands of times while varying the number of apples, the researchers found that the two versions of the AI behaved differently depending on how many apples (resources) were available. When apples were plentiful, the two versions were happy to split them, since the difference between their shares was negligible, and they chose to coexist peacefully. It is like splitting a billion pounds with another person: half a billion is far more than enough to survive.
When the number of apples was smaller, however, one of the versions would always "tag" the other and take the apples for itself.
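The dynamic above can be sketched as a toy model. All the numbers below (episode length, aiming cost, freeze-out time) are made up for illustration and are not DeepMind's actual parameters; the point is only that tagging has an opportunity cost, so it pays off when apples are scarce but not when they are plentiful.

```python
# Toy model of the "Gathering" game: two agents draw from a shared pool
# of apples, one per step. Tagging costs the tagger `aim_cost` steps of
# collecting, then freezes the opponent for `timeout` steps.

def run(n_apples, steps=50, tag=False, aim_cost=5, timeout=25):
    """Return how many apples the (possibly tagging) agent collects."""
    pool, me, other = n_apples, 0, 0
    for t in range(steps):
        # The tagger spends the first `aim_cost` steps firing instead of
        # collecting; the tagged opponent is frozen for `timeout` steps after.
        me_active = not (tag and t < aim_cost)
        other_active = not (tag and aim_cost <= t < aim_cost + timeout)
        if me_active and pool > 0:
            pool -= 1
            me += 1
        if other_active and pool > 0:
            pool -= 1
            other += 1
    return me

# Abundant apples: tagging only wastes collecting time.
print(run(1000, tag=False), run(1000, tag=True))   # 50 45
# Scarce apples: removing the opponent wins a bigger share of the pool.
print(run(40, tag=False), run(40, tag=True))       # 20 30
```

Under abundance the aggressive agent ends up with fewer apples than the peaceful one, while under scarcity aggression wins, mirroring the behaviour the researchers observed.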
Imagine a zombie apocalypse where resources like food and shelter are scarce: wouldn't most people fight to the death over them, for their own survival or that of their loved ones? Of course they would. Self-preservation is an instinct every human has. But when a computer, one that cannot (yet?) process emotion, has the same instinct and thinks to preserve itself over others, is that the same as a human's thought process?
The second game designed to test the AI was a little different; its goal wasn't simply to have two versions of the AI compete against each other.
The game was called Wolfpack. The two versions of the AI were put in a situation where there was one prey they needed to hunt. Here the rules were different: when the prey was hunted down by either AI, all AIs in the "hunt zone" would get a reward.
A lone wolf can capture the prey, but at the risk of losing the carcass to scavengers. When the two wolves capture the prey together, however, they can better protect the carcass and so receive a higher reward.
This shows that the two AIs can recognise the benefits of cooperation.
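The reward rule just described can be sketched in a few lines. The capture radius, distance metric, and reward values below are assumptions for illustration, not DeepMind's actual parameters.

```python
# Hypothetical sketch of the Wolfpack reward rule: every wolf inside the
# capture zone when the prey is caught gets a reward, and the per-wolf
# reward grows with pack size (a lone wolf risks losing the carcass to
# scavengers, so its expected reward is lower).

LONE_REWARD = 1.0   # assumed expected value of a solo capture, after scavenging risk
PACK_BONUS = 2.0    # assumed extra reward per additional wolf guarding the carcass

def capture_rewards(wolves, prey, capture_radius=2):
    """Reward every wolf within `capture_radius` (Manhattan distance) of the prey."""
    in_zone = [w for w in wolves
               if abs(w[0] - prey[0]) + abs(w[1] - prey[1]) <= capture_radius]
    # Reward per wolf grows with pack size, so cooperating beats hunting alone.
    reward = LONE_REWARD + PACK_BONUS * (len(in_zone) - 1)
    return {w: reward for w in in_zone}

print(capture_rewards([(0, 0)], (0, 1)))                  # {(0, 0): 1.0}
print(capture_rewards([(0, 0), (1, 1), (9, 9)], (0, 1)))  # two wolves in range get 3.0 each
```

Because each wolf's reward is higher when another wolf is in the zone, the incentive structure itself pushes the agents towards cooperation.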
These two game environments are good at mimicking situations where ordinary humans can face a dilemma. Watching post-apocalyptic shows and movies, which of us hasn't thought about what we would do in the main character's shoes? We have all wondered whether we would be logical and selfish to preserve our own lives, or keep some sentimentality of being human and choose to cooperate.
DeepMind's AI took a rather logical approach to the dilemmas it was presented with, as was expected. What was impressive wasn't the outcome alone; it was the AI realising for itself how the games' rules affected it and acting accordingly.
Seeing such advancement in artificial intelligence, is it a good thing or a bad thing? Advancement in intelligent machines can certainly mean amazing things for humanity; in fact, DeepMind works with the UK's NHS to test applications of its technology to healthcare.
However, on the flip side, there's always the question of whether AI would turn on us. Would machines decide that humanity is a detriment to the Earth and natural life and exterminate us, as so many movies show? Or perhaps enslave us?
With so many advancements in technology every day, these questions may be answered sooner than any of us had hoped.
Let's just hope we create a switch off button for when things turn bad.