What does the future look like to you? Perhaps you see quantum computing as the next big leap or envision a future powered by cold fusion. Maybe you think Artificial Intelligence will be the biggest game changer of all, but what you probably don’t have in mind is the most dated piece of hardware we have, a processor 100 million years old: the human brain.
Computers and brains already have a lot in common: both take in external information and produce an output. There are also plenty of tasks that either can accomplish, like arithmetic and, increasingly, pattern recognition. But throughout our history with machines, one fundamental difference has separated us: whereas computers use logic gates, transistors, and diodes, we use organic neurons.
Things began to change back in 1971, when circuit theorist Leon Chua proposed a new basis for computation built on the familiar resistor, an electrical component which "resists" a current of electricity and limits its flow. Chua conceived of a component which would "remember" the last current it received and change its resistance accordingly, a so-called "memristor". The result would be a dynamic component which could change over time, opening up the potential for computer chips that operate on an analog system similar to the human brain rather than the conventional binary system used in most modern computing.
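To make the "memory" in memristor concrete, here is a minimal toy model, not any real device physics: we simply assume the component's resistance slides between an "on" and "off" limit according to how much current has already passed through it. All names and constants are illustrative.

```python
# A toy memristor: its resistance depends on the history of current
# that has flowed through it, so it "remembers" past inputs.
class Memristor:
    def __init__(self, r_on=100.0, r_off=16000.0):
        self.r_on = r_on    # resistance when fully "on" (ohms)
        self.r_off = r_off  # resistance when fully "off" (ohms)
        self.state = 0.0    # internal state in [0, 1], set by charge history

    def resistance(self):
        # Blend between the two limits according to the internal state.
        return self.r_off + (self.r_on - self.r_off) * self.state

    def apply_current(self, current, dt=1.0):
        # Passing charge nudges the internal state, and the device keeps
        # that state after the current stops -- that is the "memory".
        self.state = min(1.0, max(0.0, self.state + 0.001 * current * dt))
        return self.resistance()

m = Memristor()
before = m.resistance()
for _ in range(100):
    m.apply_current(5.0)   # repeated current pulses lower the resistance
after = m.resistance()
```

Unlike an ordinary resistor, which forgets everything the instant the current stops, `after` here is lower than `before` and stays that way, which is exactly the analog, history-dependent behavior the article describes.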
Today these memristors are being used in a cutting-edge field called neuromorphic engineering which aims to create computational systems modeled after the human brain. Memristors and other components are used to model the function of an individual neuron by copying the neuron’s ability to strengthen and weaken its connections, the same process which is responsible for learning and memory. Researchers and tech companies from all around the world are producing and trialing their own neuromorphic chips. Although none of them are perfect, they all seem promising.
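The "strengthen and weaken its connections" behavior these chips copy can be sketched with a toy Hebbian learning rule ("neurons that fire together wire together"). This is a simplified illustration, not any particular chip's actual learning rule, and the function name and constants are made up for the example.

```python
# Toy Hebbian rule: a synapse strengthens when the two neurons it
# connects are active at the same time, and slowly decays otherwise.
def hebbian_update(weight, pre_active, post_active, lr=0.1, decay=0.01):
    if pre_active and post_active:
        weight += lr * (1.0 - weight)   # strengthen toward a ceiling of 1
    else:
        weight -= decay * weight        # gradual forgetting
    return weight

w = 0.2
for _ in range(20):
    # Repeated co-activation drives the connection toward its maximum.
    w = hebbian_update(w, pre_active=True, post_active=True)
```

In a neuromorphic chip, an update like this would happen in the physics of the hardware itself (for instance, in a memristor's changing resistance) rather than in software, which is where the efficiency gains come from.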
Needless to say, this isn’t just some university undergrad’s pet project. IBM, Intel, and several other tech companies have already begun R&D, each with its own neuromorphic architecture, and projected industry growth looks strong. Most promising of all is how much more efficient these chips can be for running A.I. and neural networks.
A good example of how inefficient our older A.I. systems are is the famous Go match between Google’s AlphaGo and grandmaster Go player Lee Sedol. When AlphaGo managed to beat the human world champion at his own game, it did so using a very inefficient computational system. While AlphaGo’s human opponent was operating at about 50 watts, roughly the power consumption of a light bulb, the A.I. was drawing around 1 million watts for the same task.
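The gap between those two figures is worth spelling out as a quick back-of-envelope calculation:

```python
# Back-of-envelope comparison of the two figures from the match.
brain_watts = 50             # rough power draw of a human brain
alphago_watts = 1_000_000    # reported draw of the AlphaGo system
ratio = alphago_watts / brain_watts
print(f"AlphaGo drew roughly {ratio:,.0f}x the power of its human opponent")
```

That is a factor of 20,000: the machine won, but at an enormous energy cost compared with the hardware between Lee Sedol’s ears.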
Neuromorphic computer chips, on the other hand, can be even more efficient than the human brain while simultaneously providing faster computation. Take Tsinghua University’s latest chip, which can run for a year on only eight AA batteries, or a new chip from the National Institute of Standards and Technology (NIST) in Boulder, Colorado. In addition to being more efficient, neuromorphic chips could potentially have neural networks built directly into their hardware, making them a very attractive alternative to running A.I. systems as software on conventional hardware.
But wait, there’s more! These chips also offer a viable way to shrink our hardware, meaning that computers which would once have taken up a whole desk, or even a whole room, may soon fit on a single chip. This goal was recently expressed by MIT researcher Jeehwan Kim, who remarked:
“Ultimately, we want a chip as big as a fingernail to replace one big supercomputer … This opens a stepping stone to produce real artificial hardware.”