Artificial intelligence has been a goal of humankind since the dawn of the computer age. With each passing year we depend on computers more, and that dependence drives us to build ever more capable machines. The logical endpoint is the replacement of human intervention altogether. Artificial intelligence is not just a notion of what we would like to have; it has become part of who we are and of our progress into the next technological age.
Defining Artificial Intelligence
The actual definition of artificial intelligence is still in question. After all, what makes humans intelligent? Some would say it is our ability to reason. Computers meet that definition in many ways: they are built on logic and follow a strict set of rules. However, they are limited by that very capacity, because they cannot go beyond the rules they are given. A better starting point comes from John McCarthy, who coined the term "artificial intelligence" in 1955. In that tradition, intelligence can be defined as "a system that has the ability to gather data from the environment and can make decisions that lead to a better survival rate."
That definition is flawed, however. We do not create machines for mere survival. It was derived by asking what makes humans intelligent, and that meaning does not necessarily transfer to what makes a machine intelligent. We create machines to perform functions that would otherwise be too costly or dangerous for humans; we also create them for recreation and for serving us in ways that make our lives easier. So it would be better to consider a robot intelligent if it can gather data from its given environment and make decisions that support its given purpose or function.
The logic behind the new definition is simple. Humans specialize in their respective jobs and functions. A professional tennis player would not make a good soldier, yet we consider both the tennis player and the soldier intelligent, even though each lacks decision-making skill outside their own domain. The same can be said of robots: a robot can be considered intelligent if it can learn from its environment without a human teaching it those skills.
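The working definition above, an agent that gathers data from its environment and makes decisions that support its assigned purpose, can be sketched in a few lines of code. This is a minimal illustration only; the thermostat scenario and all names here are invented for the example, not drawn from any real system.

```python
class ThermostatAgent:
    """An agent whose purpose is to hold a room near a target temperature."""

    def __init__(self, target: float):
        self.target = target

    def sense(self, environment: dict) -> float:
        # Gather data from the environment.
        return environment["temperature"]

    def decide(self, reading: float) -> str:
        # Make a decision that supports the agent's assigned purpose.
        if reading < self.target - 1.0:
            return "heat"
        if reading > self.target + 1.0:
            return "cool"
        return "idle"


agent = ThermostatAgent(target=21.0)
print(agent.decide(agent.sense({"temperature": 18.5})))  # heat
print(agent.decide(agent.sense({"temperature": 21.3})))  # idle
```

Even this trivial agent satisfies the definition's two clauses, sensing and purpose-driven decision-making, while obviously falling far short of what we would call intelligence, which is exactly why the learning requirement in the paragraph above matters.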
When we consider artificial intelligence, we often have a flawed picture of it, something out of AI: Artificial Intelligence or Bicentennial Man. Those films depict artificial intelligence embodied in a robot, but actual artificial intelligence does not require one. We display it that way because we view artificial intelligence as an extension of ourselves, and so we project it in a form familiar to us: the human form. It is entirely possible, however, for artificial intelligence to be purely virtual.
In the 1980s, science seemed to be on the cusp of a major breakthrough in artificial experience. Around that time, Charles Philip Lecht observed, "What the lever was to the arm, the computer is to the brain." The remark carries a profound implication: the human body can be viewed as a machine, a bio-mechanical one running on electrical impulses from a central processing unit we call the brain. In simpler terms, given the knowledge and resources, we could duplicate it.
Lecht was a mathematician and businessman who founded Advanced Computer Techniques in New York City in 1962. Born in Providence, Rhode Island and educated at the Jesuit Seattle University and Purdue, Lecht worked at IBM and MIT's Lincoln Labs before starting Advanced Computer Techniques. He grew depressed watching a spirited company change into a "bureaucracy of yuppie nincompoops" and in 1982 left to found Lecht Sciences, a think tank and "creative" lab. In 1985, he moved to Tokyo to open Lecht Sciences Japan in a country, Lecht said, that "is long on brains and short on lawyers." Lecht's views on artificial experience grew out of his ruminations on what would succeed artificial intelligence. While Yale's AI guru, Roger Schank, envisioned artificial intelligence primarily as an easier way to gather or retrieve information in a database, Lecht believed the discipline could be used to create as well.
His company started out creating compilers for the languages prevalent at the time: FORTRAN, COBOL, and Pascal. While Lecht made considerable contributions to the computing field, his major contributions to artificial experience and intelligence were conceptual. "In the next century," contended Lecht, "we will be able to play out scenarios of our own imagining and, having done so, turn them off without physical risk or harm. As surely as gravity exercises its tow on Earth, so the artificial experience will tow our minds into a world of visibility and vision." A statement alone may not seem like much of an advancement, but it sparks vision, and vision is what moves us from thought to action. In a way, he introduced us to the concept of artificial reality.
The key to creating artificial experience is the union of the computer and the hologram. By combining powerful artificial intelligence (AI) software with three-dimensional holographic projections, it will be possible, Lecht believed, to generate manufactured images of our own choosing. This "virtual reality," as Lecht referred to it, will unfold gradually. First, programmers will learn to project two-dimensional holograms onto giant screens that cover the walls, ceiling, and floor of a room. By the year 2000, Lecht anticipated "three-dimensional projections so real that our senses will believe the holograms to be the 'real' environment."
Lecht’s prediction was not far off; in fact, it was nearly dead on. The technology exists today. One example, called Omote, performs real-time face tracking and projection mapping. In one demonstration video, it maps a subject’s face and projects an image that makes her face appear robotic; the projection moves as she moves her head, and the result looks real. Rather than merely projecting an image on a wall, technology now exists that projects onto human skin. Another example is the Cicret Bracelet, which projects a tablet interface onto the skin, letting wearers use their forearm as a screen.
Artificial Intelligence Fields Advance
Lecht believed that artificial intelligence research is moving too slowly for the futurist. "Most researchers," he claimed, "labor under the delusion that they know what real intelligence is, although they cannot provide us with a definition or name an object that possesses it. Deeds or actions, the 'output' of human beings, direct us to conclude whether or not they are intelligent. But intelligence hasn't kept us from making war, a decidedly unintelligent thing to do. We need experience, either real or artificial, as well as intelligence. Humans do learn something after a real war is over, but by then, it's too late. Yet perhaps the artificial experience of a war, a simulation so real that we believe it is the actual event, would give us a profound understanding of it and would help to change our behavior." Flight simulators, believed Lecht, are the closest thing to artificial experience in existence.
Artificial intelligence has also advanced through research in war. Simulations give us the ability to experience things within the confines of a virtual environment. Nuclear detonations in the American West marked the land with radioactivity, and it was no longer practical to test weapons that way. That is where virtual reality and intelligence come into play. In 1996, CNN reported that New Mexico’s Sandia National Laboratories had created a virtual environment for weapons testing, to evaluate aging nuclear devices left over from the Cold War. By detonating those bombs virtually, engineers could predict their power and blast radius, and then use that data to devise safe ways to store the weapons.
Looking back, the revised definition and the evolution of artificial intelligence still hold. We use artificial intelligence and virtual reality to supplement what would otherwise be costly or dangerous for human beings. The technology advances precisely because it does not yet fully meet those goals; out of necessity, we push it to where we need it to be.
Artificial intelligence has advanced for another reason: gaming. Gaming is now a multibillion-dollar industry, and the reason it needs better AI is simple: characters that can interact with the player in a virtual world. Consider Fallout 4, a recently released game in which players roam a vast open world, going wherever they want and interacting with hundreds of characters. Artificial intelligence makes those interactions more real and believable. Once again, the demand for artificial intelligence is growing.
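The character interactions described above are often built on finite-state machines, one of the oldest and most common game AI techniques. The sketch below is a toy illustration of that general idea; the states and player actions are invented for the example and are not taken from Fallout 4 or any real game engine.

```python
# A toy finite-state NPC: the character's behavior is a state, and
# player actions trigger transitions between states.
NPC_TRANSITIONS = {
    ("friendly", "greet"):  "friendly",
    ("friendly", "trade"):  "trading",
    ("friendly", "attack"): "hostile",
    ("trading",  "leave"):  "friendly",
    ("hostile",  "calm"):   "friendly",
}


def npc_react(state: str, player_action: str) -> str:
    # Fall back to the current state when no transition is defined,
    # so the NPC behaves consistently on unrecognized input.
    return NPC_TRANSITIONS.get((state, player_action), state)


state = "friendly"
for action in ["greet", "trade", "leave", "attack"]:
    state = npc_react(state, action)
print(state)  # hostile
```

Because each reaction depends on accumulated state rather than on the last input alone, even this simple structure makes an NPC feel more consistent and believable, which is exactly the quality the article says players now demand.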
Lecht’s Future Hopes
Lecht's hope is that once the artificial experience has been developed, the "real" will be exchanged for the "virtual," and humans will enter a new evolutionary stage in which ownership—the drive to possess the irreplaceable—will dissolve. "When something like a painting by Rubens or a sculpture by Henry Moore is delivered by a holograph," he says, "we can appreciate it purely. When we tire of it, we can create another holographic objet d'art that pleases us." In simpler terms, works of art are priceless because there is normally only one true piece. The downside to this is that fewer people are able to really experience the art. Through virtual reality, people are able to experience more. The fight over owning those pieces may decrease because more people are able to fully experience the work.
While this vision may seem bad for artists, the technology has advanced at a remarkable rate recently. The Oculus Rift, released in 2016, set a new standard in virtual reality. What started as a Kickstarter project has grown into an innovative wave of VR hardware. As the first mainstream personal VR headset, it offers a 90 Hz refresh rate, high enough for motion to appear smooth to the average eye.
The applications of the Rift go far beyond gaming. Developers are currently showcasing an application that allows people to watch movies in a virtual environment: a simulated theater in which people can connect with one another, watch the same movie together, and see and interact with each other through avatars. Other uses of virtual reality include training soldiers. Much the same way weapons are tested virtually, soldiers can be trained virtually, giving them realistic experience without the danger of live ammunition.
Where Virtual Reality Meets Artificial Intelligence
We have now discussed both virtual reality and artificial intelligence, so where does virtual reality meet or support artificial intelligence? As noted earlier, artificial intelligence does not require a physical representation: robots are not necessary, and an artificial intelligence can exist in purely virtual form. This is where virtual reality comes in. To interact fully with such an intelligence, a virtual reality may be required. Another way to look at it is as an interface: virtual reality gives humans an interface to an artificial intelligence. As the technology of artificial intelligence grows, we can expect the technology of virtual reality to advance alongside it.