The Divine and Felonious Nature of Artificial Intelligence
Artificial intelligence (AI) has become commonplace in our everyday lives, and deep learning techniques have enabled computers to beat humans at complex games, drive cars, and even write books. In 1997, IBM's Deep Blue beat Garry Kasparov at chess, marking one of the first milestones in AI defeating humans at complex tasks. The ethical debate over AI continues to be a hot topic.
Many scientists and researchers opposed to AI believe that it is "unnatural, unethical, or too dangerous," while others argue that the technology offers many benefits for society. According to a Pew Research Center poll, most Americans are more worried than excited about artificial intelligence technologies. The poll found that Americans are more than twice as likely to express worry (72%) as enthusiasm (33%) about a future in which robots and computers can do many jobs currently done by people.
Science fiction is filled with stories about how artificial intelligence (AI) evolves; The Terminator and The Matrix are two well-known cultural touchstones. Fictional scenarios often feature a drawn-out battle against malevolent AI or robots with human personalities. But unlike Skynet, which led to a world where humans were either exterminated or enslaved by machines, artificial intelligence is unlikely to create an either-or scenario between humans and technology. One possibility is that humans and artificial intelligence will reach some sort of symbiotic balance. Foundation, the new Apple TV+ series based on Isaac Asimov's famous books, may be an example of this type of equilibrium between humans and machines: it tells the tale of Hari Seldon and his followers, who use the predictive mathematics of psychohistory to chart a course through a coming galactic dark age and preserve scientific knowledge through a galaxy-wide collapse.
In this essay, we'll look at three real-world examples of humans and artificial intelligence interacting. In all three situations, both the humans and the machines have blind spots, which opens the door to the idea that machines will not dominate the future; rather, symbiosis will reign.
AlphaGo
The documentary, "AlphaGo" follows the development of AlphaGo by Google DeepMind and its journey to beat 16-time world champion Lee Sedol at the ancient game Go. The goal of Go is to surround more territory than the opponent by capturing the opponent's stones. This mental game is estimated to be 500 times more complicated than chess, with many possible combinations that would take longer than the age of the universe to explore. In October 2015, AlphaGo's defeated European Go champion Fan Hui. In Go players are rated and ranked by what is called a Dan (grade). Fan Hui was a 2-dan where the highest ranking is a 9-dan. After the match, Fan Hui joined the Google DeepMind team to help train AlphaGo. In March 2016, AlphaGo defeated Lee Sedol in a five-game match. It was the first time that a computer Go program has beaten a 9-dan professional. AlphaGo won 4 of the 5 matches. In a 2016 Wired article called "In Two Moves, AlphaGo and Lee Sedol Redefined the Future" they proclaimed
Although machines are now capable of moments of genius, humans have hardly lost the ability to generate their own.
Move 37
With the 37th move of the match's second game, AlphaGo flustered even the world's top Go masters, Lee Sedol included. To professional Go players, Move 37 looked odd; several 9-dan commentators went as far as to call it a mistake. Sedol, however, knew almost immediately that it was not. He was so rattled by the move that he left the room and took nearly 15 minutes to formulate a response. Fan Hui, by then very familiar with AlphaGo, appreciated the elegance of the unusual maneuver. AlphaGo had calculated a one-in-ten-thousand chance that a human would make that move. The software uses the accumulated knowledge of human players to improve its own play: it understands how humans play, but it can also look at the board from a perspective no human would take. AlphaGo chose Move 37 because it maximized its probability of winning, even if the final margin was only half a point. Sedol, among the greatest Go players on the planet, had a blind spot to that cold calculation until after the move was made. In some ways he was hampered by seeing the game as humans do, in patterns of black and white stones; the colors of the board and stones are not part of AlphaGo's evaluation, which deals only in numbers. It has been said that AlphaGo's Move 37 changed how the 2,500-year-old game will be played from now on.
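To make the two kinds of calculation concrete, here is a minimal sketch in Python of the idea behind Move 37. It assumes, as DeepMind has described publicly, that AlphaGo combines a policy estimate (how likely a human is to play a move) with a value estimate (how likely the move is to lead to a win). Every number, move label, and variable name below is hypothetical, for illustration only; this is not AlphaGo's actual code.

```python
# Illustrative sketch (not AlphaGo's real implementation): each candidate
# move gets a "human prior" from a policy network and a win probability
# from a value network / tree search. AlphaGo plays to maximize the win
# probability, so a move humans would almost never consider can win out.

candidate_moves = {
    # move: (human_prior, win_probability) -- hypothetical numbers
    "conventional approach move":        (0.35, 0.48),
    "safe territorial move":             (0.20, 0.47),
    "fifth-line shoulder hit (Move 37)": (0.0001, 0.52),  # ~1-in-10,000 human prior
}

# A human-mimicking agent would pick the most "human" move...
human_like = max(candidate_moves, key=lambda m: candidate_moves[m][0])

# ...whereas a win-maximizing agent picks the best chance of winning,
# however small the margin (even half a point).
win_maximizing = max(candidate_moves, key=lambda m: candidate_moves[m][1])

print(f"Most human-like move:  {human_like}")
print(f"Win-maximizing move:   {win_maximizing}")
```

Run as written, the toy agent prefers the low-prior move, which is the whole point: playing for win probability rather than human plausibility is what made Move 37 look like a mistake to human eyes.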
Move 78
Lee Sedol lost the first three games of the best-of-five match. The two opponents nevertheless played out all five games, and in Game Four Sedol was determined to win back some pride for himself and for the hundreds of millions of people watching around the world. Midway through the game, at Move 78, Sedol played what has since been called "God's Move": a move AlphaGo had never been trained to expect. AlphaGo's response was disastrous; the program completely fell apart and lost the game. AlphaGo simply did not know a person would be capable of that move. Before the match, AlphaGo had been fed more than 30 million moves from expert players, and much of its training had been refined in games against the 2-dan Fan Hui. It had never been trained against Sedol. AlphaGo was strong enough to defeat the 9-dan Sedol in four of the five games, and on average it would probably have anticipated 999 out of 1,000 of his moves. But not Move 78.
Summary
Move 37 and Move 78 are great examples of the divine nature of both humans and machines. Move 37 was a calculated move that maximized AlphaGo's chance of a half-point victory. Move 78, however, was nowhere in the algorithm's model, and together the two moves show that both humans and machines have blind spots when it comes to unexpected play. Humans still have a flexibility that computers do not, and Move 78 demonstrated it.
The History of Autonomous Vehicles
Today, autonomous vehicles are being developed to help human operators achieve their goals more efficiently and effectively. Autonomous military vehicles are capable of performing tasks with limited or no human input. When military commanders deploy unmanned ground vehicles (UGVs) on the battlefield, they understand that there will always be a degree of uncertainty in the system.
When you send a robot convoy to an area where humans and machines have traditionally been vulnerable, it's easy to understand why there's concern over autonomy. In fact, according to an article written by Edward A. Sanchez for Car and Driver, "One of the most striking revelations from the 2018 fatal Uber crash was that the technology at issue wasn't built to be a self-driving system. The Volvo XC90 sport utility vehicle involved in the accident relied on a series of sensors, lasers, and other hardware from industry leader Velodyne Lidar."
A study conducted by researchers at Rice University and Texas A&M University found that autonomous vehicle collision avoidance systems were not designed with pedestrians in mind. The study revealed that the "automated system was at least twice as likely to collide with a pedestrian, compared to when the person was using traditional human-driven controls," according to an article written by David Stout for The Christian Science Monitor.
At an emerging-technology conference, I once heard a fascinating story about a prototype autonomous vehicle with a specific bug: when the vehicle saw a yellow sign above a right turn, it would frequently make incorrect driving decisions. Errors in artificial intelligence programs are harder to troubleshoot than those in conventional software. Self-driving cars are trained on data, with billions of miles of experience encoded in neural networks that execute more than 320 trillion operations per second, so linear, step-through debugging is out of the question. The vehicle's creators enlisted experts from all over the world to try to solve the issue, but it was a member of the maintenance staff who suggested the answer. Every day, the developers would send out about 25 vehicles, and all of them would return at dusk. It just so happened that the cars needed to make several right turns to get back to the development site. In the training data, the combination of twilight glare on yellow traffic signs across all those right turns had created an unintended correlation.
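A bug like this tends to be found by auditing the training data rather than stepping through code. The sketch below is a hypothetical illustration of that kind of audit: it counts how often a suspect visual condition (dusk glare on a yellow sign) co-occurs with a particular maneuver in logged drives, which is the sort of check that could surface the dusk-commute coincidence. The log format and every field name here are invented for the example.

```python
from collections import Counter

# Hypothetical training-drive log: what the camera saw and what the
# vehicle did. In the story, nearly every "glare on yellow sign" frame
# came from the dusk drive home, which required right turns.
training_frames = [
    {"yellow_sign": True,  "dusk_glare": True,  "maneuver": "right_turn"},
    {"yellow_sign": True,  "dusk_glare": True,  "maneuver": "right_turn"},
    {"yellow_sign": True,  "dusk_glare": False, "maneuver": "straight"},
    {"yellow_sign": False, "dusk_glare": False, "maneuver": "left_turn"},
    # ... in reality, millions of frames
]

# Count maneuvers conditioned on the suspect visual feature.
conditioned = Counter(
    frame["maneuver"]
    for frame in training_frames
    if frame["yellow_sign"] and frame["dusk_glare"]
)
total = sum(conditioned.values())

for maneuver, count in conditioned.items():
    print(f"glare + yellow sign -> {maneuver}: {count / total:.0%}")
# If one maneuver dominates, the model may have learned the coincidence
# (dusk glare on yellow signs) rather than the road itself.
```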
Summary
Although there are many benefits of autonomous vehicles, it's important to remember that these systems are not perfect. Computers do not reason the way humans do, and sometimes they make mistakes with deadly consequences. The twilight glare on yellow traffic signs is an example of another artificial intelligence blind spot, one where humans and machines will have to keep adjusting to each other's strengths.
OpenAI and GPT-3
OpenAI is an artificial intelligence research company founded as a non-profit. Its founders include Elon Musk, Sam Altman, Greg Brockman, and Ilya Sutskever. Before the creation of OpenAI, several of these individuals had already invested in AI firms including Vicarious, Numenta, DeepMind Technologies, and D-Wave Systems.
Researchers at OpenAI aim to develop highly advanced AI in a way that benefits humanity. As such, they study not only how to create advanced artificial intelligence itself, but also how to keep it safe and beneficial on multiple levels. The team looks for ways to build AI that works together with people for the benefit of society, with a specific focus on techniques that are "human-level" or significantly better than humans at "general aspects of intelligence." OpenAI is also working on new AI hardware that would be safe for the future, as well as highly advanced AI technologies that focus on cooperating with humans rather than harming them.
OpenAI has been increasing its pace of research and development over the past few years. In 2019 the company released a text-generation model called GPT-2, which learns to write from examples. OpenAI created it by scaling up the architecture of its previous model, GPT. GPT-2 was designed to generate whole paragraphs: given a prompt, it produces its own writing based on what it has analyzed. Its successor, GPT-3, focuses on generating coherent full sentences and paragraphs, and has been reported to write with nearly 94% accuracy in English and 92% in Chinese. It also wrote a horror story that made the front page of Reddit and garnered over half a million reads. OpenAI and its customers have used GPT-3 to generate new blog posts, essays, and even books.
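For readers curious what "using GPT-3" looks like in practice, text generation at the time of writing is a single call to OpenAI's Completion API through its Python client. This is a minimal sketch, not code from any project mentioned here; the prompt and sampling parameters are illustrative, and it assumes the openai package is installed and an API key is available in the OPENAI_API_KEY environment variable.

```python
import os

import openai  # pip install openai

# The GPT-3 Completion endpoint (circa 2021). "davinci" was the largest
# publicly available GPT-3 engine at the time.
openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Completion.create(
    engine="davinci",
    prompt="Write an opening paragraph for a blog post about "
           "humans and AI working together:",
    max_tokens=150,   # cap the length of the generated continuation
    temperature=0.7,  # higher values -> more creative, less predictable
)

print(response.choices[0].text.strip())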
A recent OpenAI announcement
Our best model is fine-tuned from GPT-3 and generates sensible summaries of entire books, sometimes even matching the average quality of human-written summaries: it achieves a 6/7 rating (similar to the average human-written summary) from humans who have read the book 5% of the time and a 5/7 rating 15% of the time. Our model also achieves state-of-the-art results on the BookSum dataset for book-length summarization.
GPT-3 is currently being used for a number of projects.
GitHub, owned by Microsoft, uses it to power the new Copilot product, which translates natural language into working computer code.
A new tool called AI Writer allows people to correspond with simulated historical figures via email.
The Guardian published an article, written by GPT-3, arguing that AI is harmless to human beings. The model was fed a few ideas and produced eight different essays, which editors ultimately merged into one article.
AI Dungeon uses it to generate text-based adventure games; a minimal sketch of that pattern appears after this list.
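Reduced to its essence, the AI Dungeon pattern is a loop: append the player's action to a running story, ask the model to continue the story, and repeat. The toy sketch below is emphatically not AI Dungeon's actual code; it reuses the hypothetical Completion call from the earlier sketch, and the opening line, turn count, and stop sequence are all invented for illustration.

```python
import os

import openai  # pip install openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# A toy text-adventure loop in the spirit of AI Dungeon: the accumulated
# story is the prompt, so the model "remembers" what has happened so far.
story = "You wake in a ruined library on the edge of a dying empire.\n"

for _ in range(3):  # a few turns, for illustration
    action = input("> What do you do? ")
    story += f"\n> {action}\n"
    response = openai.Completion.create(
        engine="davinci",
        prompt=story,
        max_tokens=80,
        temperature=0.8,
        stop=["\n>"],  # stop before inventing the player's next action
    )
    narration = response.choices[0].text.strip()
    story += narration + "\n"
    print(narration)
```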
Summary
This weekend I attempted to write a brief 1,800-word blog post about a well-known historical figure using a GPT-3 tool. I was able to produce a publication-ready article in about 45 minutes. Without the tool, the same task would have taken roughly three hours, and the result still wouldn't have been publishable; I would have had to collaborate with one of my editors to polish it until I was satisfied it was ready. The 45-minute version was finished as written, with no copyeditor needed. A GPT-3 tool also assisted with this essay, saving me about 50% of the writing time; the smaller savings here reflect the extra tinkering, research, and fact-checking this piece required to assure its accuracy. If you don't give a GPT-3 model adequate context, it can produce a lot of nonsense, yet on occasion it is capable of producing 500 perfect, to-the-point words. GPT-3 is a deep neural network trained on a corpus said to cover roughly 10% of the internet. The bottom line is that this is another great example of artificial intelligence and human excellence working together to create outcomes greater than the sum of their parts.