Setting the Story Straight on AI
By Mike Hinchey, President, International Federation of Information Processing (IFIP)
In the movie Ex Machina (2014), a technologist creates a robot that is so lifelike and self-aware that she can pass for human. When her inventor tasks a young programmer with testing the limits of her capabilities, she surprises both men with her capacity for creativity and deception.
While this makes for an entertaining storyline, the reality is that we’re still a long way from seeing robots with anything approaching true artificial intelligence (AI). Articles in newspapers and on media sites would have you think otherwise, often positioning robots, machine learning systems, and even algorithms as different types of AI. However, while these kinds of systems emerge from AI research and development, they don’t display any real intelligence.
If a system has been programmed to operate automatically, it simply repeats the same series of processes over and over again. Even systems that have been trained to identify visual prompts, or to use pattern matching to write original music, do not exercise any real creativity or original thought.
While the human process of writing music might involve inspiration, creative thinking or experimentation, a computer simply applies a series of patterns that have been judged appealing in order to develop the musical form. It has no real appreciation of the music itself in the way a human does – no ability to feel emotion or even enjoyment. It simply follows an automated process, assembling notes and patterns in a way that is predicted to meet recognised standards for musicality, structure, and harmony.
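The mechanical character of this process can be seen in even a toy sketch of pattern-based composition. The short program below – an illustration only, with a made-up training melody and nothing resembling a production music system – harvests note-to-note transitions from an existing tune and replays them at random. It produces sequences that sound vaguely melodic, yet at no point does it exercise taste, intent, or appreciation:

```python
import random

# A toy illustration of pattern-based composition: the program has no taste;
# it only replays note-to-note transitions harvested from an existing tune.
# The training melody below is invented for illustration, not real repertoire.
training_melody = ["C", "E", "G", "E", "C", "G", "A", "G", "E", "C"]

# Build a first-order Markov table: which notes tend to follow which.
transitions = {}
for a, b in zip(training_melody, training_melody[1:]):
    transitions.setdefault(a, []).append(b)

def compose(start="C", length=8, seed=0):
    """Emit notes by mechanically sampling the learned transitions."""
    rng = random.Random(seed)
    note, tune = start, [start]
    for _ in range(length - 1):
        note = rng.choice(transitions.get(note, training_melody))
        tune.append(note)
    return tune

print(compose())  # a vaguely melodic sequence, produced without any musical intent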
That’s not to say that we won’t see a day, sometime in the next few decades, when computers do display the capacity for musical appreciation – but we’re nowhere near there yet. There are some robots on the market that claim the ability to engage in conversation; however, those I have seen are clearly scripted, offering standard comments as part of a programmed response.
Back in the 1950s, Herb Simon, who is today known as one of the fathers of AI, told a group of students that, over the weekend, he and his colleague Allen Newell had invented a computer that thinks. While Simon’s work on thinking systems laid the foundation for the AI systems that followed and earned him a Turing Award (he also won the Nobel Prize in Economics), it took much longer than he predicted for real AI functionality to be developed.
Simon expected a computer to beat a human at chess in the 1960s, but it was not until 1997 that IBM’s Deep Blue defeated world chess champion Garry Kasparov. Those early expectations actually gave the technology a bad name when it failed to deliver on the predictions that were made. This led the US Government and other major players to significantly reduce their investments in AI research, further delaying advances.
Even though we are much further along the road today, the buzz that is being generated around AI is creating similar unrealistic expectations that will lead to inevitable disappointment.
I was on a teleconference with a group of finance and technology experts recently, and one participant made a number of uninformed claims about AI systems. I challenged his statements and was gratified when several other experts agreed with me. All too often, however, I see similar comments made in the media and online, where they are treated as gospel truth. As technologists, we need to be more accurate in how we talk about AI and challenge inaccuracies when we see them in order to better manage expectations. Just because an application is smart – in that it uses programming and business rules to perform clever and useful functions – does not make it intelligent.
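The distinction is easy to demonstrate. The sketch below shows the kind of scripted keyword lookup that underpins many “smart” assistants – all phrases and replies are invented for illustration, and this is not how any real product is implemented – yet its usefulness involves no comprehension at all:

```python
# A minimal sketch of scripted "assistant" behaviour: keyword lookup plus
# canned responses. Every phrase and reply here is invented for illustration.
RESPONSES = {
    "weather": "It is 18 degrees and cloudy.",  # would come from a lookup, not reasoning
    "time": "It is 3:45 pm.",
    "music": "Playing your favourites playlist.",
}

def scripted_reply(utterance: str) -> str:
    """Return the first canned response whose keyword appears in the input."""
    lowered = utterance.lower()
    for keyword, reply in RESPONSES.items():
        if keyword in lowered:
            return reply
    return "Sorry, I didn't understand that."  # no understanding, just no match

print(scripted_reply("What's the weather like today?"))
```

A system like this can seem responsive and even clever, but swap one keyword for a synonym it has never seen and the illusion collapses.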
To call Apple’s Siri an AI is ridiculous when the app clearly just matches keywords and phrases against a database of geographical and user information, using speech synthesis to feed information back to the user. Anyone who has used the app even a little will be well aware of its limitations. The app will undoubtedly improve over time with increased processing power and programming enhancements, but I suspect we are still many years away from Siri being a fully functioning AI system.

Back in 1950, Alan Turing proposed a test he called “The Imitation Game” (now commonly known as the Turing Test), designed to measure a computer’s ability to demonstrate intelligent behaviour indistinguishable from that of a human.
While Ex Machina’s “Ava” certainly passed the Turing Test in how she interacted with her creator and the young programmer brought in to assess her, I’m not aware of any system in the real world that could do the same.
One day maybe, but not yet and probably not soon.