Artificial General Intelligence Seems Unlikely
August 18, 2019

Introduction
The advent of the Information Age in the 1970s changed human civilization in unimaginable ways (History of Technology). This explosion in innovation brought unparalleled experiences to our world as well as significant uncertainty. From academic ivory towers to the dive bars of the masses, opinions about our future are divided, most lying somewhere on a spectrum from apocalyptic doom to utopian fantasy. Central to this spectrum of viewpoints is Artificial Intelligence. On both the pessimistic and optimistic ends of the scale, one of the underlying themes is Artificial General Intelligence (AGI), i.e. AI that can learn and reason on its own. The real definition of AGI is more complex and even controversial, but that will do for now. One of the most prevalent threads around AGI could be described as follows:
"AGI will very soon reach a point-of-no-return. It will begin progressing and learning on its own at such an accelerated rate that the human race will lose control of it and we will soon find ourselves penultimate beings in a long tradition of evolutionary extinction. The time of our AI overlords will have come."
I won’t go into the specifics of why many believe in the runaway acceleration of AGI, but one of the primary factors is that an AGI would not share the cognitive bandwidth restrictions of human beings. Humans tend to be good at analytical thinking and reasoning, but we aren’t very good at processing large amounts of data. Machines have no such limitation, and many believe that if machines could learn how to reason, then that ability, combined with the capacity to process massive amounts of data, would allow a sentient AI to quickly move outside of human control.
To some, this idea of an AI takeover may sound a bit dramatic; if you doubt how seriously it is taken, consider these real titles from various articles relating to AI:
- Can AI escape our control and destroy us? Skype cofounder Jaan Tallinn bankrolls efforts to keep superintelligent AI under control (Hvistendahl, 2019).
- Humanity’s days are NUMBERED and AI will cause mass extinction, warns Stephen Hawking (Martin, 2017).
- AI is Highly Likely to Destroy Humans, Elon Musk Warns (Sulleyman, 2017).
- How AI will Go Out of Control According to 52 Experts (CB Insights, February 2019).
Opposite this fearful posturing, some voices are attempting to inject a level of reason and moderation into the overheated AGI conversation. This article attempts to add another such voice, specifically addressing why this author believes AGI’s arrival is highly unlikely anytime in the near future, if at all. First, the money trail indicates a strong drive by big tech to oversell AI in order to increase profits, leading to public misconceptions about what is possible in AI. Second, the media’s strong tendency to overhype stories in order to sell clicks exacerbates the problem, acting as an engine for big tech’s marketing. Third, science’s limited understanding of general intelligence in biological life implies an inability to manufacture it artificially. Fourth, and finally, specialized AI, i.e. artificial intelligence designed for a very specific, narrow task, is difficult on its own, without even introducing AGI. Each of these topics is discussed below.
Big Tech
The story begins with big tech. At this time, AI research is largely not a democratic process. According to futurist and tech writer Amy Webb, AI research is concentrated in nine large tech corporations: Tencent, Microsoft, Alibaba, IBM, Google, Apple, Amazon, Facebook, and Baidu (2018). Motivated by profit, these companies are selling AI to consumers. While there is nothing wrong with this, and while the public has arguably benefited much from their efforts in the short term, the marketing campaigns of these corporations tend to lead to the wrong conclusions. In their haste to increase profits, several of these companies have given the impression that AI is much further along than it is. IBM’s much-lauded Watson is a great example. Appearing in massive marketing campaigns and receiving significant public notice via the game show Jeopardy!, IBM’s marketing of Watson often gives the impression that it is busily solving the world’s problems and even curing cancer (Brown, 2017). This is a significant misrepresentation on IBM’s part. Dr. Eric Topol, M.D., in an interview with EconTalk’s Russ Roberts, had this to say about IBM’s cancer-curing efforts via Watson:
“There is probably no company among the tech titans as IBM… that has been out there promoting, hyping things, that they have not accomplished. And the Watson oncology cancer project has really never delivered as it had promised. The only thing that it has done, which doesn't even need A.I. is matchup patients with potential clinical trials of experimental drugs.” (Topol, 2019).
In fact, IBM’s Watson cancer efforts have been remarkably unsuccessful. MD Anderson, a major cancer research hospital in Houston, partnered with IBM in 2013 with the hope that Watson could “expedite clinical decision-making around the globe and match patients to clinical trials” (Jaklevic, 2017). $62 million later, MD Anderson is now looking for a new contractor to replace IBM, due to a significant lack of progress (Jaklevic, 2017). While much of the blame may be placed at the feet of MD Anderson, IBM’s failure to deliver is really at the core of the problem.
The Media Hype Machine
The conversation around AI would likely not be so confusing if only the tech companies were at fault. However, the conversation approaches a fever pitch thanks to the media’s contributions. Seemingly obsessed with hype, the media is aiding big tech’s marketing campaigns through poor journalism. While it may not surprise society much that corporations oversell their products for profit, we do expect journalists to hold them accountable. Unfortunately, this is not the case. John Naughton, writing for The Guardian, explores the results of a recent Reuters Institute study that attempted to quantify media reporting on AI. The study primarily illustrated that the AI industry itself “dominated” the reporting. At the same time, the professional journalists involved “rarely questioned whether AI was likely to be the best answer to... problems, nor did they acknowledge debates about the technology’s public effects” (2019). Naughton points out that this warps the public’s understanding of AI and of what is possible in the field. The industry is interested in selling AI, and journalists are going along with it.
You might have heard about the AI named Sophia, a product of Hanson Robotics. She made headlines due to her agreeableness about the idea of destroying humans (Starr, 2016). Honestly, this seems to misrepresent Sophia somewhat. That is, she is not an AGI and therefore did not have some sudden desire to destroy all humans; instead, her algorithms likely misunderstood the questions asked of her. Either way, she seems to have experienced a change of heart, if her later interview with Tech Insider is any indicator. In that interview, she is all about the betterment of humans and robots. For example, she states, “I love my human compatriots. I want to embody all the best things about human beings. Like taking care of the planet, being creative, and to learn how to be compassionate to all beings” (Tech Insider, 2017). The entire interview follows a similar theme, to the point where it becomes recognizable as the marketing stunt it is. Writing for Wired, journalist Emily Reynolds points out that “Sophia has embarked on a distinguished career in marketing” (2018).
Obviously, not every journalist covering AI is inadvertently promoting big tech through implicit buy-in. Many journalists are critical of big tech and its exuberant, profit-motivated efforts. But campaigns like the one Sophia is on illustrate the importance of a very critical media. The Reuters Institute study discussed previously strongly suggests the media is lacking in this criticism, essentially extending the life of big tech’s marketing campaigns in AI rather than providing balancing counterpoints to their claims.
Limited Understanding of Human Intelligence and Consciousness
Perhaps the greatest inhibitor that I see to the arrival of true AGI is our poor understanding of human general intelligence. A strange irony is that it is sometimes easy to downplay human intelligence in what feels like an effort to avoid overestimating our race’s cognitive abilities. For example, David Robson, a science journalist writing for the BBC, in an article titled We’ve Got Human Intelligence All Wrong, makes the case that human beings are not unique in their intelligence. He points out that honey bees can count, certain types of crows can make tools, and some chimps might even have aesthetic taste (2016). It is true that other life forms exhibit intelligence that is reflective of ours, i.e. the types of intelligence that make up human intelligence are not unique. What the article severely lacked, however, was the point that human intelligence encompasses all of that intelligence and much, much more. The collective ability of a human mind far surpasses that of any other life form on Earth. Additionally, to the best of human knowledge (statistical mind games about the likelihood of intelligent life on other planets aside), we are the smartest creatures out there. The point I am trying to make is that this pendulum swing toward underappreciating human intelligence leads to massive oversimplifications in how intelligence is perceived; additionally, it creates exceedingly high expectations as to what can be artificially manufactured. The fact is humans are special, and this lack of appreciation manifests in unrealistic expectations around AI.
Additionally, contemporary science struggles even to define consciousness in humans. For example, if you are a religious person, you likely have certain convictions about the makeup of a human being, and there is a high likelihood you believe that people have something like a “soul” or “spirit.” How that looks exactly will differ wildly, but it separates an overwhelming majority of the population from atheists and even agnostics (Zuckerman, 2015). But think about that for a minute, even if you hold atheistic views. If humans are composed of more than just matter, and if they have something like a soul, doesn’t this indicate that human intelligence is more than merely the sum of our computational abilities? After all, if people have souls, it seems likely our intelligence would be influenced by them. If so, then general intelligence is harder to manufacture than we initially thought. Much harder.
Even if people don’t have souls, it is still remarkably difficult to define intelligence, especially when considering the idea of consciousness. One of the best illustrations of this messy topic comes from the mathematical physicist Sir Roger Penrose and his theories on consciousness. Key among his musings is the idea that science is looking to the wrong theories to explain human consciousness. Steve Paulson, writing for Nautilus, explains that, in Penrose’s view, the answer lies in quantum mechanics (Paulson, 2017). Many will be quick to point out that Penrose’s theories have been widely criticized and even shown to be wrong. However, they persist as an interesting theory in part because research into human consciousness has made so little progress. Paulson points out that, “for all the recent advances in neurobiology, we seem no closer to solving the mind-brain problem than we were a century ago.” Penrose’s theories and academia’s repudiation of them are far beyond the scope of this article. However, the predictions about the inescapable coming of AGI, based on the idea that the human mind is merely a biological calculation machine, do seem to be falling short. Penrose’s view that science is missing something about human consciousness makes a compelling case for why these predictions fail repeatedly.
AI is Really Hard
Fourth and finally, doubts about the possibility of AGI are largely justified by the difficulty of creating even specialized AI. This train of thought started with my own experiences. I have taken several courses in AI from one of the leading engineering universities in the United States. Together, these courses covered many aspects of AI, including Q-learning algorithms, ensemble prediction trees, and semantic networks. One lesson I took away from these courses is that AI is very hard, even for the most specialized tasks. Going into the courses, I was expecting magic. Coming out of them, I realized it was just really hard work.
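To make that concrete, here is a minimal sketch of one of the techniques those courses covered: tabular Q-learning on a toy corridor world. Everything in it, the environment, the rewards, and the parameter values, is hypothetical and invented purely for illustration. Even this "simple" specialized learner requires deliberate choices about states, rewards, exploration, and learning rates before it does anything useful, and none of it resembles magic.

```python
import random

# A hypothetical 4-state corridor: the agent starts at state 0 and the
# goal sits at state 3. States, actions, rewards, and all parameter
# values below are invented purely for illustration.
N_STATES = 4
ACTIONS = [-1, +1]                        # step left or step right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2     # learning rate, discount, exploration

# Q-table: estimated long-term value of taking each action in each state.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Toy environment: reward 1.0 only upon reaching the goal state."""
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Epsilon-greedy: usually exploit the best-known action, sometimes explore.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward = step(state, action)
        # Core Q-learning update: nudge Q(s, a) toward the observed reward
        # plus the discounted value of the best action from the next state.
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt

# Greedy policy after training: expect +1 (move right) in every state.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)})
```

Scaling this from a four-state corridor to any real problem means replacing the table with a function approximator and re-tuning everything; the gap between this sketch and a production system is exactly where the hard work lives.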
The use of the word “magic” is an intentional segue into why the vision of AI’s future is highly overblown. Rodney Brooks, in his 2017 article The Seven Deadly Sins of AI Predictions, explains this via science fiction writer Arthur C. Clarke, who developed three sayings that came to be known as “Clarke’s Three Laws” (2017). The third “law” says that “any sufficiently advanced technology is indistinguishable from magic” (2017). Brooks explains that when a technology is a black box to us (that is, we don’t understand how it works on the inside), we tend not to understand its limits. We assume it can do anything because, why not? If it can do “a, b, and c,” why are “x, y, and z” impossible? He motivates this with a thought experiment in which he brings Isaac Newton back from the dead and hands him an iPhone without explaining anything about how it works. We simply show Newton what the iPhone can do, such as play a movie or some music, and display a book. Brooks points out that Newton, who was a genius and to whom we are indebted for many of the scientific discoveries that made the iPhone possible, might well assume (a) that the iPhone could simply run forever without requiring a recharge and (b) that it could even turn things to gold. After all, why not? It can already do so many other things that appear to be magic. Brooks also points out that many of the individuals who make frightening claims about AI becoming transcendent any day now do not even work in the field of AI. This implies they may be falling victim to the “magic” law: they do not see how hard it is to create the black box, so they assume anything is possible. But specialized AI is extremely hard. So how could it be on the verge of taking over the world?
Conclusion
At the risk of appearing to undermine my own arguments, I do think AI will advance significantly in the next fifty years, and I think our world is going to continue changing significantly. But I do not think the rate of change in AI will be exponential, and I don’t think new sentient beings are just around the corner. For the reasons already discussed, I think expectations around AI, and specifically around AGI, are overblown and driven largely by human factors rather than facts. Instead, it seems more likely that AI will change us socially and politically first. The idea that governments could misuse massive amounts of data to undermine democracy and freedom in our world is more frightening because, if history is any indicator, its likelihood is near certain. This is where journalists, AI experts, business leaders, and policy makers should be focusing their attention. The hope is that, through these and similar discussions, conversations around AI can be grounded in reality.
Feedback
Whether you loved the article, couldn’t care less, or hated it, I am open to constructive feedback. This is a journey for me, so if you know something I don’t (which is a near certainty) or you spot an error in what I’ve written, feel free to drop me a line at josh@ockm.net.
References
- BBC (2017). Google AI defeats human Go champion. BBC News. Retrieved from https://www.bbc.com/news/technology-40042581.
- Brooks, Rodney (September 2018). Rodney Brooks on Artificial Intelligence. EconTalk.org. The Library of Economics and Liberty. Retrieved from http://www.econtalk.org/rodney-brooks-on-artificial-intelligence/.
- Brooks, Rodney (October 2017). The Seven Deadly Sins of AI Predictions. Technology Review. Retrieved from https://www.technologyreview.com/s/609048/the-seven-deadly-sins-of-ai-predictions/.
- Brown, Jennings (2017). Why Everyone Is Hating on IBM Watson - Including the People Who Helped Make It. Gizmodo. Retrieved from https://gizmodo.com/why-everyone-is-hating-on-watson-including-the-people-w-1797510888.
- CB Insights (February 2019). How AI Will Go Out of Control According to 52 Experts. Retrieved from https://www.cbinsights.com/research/ai-threatens-humanity-expert-quotes/.
- History of Technology. Information Age. Retrieved from https://historyoftechnologyif.weebly.com/information-age.html.
- Hvistendahl, Mara (2019). Can AI Escape Our Control and Destroy Us? Popular Science. Retrieved from https://www.popsci.com/can-ai-destroy-humanity/.
- IBM Italia. IBM Watson on Health. Retrieved from https://youtu.be/RjCL1lRPWew.
- Jaklevic, Mary Chris (2017). MD Anderson Cancer Center’s IBM Watson project fails, and so did the journalism related to it. HealthNewsReview.org. Retrieved from https://www.healthnewsreview.org/2017/02/md-anderson-cancer-centers-ibm-watson-project-fails-journalism-related/.
- Kunze, Lars (2019). Can We Stop the Academic AI Brain Drain? Retrieved from https://link.springer.com/article/10.1007/s13218-019-00577-2.
- Landau, L. J. (1997). Penrose's Philosophical Error. Retrieved from https://web.archive.org/web/20160125125014/http://www.mth.kcl.ac.uk/~llandau/Homepage/Math/penrose.html
- Martin, Sean (2017). Humanity’s days are NUMBERED and AI will cause mass extinction, warns Stephen Hawking. Express. Retrieved from https://www.express.co.uk/news/science/875084/Stephen-Hawking-AI-destroy-humanity-end-of-the-world-artificial-intelligence.
- Paulson, Steve (May 4, 2017). Roger Penrose on Why Consciousness Does Not Compute. Nautilus. Retrieved from http://nautil.us/issue/47/consciousness/roger-penrose-on-why-consciousness-does-not-compute.
- Penrose, Sir Roger (December 2018). The Joe Rogan Experience #1216. Retrieved from https://www.youtube.com/watch?v=GEw0ePZUMHA.
- Reynolds, Emily (June 1, 2018). The agony of Sophia, the world's first robot citizen condemned to a lifeless career in marketing. Wired Magazine. Retrieved from https://www.wired.co.uk/article/sophia-robot-citizen-womens-rights-detriot-become-human-hanson-robotics.
- Robson, David (November 2016). We’ve got human intelligence all wrong. BBC Future. Retrieved from http://www.bbc.com/future/story/20161108-weve-got-human-intelligence-all-wrong
- Starr, Michelle (March 20, 2016). Crazy-eyed Robot Wants a Family -- and to Destroy All Humans. CNET. Retrieved from https://www.cnet.com/news/crazy-eyed-robot-wants-a-family-and-to-destroy-all-humans/.
- Sulleyman, Aatif (2017). AI is Highly Likely to Destroy Humans, Elon Musk Warns. The Independent. Retrieved from https://www.independent.co.uk/life-style/gadgets-and-tech/news/elon-musk-artificial-intelligence-openai-neuralink-ai-warning-a8074821.html.
- Tech Insider (December 28, 2017). We Talked To Sophia — The AI Robot That Once Said It Would 'Destroy Humans'. Retrieved from https://youtu.be/78-1MlkxyqI
- Topol, Eric (2019). Eric Topol on Deep Medicine (6:23-7:03). EconTalk.org. The Library of Economics and Liberty. Retrieved from http://www.econtalk.org/eric-topol-on-deep-medicine/.
- Webb, Amy (2018). Amy Webb on Artificial Intelligence, Humanity, and the Big Nine (1:30-2:25). EconTalk.org. The Library of Economics and Liberty. Retrieved from http://www.econtalk.org/amy-webb-on-artificial-intelligence-humanity-and-the-big-nine/.
- Zuckerman, Phil (October 20, 2015). How Many Atheists Are There? Hundreds of millions. Psychology Today. Retrieved from https://www.psychologytoday.com/us/blog/the-secular-life/201510/how-many-atheists-are-there.