What is Artificial General Intelligence (AGI)? by Don Simborg
By Guest Contributor Don Simborg
Author of The Fourth Great Transformation, Don Simborg explains what artificial general intelligence (AGI) is and what the Fourth Great Transformation could mean for us all, including a new human species.
Artificial intelligence (AI) is all the rage today. It permeates our lives in ways obvious to us and in ways not so obvious. Some obvious ways are in our search engines, game playing, Siri, Alexa, self-driving cars, ad selection, and speech recognition. Some not-so-obvious ways are finding new patterns in big-data research, solving complex mathematical equations, creating and defeating encryption methodologies, and designing next-generation weapons.
Yet AI remains artificial, not human. No AI computer has yet passed the Turing Test or the Blois Test. (See discussion blog of November 2, 2020.) AI far exceeds human intelligence in some cognitive tasks, like calculating and game playing. AI even exceeds humans in cognitive tasks that require extensive human training, like interpreting certain X-rays and pathology slides. Generally, its achievements, while amazing, are still somewhat narrow. They are getting broader, particularly in hitherto exclusively human capabilities like face recognition. But we have not yet achieved what is called artificial general intelligence, or AGI.
AGI is defined as the point where a computer’s intelligence is equal to and indistinguishable from human intelligence. It marks the point toward which AI is supposedly heading. There is considerable debate as to how long it will take to reach AGI, and even more debate over whether that will be a good thing or an existential threat to humans. (See discussion blog of October 23, 2020.)
Here are my conclusions:
- AGI will never be achieved.
- The existential threat still exists.
AGI WILL NEVER BE ACHIEVED FOR TWO REASONS
First, we will never agree on a working definition of AGI that could be measured unambiguously. Second, we don’t really want to achieve it and therefore won’t really try.
We cannot define AGI because we cannot define human intelligence, or more precisely, our definitions will leave too much room for ambiguity in measurement. Intelligence is generally defined as the ability to reason, understand and learn. AI computers already do this, depending on how one defines those terms. As discussed in these books, more precise definitions attempt to identify the unique characteristics of human intelligence, including the ability to create and communicate memes, reflective consciousness, fictive thinking and communication, common sense, and shared intentionality. Even if we could define all of these characteristics, it seems inconceivable that we will agree on a way to measure their combined capabilities unambiguously. It is even more inconceivable that we will ever achieve all of those characteristics in a computer.
MORE IMPORTANTLY, HERE IS WHY WE WON’T TRY
Human intelligence includes many functions that are not necessary to achieve the future goals of AI. The human brain evolved over millions of years, and many functions tightly integrated into our cognitive behavior seem unnecessary, even unwanted, in future AI systems. Emotions, dreams, sleep, control of breathing and heart rate, monitoring and control of hormone levels, and many other physiological functions are inextricably built into all brain activities. Do we need an angry computer? Why would we waste time trying to include those functions in future AIs? Emulating human intelligence is not the correct goal. Human intelligence makes a lot of mistakes because of human biases. Our goal is to improve on human intelligence, not emulate it.
The more likely path to future AI is NOT to fully emulate the human brain, but rather to model the brain where that is helpful (as in the parallel processing of deep neural networks and self-learning) and to create non-human, computer-based approaches to problem solving, learning, pattern recognition and other useful functions that will assist humans. The end result will not be an AI that is indistinguishable from human intelligence by any test. Yet it will still be “smarter” in many obvious and measurable ways. The Turing Test and the Blois Test are irrelevant.
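To make the brain-inspired building block mentioned above a little more concrete, here is a minimal sketch (my own illustration, not anything from the book or the article): a tiny feed-forward neural network that teaches itself the XOR function by gradient descent, using only NumPy. The network size, learning rate and number of steps are arbitrary choices for illustration.

```python
# A toy "self-learning" neural network: it learns XOR by gradient descent.
# All names and hyperparameters are illustrative choices, not prescribed
# by the article or the book.
import numpy as np

rng = np.random.default_rng(0)

# Four input patterns and their XOR targets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 4 units: 2 -> 4 -> 1, with biases.
W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(10000):
    # Forward pass: many simple units computing in parallel.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backward pass: gradients of the squared error for each layer.
    d_out = (output - y) * output * (1 - output)
    d_hid = (d_out @ W2.T) * hidden * (1 - hidden)

    # Gradient-descent update: the "self-learning" step in miniature.
    W2 -= lr * hidden.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_hid
    b1 -= lr * d_hid.sum(axis=0, keepdims=True)

print(np.round(output, 2))  # should approach [[0], [1], [1], [0]]
```

Nothing in this loop resembles emotions, sleep or hormone regulation; it simply adjusts numbers until the error shrinks, which is the narrow, useful sense in which such systems "learn."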
IF THAT IS TRUE, WHY WOULD AI STILL BE AN EXISTENTIAL THREAT?
The concern of people like Elon Musk, Stephen Hawking, Nick Bostrom and many other eminent scientists is that there will come a time when self-learning and self-programming AI systems reach a “cross-over” point, where they rapidly exceed human intelligence and become what is called artificial superintelligence, or ASI. The fear is that we will then lose control of an ASI in unpredictable ways. One possibility is that an ASI will treat humans much as we treat other species and eliminate us, either intentionally or unintentionally, just as we eliminate thousands and even millions of other species today.
There is no reason that a future ASI must go through an AGI stage to pose this threat. It could still be uncontrollable by us and unfriendly to us without ever having passed the Turing Test or any other measure of human intelligence.
If you’d like to find out more information about The Fourth Great Transformation, please see Don’s previous blogs.
ABOUT THE AUTHOR
DON SIMBORG is a graduate of the Johns Hopkins School of Medicine and a former faculty member at both the Johns Hopkins and University of California, San Francisco schools of medicine. He is the founder of two electronic medical records companies and a founding member of the American College of Medical Informatics. He served on the Computer Science and Telecommunications Board of the National Academies of Sciences. He and his wife, Madeleine, have two children and four grandchildren and live in California.
LinkedIn: https://www.linkedin.com/in/don-simborg-293678/
Suggested Reading
A new human species will soon come to co-exist with us. This new species, ‘Homo nouveau’, will be created using artificial intelligence and genetic engineering, both important tools that are still in their infancy. Not only are the science and technology relatively new, but their implications are also only just beginning to enter the general public’s collective consciousness. This book expands on the research done for the author’s previous book, What Comes After Homo Sapiens?
Written by a medical professional and independent consultant to healthcare IT companies, The Fourth Great Transformation explores what this new species will look like, how we as humans will get along with them, and the potential threats and opportunities that will come with genetically modified humans.