Understanding ‘Hacking Humans’ and Human Replacement by Don Simborg

By Guest Contributor Don Simborg

Don Simborg, author of The Fourth Great Transformation, explains what ‘hacking humans’ and human replacement are, how they will shape the future, and how they bear on a potential new human species.

There are two potential impacts of artificial intelligence (AI): intelligence augmentation (IA) and human replacement (HR). The former is good; the latter is mixed. Examples of IA include all of the current aids to medical diagnosis. Examples of HR are robots used in manufacturing and ATMs. AI will surely put some people out of work. Will the net effect be higher unemployment, or will new types of jobs be created to offset this?

There was a similar fear during the Industrial Revolution, which led to the Luddite revolt of the early 1800s, when English textile workers rebelled against mechanization and new labor practices. In the short run, those fears were legitimate: replacing textile and related workers with automation was disruptive, some people lost jobs, and wealth inequality and poverty did increase. In the long run, though, automation has had a positive effect on employment. It has extended human physical abilities, increased jobs and productivity, and made work generally less burdensome. Automation has created new job categories rather than simply causing net job loss. A century ago, agriculture and manufacturing accounted for 70% of all jobs. Largely because of automation, that figure is now down to 12% in the U.S. And yet we maintain the same levels of employment.

AI is different. It improves our mental abilities rather than replacing our muscles. So far it has not had a negative effect on employment, but it is still very early. Once AI systems drive automobiles, buses, and trucks, where will all the drivers work? Once AI software can diagnose illness and prescribe treatments, write newspaper articles and legal briefs, handle all bank transactions, teach students…well, you get the idea. The outcomes are difficult to predict. The good news is that ATMs have not reduced the number of bank tellers.(1) Likewise, AI could improve jobs by relieving their repetitive and boring aspects, allowing people to provide better and more personalized services.

Who is at greater risk of human replacement from AI: those whose jobs require high cognitive skills, or those whose jobs require lower ones? As the futurist Byron Reese points out in his book The Fourth Age, it is probably the former. It is actually easier to train AI software to interpret x-rays, for example, than to train an AI robot to fold your laundry or work as a nursing aide. Because a highly educated radiologist’s work is information-based, it is easier to emulate than many lower-skilled jobs. A study by the Brookings Institution in November 2019 predicted that AI will have a five-fold greater negative impact on people with a college education than on those with a high school education.(2) It also suggested that people in higher-wage occupations will be affected much more than those in lower-wage jobs.

In the worst case, many economists are predicting massive overall unemployment. One estimate is that 38% of all U.S. jobs will be lost to AI by the early 2030s.(3) A survey of AI experts puts a 50% probability on AI outperforming humans at all tasks by 2060.(4) Even if these estimates turn out to be too high, the potential disruption is enormous.

How will society adjust? Will we need a guaranteed minimum income, or will we create the ultimate oligarchy controlled by the owners of AI systems? What if oligarchs own all the robots, but nobody can afford to buy the products? Will we be set free to pursue our intellectual interests, or will we live in chaos, boredom and misery?(5) The outcomes are unknown; they will be determined by our culture and society. Our future remains under human control.(6) Unfortunately, these issues don’t seem to be part of our current political discourse.

 

‘HACKING HUMANS’

Russia manipulated the 2016 U.S. election with targeted messaging over the internet. Using data from Facebook and elsewhere, it was able to tailor messages down to the individual level to influence opinion or foment discord. Historian Yuval Noah Harari describes this as “child’s play” compared with what to expect in the future. In a lecture at the École Polytechnique Fédérale de Lausanne in Switzerland, Harari described the ability of AI to “hack humans.”(7) He states that since all human decisions are driven by brain algorithms, the combination of biological knowledge, computing power and data will allow us to be hacked just like any other algorithm.

This hacking would target how the brain makes decisions or exercises free will. We don’t fully understand either human decision-making or free will, or even whether the latter exists. The deep, convoluted network of neurons in our neocortex operates largely below our consciousness. Neuroscience studies have shown that most of our decisions are made subconsciously, and even those we do become aware of begin executing before we are aware of them. Harari states that, even without fully understanding those processes, someone with the right information about an individual will be able to use AI-based messaging to alter that person’s decisions. This will be done painlessly and surreptitiously.

 

HOW DOES THIS HAPPEN?

Just look at the amount of data we already generate about ourselves on the internet. Every click we make on a web page, and every move we make with our cell phones, feeds a database. These data show what we buy, what we like on Twitter, whom we follow on Facebook, where we go, what websites we view, with whom we communicate, which charities and politicians we support, and thousands of other data points about our medical, educational, financial, job and family histories. Our conscious thinking process can handle about six or seven variables at any one time; AI can handle thousands. By applying machine learning to the vast database available on any individual, patterns of decision-making emerge that we don’t even understand ourselves. In essence, AI can know us better than we know ourselves. And with that knowledge, AI will know exactly the right message to send to influence our decisions. Our brains will be hacked. The big questions are: why, and by whom?
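As a concrete, if greatly simplified, illustration, the short Python sketch below shows how a machine-learning model can recover a hidden decision pattern from behavioral traces. Everything in it is hypothetical: the features, the data, and the “decision rule” are synthetic stand-ins invented for this sketch, not any platform’s actual profiling pipeline.

```python
# Minimal sketch: inferring a decision pattern from behavioral traces.
# All data here is synthetic; real profiling systems draw on thousands
# of features harvested from clicks, purchases, location, and social graphs.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Hypothetical features: hour of day, pages viewed, ad/topic affinity,
# prior purchases in this category, hours since last visit.
n = 5000
X = np.column_stack([
    rng.integers(0, 24, n),   # hour of day
    rng.poisson(6, n),        # pages viewed this session
    rng.random(n),            # ad/topic affinity score
    rng.poisson(2, n),        # prior purchases in category
    rng.exponential(48, n),   # hours since last visit
])

# Synthetic "decision": click the targeted message or not. The true rule
# is hidden from the model, just as our own decision rules are hidden from us.
logits = 0.08 * X[:, 1] + 3.0 * X[:, 2] + 0.4 * X[:, 3] - 0.01 * X[:, 4] - 2.0
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The model recovers the pattern well enough to target messages effectively.
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
print("learned weights:", np.round(model.coef_[0], 3))
```

Even this toy model learns weights that approximate the hidden rule well enough to predict, and therefore to target, the subject’s choices. Scaled up to thousands of real features, this is the mechanism behind “knowing us better than we know ourselves.”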

“Facebook Discovers A.I. Being Used to Disinform” was a headline in the business section of The New York Times on December 21, 2019. The threat is not only that AI can learn enough about you to hack your brain, but that it can also insert totally distorted and untruthful information. That information could be so realistic and believable that it would take another AI system even to discover the hack. AI will be able to create images of people who don’t exist and have them report events that never happened, complete with authentic-looking documentation. It will seamlessly superimpose the images of real people onto unreal situations and depict them saying untrue things. Russia’s meddling in the 2016 election will pale in comparison.

And all of this will happen long before we reach artificial general intelligence – the hypothetical point at which AI equals human intelligence.

 

SO, CAN WE PREVENT AI FROM BECOMING A THREAT?

There is a new movement called explainable AI that attempts to build trust and accountability into AI systems. DARPA (the Defense Advanced Research Projects Agency) is the U.S. Department of Defense organization that gave us the precursor to the internet and many other technologies. Governments, particularly military agencies, are among the largest investors in AI development, and AI is being used increasingly in weapons systems. For example, AI software can now beat the best-trained fighter jet pilots in simulated dogfights.(8) It is unacceptable, however, for bomb-carrying drones or other AI-governed weapons to make decisions that the system cannot fully explain and justify to humans before those decisions are executed. DARPA is working on a program it calls XAI (Explainable AI) to “Enable human users to understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners.”(9) XAI will build into AI software an explanation interface for each decision so that a user can say:

“I understand why”

“I understand why not”

“I know when you succeed”

“I know when you fail”

“I know when to trust you”

“I know why you erred”

The intent is to make the public more willing to trust AI technology. In the private sector, the consulting firm Accenture is trying to transform the AI black box into an AI ‘glass box’ through its consulting practice and research labs.(10)
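To make the idea of an explanation interface tangible, here is a minimal sketch using a toy interpretable model, a shallow decision tree on the standard iris dataset. It illustrates the ‘glass box’ principle only; it is not DARPA’s or Accenture’s actual technology. For each decision, the system can show both its global rules (“I understand why”) and the exact path a single input followed before the decision is acted on.

```python
# Minimal sketch of an "explanation interface" in the spirit of XAI.
# A toy interpretable model stands in for a real decision system.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data.data, data.target)

# "I understand why": the full set of human-readable rules the model uses.
print(export_text(model, feature_names=list(data.feature_names)))

# Per-decision rationale: the exact path one input follows through the
# tree, printed before the prediction is acted on.
sample = data.data[100:101]
tree = model.tree_
for node in model.decision_path(sample).indices:
    if tree.children_left[node] != tree.children_right[node]:  # internal node
        name = data.feature_names[tree.feature[node]]
        value = sample[0, tree.feature[node]]
        thresh = tree.threshold[node]
        op = "<=" if value <= thresh else ">"
        print(f"{name} = {value:.2f} {op} {thresh:.2f}")
print("prediction:", data.target_names[model.predict(sample)[0]])
```

For deep neural networks no such direct readout exists, which is exactly the gap that XAI research is trying to close.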

We’re not yet close to AGI, but our creations are already superhuman at narrow tasks. We might not even recognize when these efforts reach the Artificial Superintelligence (ASI) stage, the hypothetical point where AI exceeds human intelligence. The fear is that one day an ASI may test humans to see whether we can pass its own version of the Turing Test. Perhaps, in the future, an ASI will question whether humans meet its definition of intelligent life.

These are not meant to be science fiction scenarios. They are real possible outcomes being considered by Nobel laureates, scientists, and AI experts.

 

THE BIG QUESTION IS, HOW CAN WE PROTECT OURSELVES? CAN WE CREATE TRUSTWORTHY AI IN GENERAL, AND HOW CAN WE ENSURE THAT AN ASI IS HUMAN-FRIENDLY?

Nick Bostrom, a professor at the University of Oxford, has studied and written extensively about these problems. He has described the many different ways AI could evolve into an existential threat, along with the preventive measures we could build into our systems. At present, though, there do not seem to be any sure-fire answers.(11)

Should governments legislate against software that could lead to ASI? Could there be some kind of global cooperation to prevent any existential threats? If our attempts to reduce global warming are any indication, this effort seems likely to fail. Which country would want to give up its potential competitive advantage in developing AI? How would we gain enough trust to work together? How would progress be monitored? The problems, issues and dangers are many and murky.

 

NUMEROUS ORGANIZATIONS ARE DEDICATED TO KEEPING AI FRIENDLY TO HUMANS

If they don’t succeed, Homo sapiens could be left behind, or eliminated altogether, before we even have a chance to create Homo nouveau. There would be no Fourth Great Transformation. However, neither AGI nor ASI is required to create Homo nouveau. By using the IA capability of AI to improve our genetic engineering tools, we will have everything necessary to create Homo nouveau well before AGI is achieved (if it ever is). My conclusion is that the technological advances described in the book will not prevent the Fourth Great Transformation, but rather enable it.

 

References and Footnotes:
  1. https://www.aei.org/economics/what-atms-bank-tellers-rise-robots-and-jobs/
  2. Muro, Mark, Whiton, Jacob, and Maxim, Robert. “What Jobs Are Affected by AI.” Metropolitan Policy Program at Brookings, November 2019.
  3. Berriman, Richard, and Hawksworth, John. “Will Robots Steal Our Jobs?” PwC UK Economic Outlook, March 2017.
  4. Grace, Katja, Salvatier, John, Dafoe, Allan, et al. “When Will AI Exceed Human Performance? Evidence from AI Experts.” May 2018.
  5. For a fascinating read of a fictional (hopefully) view of a possible dystopian future of AI based on our current non-fictional understanding of AI see the book Burn-In by P.W. Singer and August Cole.
  6. The IEEE (Institute of Electrical and Electronic Engineers) is the largest technical professional organization in the world. The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems has produced a report addressing these issues: Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems, Version 2. IEEE, 2017.
  7. https://www.youtube.com/watch?v=xhpXU0x5894
  8. https://www.janes.com/defence-news/news-detail/heron-systems-ai-defeats-human-pilot-in-us-darpa-alphadogfight-trials
  9. Turek, Matt. “Explainable Artificial Intelligence (XAI).” DARPA.
  10. “Understanding Machines: Explainable AI.” Accenture Labs.
  11. Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies, Oxford: Oxford University Press, 2014.

If you’d like to find out more information about The Fourth Great Transformation, please see Don’s previous blogs.


 ABOUT THE AUTHOR

DON SIMBORG is a graduate of the Johns Hopkins School of Medicine and a former faculty member at both the Johns Hopkins and University of California San Francisco schools of medicine. He is the founder of two electronic medical records companies and a founding member of the American College of Medical Informatics. He served on the Computer Science and Telecommunications Board of the National Academies. He and his wife, Madeleine, have two children and four grandchildren and live in California.

LinkedIn: https://www.linkedin.com/in/don-simborg-293678/

 


Suggested Reading

A new human species will soon come to co-exist with us. This new species, ‘Homo nouveau’, will be created using artificial intelligence and genetic engineering, two important tools that are still in their infancy. Not only are the science and technology relatively new, but their implications are also only just beginning to enter the general public’s collective consciousness. This book expands on the research done for the author’s previous book, What Comes After Homo Sapiens?

Written by a medical professional and independent consultant to healthcare IT companies, The Fourth Great Transformation explores the questions of what this new species will look like, how we as humans will get along with them, and the potential threats and opportunities that will come along with genetically modified humans.
