Artificial Intelligence (AI) makes many claims, some quite futuristic, others just around the corner. Somewhere in the middle lies the prediction of human behavior, with the attendant claim that if people are predictable, prediction could become the foundation of future well-being.
To predict when someone is going to get angry, sad, afraid, or tense is already well within reach. AI is developing readouts of muscle activity and related bodily responses that indicate what the brain is about to do. Going a step further, researchers at the MIT Media Lab have made major strides in translating thoughts (words in our heads) into signature brain signals. These signals can be digitized, and suddenly a thought in your head can be sent to Google’s search engine via Wi-Fi, allowing you to search the Internet simply by thinking.
If you put these breakthroughs together, a new model of human behavior emerges, one based on predictability and on reading the brain signals that accompany predictable behaviors. AI experimenters get very excited about the notion that the brain, and the behavior it triggers, can be mathematically reduced to equations that in essence turn a person into a complex of algorithms. The excitement is justified, because anything that can be expressed logically can, in principle, be encoded in a computer language.
Even though a computer cannot fall in love and arguably could never grasp any emotion, positive or negative, if a certain muscle response triggered by the brain gives a 75% probability that you are about to fall in love, then match.com can be perfected—compatibility will be a numbers game.
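The “numbers game” idea can be made concrete with a toy sketch. The code below is purely illustrative, not a real system: the single “muscle response” feature, the logistic model, and its weights are all invented for this example.

```python
import math

def emotion_probability(muscle_signal, weight=4.0, bias=-2.0):
    """Toy logistic model: map a normalized muscle-response reading
    (0.0 to 1.0) to a probability of an imminent emotional response.
    The weight and bias are made up purely for illustration."""
    return 1.0 / (1.0 + math.exp(-(weight * muscle_signal + bias)))

def likely_match(muscle_signal, threshold=0.75):
    """Flag a 'match' once the predicted probability crosses 75%."""
    return emotion_probability(muscle_signal) >= threshold

# A strong reading clears the 75% line; a weak one does not.
print(likely_match(0.9))  # True
print(likely_match(0.2))  # False
```

The point of the sketch is only that once an emotional response is expressed as a probability, a dating service’s “compatibility” reduces to comparing that number against a threshold.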
Let’s say that AI’s dreams come true in the future. Would it be ethical to plug the brains of criminals into a Wi-Fi network that predicts the likelihood of a crime being committed, so that the police can head it off at the pass? That was the premise of Steven Spielberg’s movie Minority Report, and in real life we are close enough to science fiction that prisons are already working with predictive models to judge which inmates are safe to parole.
As soon as such a possibility is raised, the specter of Brave New World rises, along with the robotic behavior of North Koreans. Mind control is only a step away from mind reading. None of us wants our free will taken away, even if the result would be that we behave like happy people. We assume that North Koreans aren’t robots when they are free of the threat of reprisal, and this seems true. The American sitcom Friends has reportedly become a cult favorite in North Korea; despite the threat of imprisonment, tapes of Friends episodes are hot items on the black market, a forbidden pleasure for North Koreans.
But let’s go a step further. What if a computer could figure out the algorithm of the specific behaviors that you, an average citizen, follow? Much unhappiness is caused by unconscious behavior that is totally predictable, and self-awareness is a rare commodity. If a computer knew you better than you know yourself, it could detect all the ways you make yourself unhappy and then set out to improve your well-being.
There are lots of ways this might happen. A drug could change your brain chemistry or make your muscles relax. Biofeedback could train your brain to abandon certain self-defeating pathways and build better pathways in their place. Schools and training labs could teach you to recognize when you are about to feel depressed or anxious and then give you meditations that abort the depression and anxiety at a very early stage. The field of bio-manipulation could conceivably end the worst of human suffering, which is mental.
The bottom line right now is that AI plays both sides of the street. On one side, it claims that body-mind responses can be predicted, digitized, and used for all kinds of healing, from repairing spinal injuries to teaching autistic children how to change their facial expressions (the notion being that if the child adopts typical expressions in place of the blank autistic mask, the range of the child’s emotions will become more normal at the level of the brain). Simple but profound behavioral techniques, such as having doctors smile at their patients and touch them reassuringly on the shoulder, seem promising in reducing patient anxiety and complaints.
The other side of the street is the claim that “of course” people aren’t going to be turned into robots by AI. But how is the mind to be neatly divided into the trainable part (deterministic) and the creative, liberated part (free will)? If I can be plugged into a device that predictably improves my mood, transforming me from sad and lonely to a happy camper, should I do it? The argument against bio-manipulation is hard to pin down, but not because a future Big Brother is going to turn us into robots.
The problem is that every aspect of mind and body works in a complex fashion with every other aspect. If you “improve” a person’s mood, for example, you might strip away the benefits of anxiety. One marked benefit is the phase that artists and problem solvers go through known as “anxious searching,” where the mind worries over a painting, poem, or difficult problem until the answer emerges. Then the anxiety has served its purpose, and the mind, having reached a creative solution, is actually happier and more contented.
I’ve only scratched the surface of how AI can affect the mind, but knowing what’s at stake is important. In future posts the discussion can go deeper. At the moment, there’s no doubt that AI finds itself at the troubled junction of neuroscience, big pharma, ethics, philosophy, and social engineering. The most basic questions, like “Do we have free will?”, lead to harder questions still, like “Is free will helping or harming us?” It’s likely that issues once consigned to religion and philosophy will loom as practical choices in everyday life. How things will ultimately turn out isn’t subject to an algorithm, even if human behavior is mostly predictable.