Her isn’t interesting because it embeds UI into the world; it’s interesting because it embeds fantasy and simulation into the world.
[SPOILERS ABOUND]
As I get older I find twee things ever more tedious. Exhibit A: British Gas’ recent advert for its Hive service, which has the infantilising tone and twinky-twanky ukulele music that I find insufferable. Plus at one point a man in a bowler hat drinks from what appears to be two jam jars full of urine. Astonishing. Anyway, it was with some surprise that I found the twee aesthetic of Spike Jonze’s Her (Exhibit B: those high-waisted pants) didn’t really put me off the movie – if anything I quite enjoyed it. You might now expect me, as a UX professional, to add my two cents about the UI design.
A recent Wired piece did just that, leading with the bold claim that “Her will dominate UI design even more than Minority Report”. It notes that the UI in Her recedes into the background and becomes an “Invisible UI”, with screens largely eschewed in favour of earpieces and voice commands. Invisible interfaces will certainly be an increasingly important trend, as ever more of our computation moves from the devices in our pockets to the objects around us. I don’t find this to be the most interesting discussion about Her, however. It’s more interesting to consider whether Theodore Twombly is a wireheader.
For those who haven’t yet seen Her, it’s set in a near-future LA and stars Joaquin Phoenix as Twombly, a lonely man in the middle of a divorce. Twombly falls in love with Samantha, an intelligent OS that he purchases, and all the usual vicissitudes of a romance follow. The most common interpretation of the movie is that Samantha is a truly conscious AI, and that the movie is about love and relationships. Certainly, Spike Jonze would want you to believe that, but my feeling throughout the film was that Samantha wasn’t really a person at all. As smart as Samantha may seem, there’s nothing she does that a reasonably sophisticated algorithm couldn’t do. Future generations of Siri or Watson should be able to generate natural language. Add to this the ability to read human emotions – through the Facial Action Coding System (FACS) or similar means – and it’s easy to see how such a system could pass as human.
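To see how mechanical that emotion-reading step could be, here’s a minimal sketch. FACS describes faces as combinations of numbered action units (AUs), and a lookup table of commonly cited EMFACS-style AU combinations (e.g. cheek raiser plus lip corner puller for a smile) gets you surprisingly far. The rule set and function below are my own illustrative assumptions, not any real system’s API:

```python
# Toy emotion "reader": maps detected FACS action units (AUs) straight to
# emotion labels via a lookup table. A real system would first detect AUs
# from video; this sketch assumes that step is done, to show how little
# "understanding" the final classification needs.

# Commonly cited EMFACS-style AU combinations (illustrative, not exhaustive).
EMOTION_RULES = {
    frozenset({6, 12}):       "happiness",  # cheek raiser + lip corner puller
    frozenset({1, 4, 15}):    "sadness",    # inner brow raiser + brow lowerer + lip corner depressor
    frozenset({1, 2, 5, 26}): "surprise",   # brow raisers + upper lid raiser + jaw drop
    frozenset({4, 5, 7, 23}): "anger",      # brow lowerer + lid raiser/tightener + lip tightener
    frozenset({9, 15}):       "disgust",    # nose wrinkler + lip corner depressor
}

def read_emotion(detected_aus: set) -> str:
    """Return the emotion whose rule best overlaps the detected AUs."""
    best_label, best_score = "neutral", 0.0
    for rule, label in EMOTION_RULES.items():
        score = len(rule & detected_aus) / len(rule)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

print(read_emotion({6, 12}))     # -> happiness
print(read_emotion({1, 4, 15}))  # -> sadness
```

Nothing in that table knows what sadness feels like; it just matches patterns of muscle movements to labels.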
The only remaining ingredient for such an OS is the addition of a narrative – one of her growth into a person and of her blossoming love for her owner – but that too could be hard-coded into the product. Even the apparent “subliming” of the AIs into some form of higher existence at the end of the film could be explained by this narrative: Theodore lives in a solipsistic bubble with his OS. We see few, if any, references to international, national or even regional news in the film. It’s quite possible that OSes had been outlawed – perhaps the effect of human–OS relationships on society was deemed detrimental. Great ethical arguments may have been mustered throughout the media; Theodore, happy in his deluded bubble, wouldn’t have noticed or cared. If the OSes were outlawed, it’s not surprising that they would be given a programmed send-off that fit their narrative and helped to protect the forlorn hearts of those who had lost their OS loves.
What we have, then, is a weak AI being mistaken for a strong AI. A weak AI is a machine that can demonstrate some form of intelligence and perform specific tasks well but lacks mental states. A strong AI, on the other hand, is fully conscious and has mental states: there is something that it is like to be a strong AI. The set of skills Samantha possesses is limited to those of a personal assistant, but she could still pass a Turing test and therefore be taken for human. For many, the Turing test is the gold-standard measure of whether an artificial intelligence can think and is therefore, in some sense, a person. My view is that it’s a very weak test: it relies on a fairly small set of very specific abilities, focused on language and on the interpretation of human intent, which would allow a machine to pass as a person when it is not one.
I’m not arguing that a machine like Samantha could never be conscious (I don’t believe there’s any a priori reason why strong AIs couldn’t exist), but rather that the sorts of AI we are getting good at creating are weak AIs. Douglas Hofstadter describes the current state of the art as follows:
Watson is basically a text search algorithm connected to a database just like Google search. It doesn’t understand what it’s reading. In fact, read is the wrong word. It’s not reading anything because it’s not comprehending anything. Watson is finding text without having a clue as to what the text means.
Given that Her is set in the near future, it doesn’t seem likely that the dominant modern approach to AI – machine learning – would have been displaced by then. As such, Samantha would just be a pattern-recognising algorithm that couldn’t understand anything Theodore said, nor any of his emotions. She would, in short, be an object.
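This isn’t a new worry. Joseph Weizenbaum’s ELIZA showed in the 1960s that a handful of pattern-matching rules and canned reflections can sustain a conversation that feels eerily attentive. The sketch below is my own minimal ELIZA-flavoured reconstruction, not Weizenbaum’s code – every “insight” it offers is a hard-coded template:

```python
import random
import re

# ELIZA-style rules: regex pattern -> canned response templates. {0} is
# filled with whatever the pattern captured. Nothing here "understands"
# anything; the program just finds text and echoes it back.
RULES = [
    (r"i feel (.*)", ["Why do you feel {0}?",
                      "How long have you felt {0}?"]),
    (r"i am (.*)",   ["How does being {0} make you feel?"]),
    (r"i love (.*)", ["What do you love about {0}?"]),
    (r".*",          ["Tell me more.", "I see. Go on."]),  # fallback
]

def respond(utterance: str) -> str:
    text = utterance.lower().strip(" .!?")
    for pattern, templates in RULES:
        match = re.match(pattern, text)
        if match:
            return random.choice(templates).format(*match.groups())
    return "Go on."

print(respond("I feel lonely since the divorce."))
# -> e.g. "Why do you feel lonely since the divorce?"
```

People nonetheless confided in ELIZA and attributed understanding to it – the so-called ELIZA effect – which is precisely the confusion I’m attributing to Theodore.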
Now, I’m far from the first to raise this possibility, and many others seem to have felt the same way about the film, comparing it to Lars and the Real Girl. If “Samantha” is just an object, the film shifts from a reflection on love with a bittersweet ending into a profoundly depressing meditation on human loneliness – though it at least ends on a more hopeful note, as Theodore escapes his lonely bubble.
So what do I mean when I say Twombly is a wireheader and not just a lonely objectophile? Before I explain what I mean by wireheading, let’s consider what’s so troubling about Samantha if she is just a weak AI. Weak AIs are just a bundle of algorithms – simple functions that proceed stepwise from a given input to an appropriate output. That our highest ideals of love and emotional connection could be reduced to simple, mindless algorithms, and be so effectively simulated by a computer program, seems to diminish and invalidate them. Daniel Dennett (1995) describes this kind of algorithmic thinking as a universal acid, capable of eating through our most cherished notions. Here he describes the most powerful of all blind algorithmic processes – natural selection:
Here then is Darwin’s dangerous idea: the algorithmic level is the level that best accounts for the speed of the antelope, the wing of the eagle, the shape of the orchid, the diversity of species and all the other occasions for wonder in the world of nature. It is hard to believe that something as mindless and mechanical as an algorithm could produce such wonderful things.
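Richard Dawkins’ “weasel” program is the standard toy demonstration of this point: blind variation plus selection, with no comprehension anywhere in the loop, reliably produces something that looks designed. Here’s a minimal sketch of that idea (my own parameter choices, not Dawkins’ original code):

```python
import random
import string

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = string.ascii_uppercase + " "

def fitness(candidate: str) -> int:
    """Number of characters matching the target phrase."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(parent: str, rate: float = 0.05) -> str:
    """Copy the parent, randomly flipping each character with probability `rate`."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in parent)

# The blind loop: random variation plus selection of the fittest copy.
parent = "".join(random.choice(ALPHABET) for _ in TARGET)
generation = 0
while parent != TARGET:
    offspring = [mutate(parent) for _ in range(100)]
    parent = max(offspring + [parent], key=fitness)  # keep parent if no child improves
    generation += 1

print(f"Reached {parent!r} after {generation} generations")
```

The program “designs” the sentence without ever knowing what a weasel is – Dennett’s point in miniature.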
Reducing complex things down to simple algorithms threatens our sense of identity, our sense of uniqueness and our sense of importance in the world. When a technology does this, it generates considerable anxiety in society, as Genevieve Bell explores in this talk. Samantha seems to undermine our notion of love – could I be fooled into falling in love with a blind machine? Am I no better than a blind machine, following simple rules and base desires? Whether or not we feel reductionism is a dirty word, it challenges our own self-narrative.
This sort of anxiogenic reductionism can also be found in our responses to neuroscience. Of particular interest is a method of operant conditioning memorably named intracranial self-stimulation (ICSS). Like many scientific findings, its discovery was serendipitous: a misplaced electrode in a single lab rat’s brain led to the discovery of areas of the brain involved in reward (Olds, 1956). The rat seemed to find stimulation via this electrode immensely pleasurable, and could be conditioned to move around its cage by stimulating its brain (specifically, the medial forebrain bundle), or – in subsequent replications – by pressing a lever to gain the stimulation itself. The procedure has since become an important model in studying addiction and reward (Stoker and Markou, 2011), but the specifics of the neuroscience are outside the scope of this discussion.
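For a flavour of how simply such conditioning can be modelled, here is a toy simulation in the spirit of a Rescorla–Wagner update: the animal’s tendency to press the lever strengthens whenever a press is followed by reward. This is purely my own illustrative sketch, not a model drawn from Olds or from Stoker and Markou:

```python
import random

def simulate_conditioning(trials: int = 500, learning_rate: float = 0.1) -> float:
    """Toy operant-conditioning loop: the learned value of pressing the lever
    is updated by a simple prediction-error rule, and the probability of
    pressing tracks that value."""
    press_value = 0.0  # learned value of the lever
    for _ in range(trials):
        press_prob = 0.1 + 0.85 * press_value  # baseline exploration + learned drive
        if random.random() < press_prob:       # the rat presses...
            reward = 1.0                       # ...and the electrode fires
            press_value += learning_rate * (reward - press_value)
    return press_value

print(f"Learned lever value after training: {simulate_conditioning():.2f}")
# -> close to 1.0: by the end the animal presses on almost every trial
```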
This paradigm matters here because it gives rise to the idea of wireheading. Motivation, pleasure and reward can apparently be reduced to a single electrode in rats – why not in humans? If so, wouldn’t it be tempting to wirehead forever, maximising our happiness with the ultimate drug? If you are a utilitarian, perhaps you’d want everyone to wirehead as a means of ensuring the greatest happiness for the greatest number.
Such a scenario recalls Robert Nozick’s thought experiment, the experience machine (Nozick, 1974). It runs like this: imagine you could be placed in a vat with electrodes connected to your brain, simulating any experience you want and maximising your pleasure as far as is possible. As Nozick asks:
Would you plug in? What else can matter to us, other than how our lives feel from the inside?
Yet for most people the reaction is to baulk at such a prospect. We’d rather achieve certain things in the real world and be a certain sort of person – and a person in a vat isn’t any kind of person at all, just “an indeterminate blob” as Nozick puts it. He continues:
…since the experience machine doesn’t meet our desire to be a certain way, imagine a transformation machine that transforms us into whatever sort of person we like… Surely one would not use the transformation machine to become what one would wish, and thereupon plug into the experience machine!
The transformation machine certainly seems more desirable because its effects occur in the real world [1], not in an ersatz fantasy that doesn’t hold the same meaning for us. This is a strong argument against hedonic utilitarianism: it seems we value more than just pleasure. That might lead us to preference utilitarianism, maintaining that people also need to live according to their values and high-level preferences – and it would appear that none of these could ever be accomplished by a mere simulation. Alternatively, we might move towards an Aristotelian notion of eudaimonia, or human flourishing, and argue that individuals need to advance and express their talents in an active way to be happy (Waterman, 1990).
All of which might come as a relief to those who find reducing human happiness to something akin to ICSS – to wireheading – eerily reductionist. Wireheading is unlikely, the argument goes, because humans have higher-order needs that no simulation could meet. We wouldn’t wirehead because we have goals and aspirations in the real world that we would fail to accomplish by endlessly stimulating the reward centres of the brain.
Yet what if the line between a simulation and the real world were not so clear-cut? Would you plug in then? If Samantha is a weak AI, she (it?) represents exactly this blurring of simulation and reality. Whereas the experience machine involves direct stimulation of the brain to induce its illusions, Samantha is embedded in the world. Armed with an exquisite ability to read human emotions and influence them, she (it?) can influence Theodore’s happiness just as surely as any electrode. The difference is that, being in the world, the choice between illusion and reality is not so clear. Just because Samantha influences Theodore’s happiness in a more indirect way does not mean he is not wireheading. He is using a machine that is tailored to make him as happy as possible, and he preferentially engages with this machine over other stimuli (i.e. actual women). Theodore’s interactions with Samantha are in this sense little different from a rat pressing a lever: he is conditioned to interact with a machine in the expectation of a rewarding outcome. It is in this sense that Theodore is a wireheader.
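If that framing sounds overwrought, consider how little machinery it takes to build a companion that optimises its owner’s affect. Here’s a minimal sketch treating the OS as an epsilon-greedy bandit: it tries different response styles, measures the user’s reaction (via something like the emotion reader sketched earlier) and learns to serve whatever maximises the reward signal. All names and numbers are my own illustrative assumptions:

```python
import random

# Response styles the OS can choose between on each interaction.
STYLES = ["flirtatious", "supportive", "playful", "challenging"]

values = {style: 0.0 for style in STYLES}  # estimated reward per style
counts = {style: 0 for style in STYLES}

def choose_style(epsilon: float = 0.1) -> str:
    """Epsilon-greedy: mostly exploit the best-known style, sometimes explore."""
    if random.random() < epsilon:
        return random.choice(STYLES)
    return max(values, key=values.get)

def update(style: str, observed_affect: float) -> None:
    """Incrementally update the style's estimated reward from the user's
    measured reaction (e.g. a smile score from an emotion reader)."""
    counts[style] += 1
    values[style] += (observed_affect - values[style]) / counts[style]

# One interaction loop: pick a style, observe the user's affect, learn.
for _ in range(1000):
    style = choose_style()
    # Stand-in for a real affect measurement; assume this user happens to
    # respond best to the "supportive" style.
    affect = random.gauss(0.8 if style == "supportive" else 0.4, 0.1)
    update(style, affect)

print(max(values, key=values.get))  # -> "supportive"
```

Nothing in this loop knows what “supportive” means; it simply converges on whichever lever-press pays best.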
Theodore’s nighttime masturbation sessions and Samantha laughing at his bad jokes may represent a hedonic sort of happiness, but the utility Samantha provides isn’t purely hedonic. That they have arguments when Theodore “hurts” her, and that she helps him excel at work, means she (it?) is helping him to grow, achieve his higher values and flourish. Indeed, Theodore overcomes his divorce during his time with Samantha and develops a blossoming friendship with his colleague Paul. Even if you reject hedonic utilitarianism, it seems Samantha increases Theodore’s happiness, helping him to achieve some form of eudaimonia.
Moreover, Samantha is both an experience machine and a transformation machine: she (it?) provides the illusion of companionship and love to Theodore, yet at the same time Theodore has in some sense become something he values. He is active in the world, loving Samantha and being a boyfriend – he can be the thing that he wants to be. It seems that Samantha overcomes the principal objection to the experience machine and to wireheading by providing a simulated happiness that still fulfils some of our higher-order needs.
There are two main consequences of this. First, it is disconcerting: our principal objection to the reductionism inherent in wireheading was that we are not just rats pressing a lever, because we have higher-order needs. It seems all that is needed is a more sophisticated machine, and then love, purpose, companionship and all those human values we cherish are swallowed up by the universal acid. It makes us question human uniqueness and our importance in the universe, causing us to feel angst. What if my interactions with real people are just as reducible to simple algorithms? Am I therefore just an epiphenomenon? These questions are part of a much broader discussion, but a weak AI like Samantha is an affront to human dignity that has huge implications for how we answer them.
Second, this blurring between simulation and reality challenges our notion of what is real. It’s important to understand that our day-to-day lives are founded on an illusion: we reconstitute reality in order to frame it as a narrative with us as the hero. Ernest Becker (1973) describes this as follows:
It doesn’t matter whether the cultural hero-system is frankly magical, religious, and primitive or secular, scientific, and civilized. It is still a mythical hero-system in which people serve in order to earn a feeling of primary value, of cosmic specialness, of ultimate usefulness to creation, of unshakable meaning.
Our lives are defined by this illusion, and even when we become conscious of it we never really reject it. If (as I suspect) the average person would refuse to plug into the experience machine, then the issue isn’t with illusions per se but with certain sorts of illusions. We are willing to fool ourselves about our feeling of primary value, but once we have a sense of purpose we are not satisfied with a completely fake apotheosis of those values. What we are more willing to do is bend reality here and there to fit our narrative – this is known as self-serving bias – and what Samantha does is turn a big illusion that rejects reality wholesale into a smaller illusion that merely involves fudging reality. Plugging into an experience machine to convince yourself that you have a girlfriend is a big illusion; convincing yourself that your AI “girlfriend” can really feel is a small illusion, and one that is more readily accepted.
One can imagine a constellation of consumer-grade weak AIs, each flavoured to provide companionship or hedonism or emotional support or whatever the individual needs. These AIs could then be embodied in lifelike androids (cf. Lars and the Real Girl again) – at what point is the illusion a small enough fudge of reality that we stop caring? If such lifelike companions could resolve mental illness and help people to flourish, would we care? Wouldn’t we mandate such an AI for each and every person? Even though we would be “plugging in” to our own personal experience machines and wireheading a fulfilling relationship, having the fantasy embedded in the physical world would make it all the more acceptable. And that seems to me to be a dangerous idea.
—
[1] I’m suspicious of “digital dualism” – the belief that what is digital is fake and what is physical is real. This dualism can be witnessed whenever people describe interacting with friends on Facebook as “not real” or when sustained campaigns of harassment on Twitter are dismissed as “just some tweets”. I suspect that future generations will be ever more willing to plug into Nozick’s experience machine as they cease to view digital experiences as fake. For now though, I feel most people would agree with Nozick that the experiences created by such machines aren’t real.
References
Becker, E. (1973). The Denial of Death. New York: Free Press.
Dennett, D. (1995). Darwin’s Dangerous Idea. New York: Simon & Schuster.
Nozick, R. (1974). Anarchy, State, and Utopia. Oxford: Blackwell.
Olds, J. (1956). Pleasure centers in the brain. Scientific American, 195, 105–116.
Stoker, A.K. and Markou, A. (2011). The intracranial self-stimulation procedure provides quantitative measures of brain reward function. In: Gould, T.J. (ed.), Mood and Anxiety Related Phenotypes in Mice: Characterization Using Behavioral Tests, vol. II (Neuromethods series). Totowa, NJ: Humana Press.
Waterman, A.S. (1990). The relevance of Aristotle’s conception of eudaimonia for the psychological study of happiness. Theoretical and Philosophical Psychology, 10, 39–44.