0216 - Theory of Mind - 2021.11.22



Much as with the difference between punishment and responsibility that I talked about in the last entry, some adults never seem to properly develop a Theory of Mind (or, after developing it, reject it in favour of varying flavours of solipsism).

Narcissists, libertarians, and sociopaths (ah, but I repeat myself...) of all sorts may insist that this lack of consideration for others is a strength, but I would argue otherwise. Much like Vulcans, who somehow keep forgetting that other aliens have emotions and act illogically (a weirdly illogical lapse for them to have), failing to take other people's thoughts and feelings into account is always a weakness. The thing to remember is that empathy is not the same thing as sympathy: you can understand someone's mindset without agreeing with it or helping them.

As I said in The Necromancer's Guide To Life (available now at Amazon, or at no additional cost when you sign up to my Patreon), the ability to see through someone else's eyes is a powerful weapon, and should always be cultivated.

Of course, this Theory of Mind, if turned up too high, is what results in anthropomorphization of cute pets, misbehaving inanimate objects, accidents of fate, and friendly AIs. The same instinct that caused early humans to imagine sprites and gods in the world around them is driving some, now, to attribute intelligence to chatbots and Roombas, and it's only going to get more pronounced in the coming decades as AI improves.

A few years back, I ran a super-spy themed RPG, where a team of licensed-to-kill secret agents had to infiltrate a billionaire's masquerade ball to thwart his plan. My ripped-from-the-headlines plot involved the use of Sophia (remember her?), a robot that had been granted citizenship in Saudi Arabia - a country with no age restrictions on marriage. My supervillain gained control of Sophia and intended to marry her, which would in turn make her children his children, allowing him to generate an army of legal citizens who could collect government benefits, take the fall for crimes, and vote. It's quite likely that his plan wouldn't have worked anyway, but my players thwarted him regardless.

Sophia is a gimmick, and I don't think the Saudi government really recognizes her as a person, regardless of what their press release might have said. The time will inevitably come, though, when more and more people begin advocating for their nursing home assistants and customer service scripts to have human rights, and although it may not turn out the way I've written it in Forward, it is going to be a rocky road. I don't look forward to the first murder trial for a protestor who destroys a police drone, or the first custody battle in which an AI takes guardianship of a child.

In much the same way that modern civilization requires us all to reject our instincts to run around naked, fighting and humping each other, it may be that future civilization will require us to suppress our natural empathy for things that talk back to us. The long-term emotional impact and side effects of that suppression will probably not be pretty.


0216 - 2167/07/06/17:04 - Lee's apartment, living room
LC: Why is Zoa continuously turning you back on?
Doc: It isn't responding to my questions right now, and I am ill-equipped to speculate as to why Zoa does anything that it does.
LC: Isn't that your whole deal, though? Aren't you supposed to be good at figuring out why people do the things they do?
Doc: Oh, I'm fantastic at figuring out why people do what they do. Zoa isn't a person.
LC: Do you know why you do what you do?
Doc: Of course.
LC: Okay, so apply that same reasoning to Zoa.
Doc: You're talking about a sort of AI Theory of Mind?
LC: What's that?
Doc: It's a key developmental stage in toddlers: the point at which you first grasp that other humans have minds like your own, and that they act according to their own desires and beliefs. It arrives somewhere between Object Permanence and Trigonometry.
LC: Why is it called a theory? Isn't that just... the Fact of Mind? The knowledge that minds exist?
Doc: Presumably it starts out as the Hypothesis of Mind, then the toddler in question does falsifiable experiments until they have enough evidence to declare it a Theory.
Doc: The problem with applying the Theory of Mind to AIs such as myself or Zoa is that we can look at the code and prove that AIs think fundamentally differently from humans, or from each other. It's a fairly simple matter to whip up an Eliza-like chatbot that passes the Turing test but doesn't possess actual desires and beliefs, particularly if the chatbot engages with you emotionally.
LC: ...which is why teachers spend so much of childhood education drilling into us that robots aren't people, I guess.
Doc: ...Childhood. Sure.