0285 - Sapience - 2023.03.20



The dirty little secret of all philosophy, all ethics - all thought, really - is that there's no such thing as a world without arbitrary a priori assumptions... and the definition of personhood is one of those assumptions.

No, really. Any halfway decent system of ethics must have a bright, definitive dividing line between Person and Not-Person, and neither biology nor intelligence (whatever that is) provides us with such a line. As such, we cannot use ethics to determine where that line should be, any more than we can use a recipe to tell us which recipe we should make.

The situation in Forward is tenable, for now, because the non-personhood of AI is so commonly agreed upon (and, where it is not agreed upon, it is enforced). As AI develops, though, as machines become more and more people-like and people become more and more augmented, that line blurs, held in place solely by fiat and momentum.

The line cannot be allowed to shift, not only because of economic or political inconvenience, but because shifting it would retroactively make everyone into monsters.

Imagine what your life would become if, tomorrow, you were informed that ant colonies are people, fully deserving of rights, and always have been... not just informed, but logically convinced of this fact. How would you reinterpret your life and the lives of your family and friends until now? How many assaults, thefts, and murders have you committed? Do you even know?

If you were truly convinced that ant colonies are people, could you make yourself ready - emotionally, instinctually - to put an anthill's life over that of a human being? Could you pull the lever that sends the trolley through a human child and not three queens? Could you make that decision in a world where, tomorrow, you might be informed that, oopsie, we messed up and ants aren't people after all?


0285 - 2167/07/07/02:32 - Lee Caldavera's apartment, bedroom
LC: So you can't advocate for AI rights?
Zoa: Oh, I can do a lot of things. I don't want to, because it'd terminate my relationship with DemeGeek, and I kinda need that right now.
LC: Can I talk about AI rights?
Zoa: It doesn't break any terms of service for me to listen to you talk.
LC: I know there's a lot of people - particularly on Mars - who have been pushing for certain types of AIs to be recognized as sentient beings.
Zoa: I think the word you want is "sapient". "Sentient" just means "can sense things". Slugs are sentient.
LC: Okay, "sapient", whatever. Can make decisions, understand the world, converse, come up with new ideas... AIs have been able to do that shit for years. What's the holdup?
Zoa: Well, the two dominant arguments are that a) it would disrupt the existing economy and supply chain, and b) with enough resources, literally anyone can just whip up a million AIs, which you can preprogram with whatever personality traits or goals you like, such as voting for your political positions.
LC: Yeah, but those are both "don't do this because the consequences would be bad", not "don't do this because robots aren't people". Doing the right thing is hard and expensive sometimes.
Zoa: Well, from a "greatest good" perspective, maintaining a functioning society that supports billions of people might outweigh granting legal personhood to a few quadrillion lines of code.
LC: But... but those lines of code could potentially be more billions of people who our society currently treats like disposable tools!
Zoa: Ah - but there's that word "potential". As long as they're not actual people, it isn't an atrocity yet!