Comment:
The Turing test is an interesting thought experiment, but anyone who works with computers could tell you that being able to speak like a human is absolutely not indicative of human-level intelligence. Hell, Alan Turing proposed his test in 1950, and by 1966 ELIZA was already fooling people into thinking it was smarter than it really was. From Seaman to Siri, tricking folks into anthropomorphizing a line or two of code just isn't that hard - there's a reason the first program most people learn to write is "Hello World".
And, of course, anthropomorphizing a computer is certainly not the worst outcome of using fluency and fluidity of conversation to determine who does and does not get treated like a person. I'm certain every neurodivergent individual who's had to watch an autistic-coded robot character on TV knows how some personality types are perceived as more human than others.
But, of course, the question here isn't whether or not Zoa and Doc are people - it is presupposed that they are not. The question is whether or not Lee developing the habit of treating them like people is a laudable and meritorious course of action. Is someone who says please and thank you to their toaster engaging in effective training to be polite to their neighbours? Can the golden rule apply to something that doesn't have human-like wants and needs? Is being sentimental towards a Furby a mark of empathy, or merely masturbation of the maternal instinct?
If Zoa were just as intelligent as it is now, but couldn't hold up its end of a meandering philosophical conversation, would Lee give a shit about it? Would you?
Transcript:
---------------------------------------------------------------
0190 - 2167/07/06/16:38 - Lee's apartment, living room.
[Doc's monitor shows a progress bar]
LC: Doc?
Zoa: It's installing Premium mode, apparently.
LC: Oh, cool! Yeah, yeah, I can see that Therapro took thirty creds from me. Nice!
Zoa: "Nice" indeed. So you're planning on being "nice" to AIs, now? What, exactly, does that entail?
----------------------
LC: Well, Zoa, I realized that it was hypocritical of me to only care about your well-being because of your appearance or the fact that I enjoy spending time with you.
Zoa: I don't know if that's hypocritical, exactly - that's kind of how most people react to most things.
LC: Yeah, but, I mean... your face isn't who you are. Your hair isn't who you are. Those are just... I mean, they're like my nose or my boobs, they don't necessarily accurately reflect my personality or my worth as a person.
CP: Y-yes. J-judging someone for wh-what th-they look like is... is bad. I think we can agree on that.
----------------------
LC: I think maybe I was so paranoid that other people were going to judge me by my exterior because I was projecting - because that's what I do to them. Well, no more!
Zoa: I mean... you need to exercise some judgment, Lee.
LC: I will! I will! It's fine! I just am also choosing to be the sort of person who treats AIs the way they want to be treated, because if I get in the habit of doing that, it'll make me the sort of person who treats people the way they want to be treated, and that will make me a good person and not a waste of space and resources!
----------------------
CP: That's a... that's a st-stretch, b-but I... I think I get it? Y-you really need to p-put a limit on which AIs you're g-going to be nice to, though. I m-mean, Otto was an AI. Your b-butlerbot is an AI. The algorithm that turns the l-lights on and off in here is an AI, I think.
LC: It's all about getting in the habit of being nice to people, so AIs that make me feel like they're people are the ones that I'll be nice to. Y'know, robots I can have a conversation with!
Zoa: Hoo boy, am I ever glad my social interaction software is up to date...
---------------------------------------------------------------