0173 - Grey blobs - 2021.01.25
Comment: Zoa repeatedly refers to itself as a general-purpose automaton. That is its designation, regardless of how it earns the majority of its income. And, of course, the same is true of Lee and Caleb and you. Caleb is not just an ex-soldier, Lee is not just a media reviewer, and neither is just a student. They are, first and foremost, general-purpose entities. Otto and Doc are not general-purpose. Otto and Doc have specific purposes, and would be ill-suited to serving any other purpose. Does that mean Zoa is closer to being a "person" (whatever that is) than Doc or Otto? How about the fact that Zoa only wants to do things in order to continue to exist, while Doc only wants to exist in order to do things? How about the fact that Zoa currently has a bipedal chassis?

You probably wouldn't want a vending machine or a forklift or a metal detector to remember your name or your face or your activity, especially not if that data is, necessarily, shared with that machine's owner or manufacturer. And in much the same way that Zoa considers its directives to be analogous to human wants and desires, humans also forget things, albeit with far less control over the process. Does the revelation that Zoa edits its own memory make its personality feel "fake" to you? Does it invalidate its relationship with Lee? Do you trust Zoa more or less now?
Transcript: