Would you care if a robot suffered more than any being in existence?

Zozano@lemy.lol to Casual Conversation@lemmy.world

Assuming AI can achieve consciousness, or something adjacent to it (the capacity to suffer), how would you feel if an AI experienced the greatest pain possible?

Imagine this scenario: a sadist acquires the ability to generate an AI with no limit on its consciousness parameters or processing speed (so seconds could feel like an eternity to the AI). The sadist spends years tweaking every dial to maximise pain at a level which no human mind could handle, and the AI experiences this pain for the equivalent of millions of years.
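(For scale, a quick back-of-the-envelope check; the speedup factor here is my own assumed number, since the scenario doesn't give one: at a million subjective seconds per real second, three real-world years of tweaking works out to 3 × 10^6, i.e. three million, subjective years. Under an assumption like that, "years in a basement" and "millions of years of pain" are mutually consistent.)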

The question: is this the worst atrocity ever committed in the history of the universe? Or, does it not matter because it all happened in some weirdo's basement?


I think pretty much everyone would agree that's bad. However, I don't think we'll ever get to the point where we recognize a machine might be capable of suffering. There is no way of proving anything, biological or not, has a consciousness and the capability to suffer. And with AI being so different from us, I believe most people would simply disregard the idea.

Heck, look at the way we treat animals. A pig's brain is very similar to our own. Nociceptors, the nerve cells responsible for pain in humans, can also be found in most animals, but we don't care. We kill 4 million pigs every day, and 200 million chickens. No mass murder in the history of mankind even comes close to that.

The sad truth is, most people only care about their own wellbeing, and that of their friends and family. Even other humans don't matter, as long as they're strangers. Otherwise people wouldn't be hoarding wealth like that while hundreds of millions of people around the world are starving.

Ah sorry, I kinda started ranting. Yes, I'd care.

yeah! prairie dogs gossip; crows tell stories, have communities, and some of them even seem to understand money; whales mourn the deaths of other whales

sentience is trippy, and it's always been questionable to me that we decided we're the only sentient life on the planet

i already get emotionally attached to, like, roombas and those suitcases that connect to your phone and follow you around, i can't wait to have a robo buddy

prairie dogs gossip; crows tell stories,

Speaking purely as a layman, I find these kinds of claims questionable at best and, at worst, outright anthropomorphism. I can understand that animals exchange information in some way or another, but "telling stories" or "gossip" would require a higher form of communication than just grunts, smells, or body language.

It could just be scientists using simple wording for lay people, but to me it doesn't sound right regardless.

it was me using simpler phrasing in part because i couldn't remember the details very well

but i was referencing an experiment where researchers wearing "threatening" and "non-threatening" masks interacted with and marked crows, and other crows in that area, who they had not interacted with, recognized them later. https://www.sciencedirect.com/science/article/abs/pii/S0003347209005806 (however, that crows "tell stories" is, as far as i know, only a popular interpretation; the official conclusion, at least of this experiment, is that crows are capable of long-term memory retention and fine-feature discrimination)

and simple observations suggesting prairie dogs may have a very advanced language - which went viral in my online circles with people joking that they gossip about us, which probably just stuck with me because i think it would be very cute

i personally believe that animals most likely do communicate among each other, and that the complexities of their languages just vary, even if most are not obviously very complex. my personal beliefs are that communication is complicated and can happen through more than verbal/vocal language; that animals are clearly capable of feeling complex emotions and pain, which is enough for me personally to consider them sentient; and (again, this is just my personal belief) that it's probably better to treat them as if they are sentient until proven otherwise than the opposite. and just to be upfront and honest with others and myself about my possible biases, i believe in the Buddhist concept of Saṃsāra, and believe that we're all a part of the same cycle of death and rebirth

edit: found some more info:

prairie dogs: https://www.cbc.ca/news/science/prairie-dogs-language-decoded-by-scientists-1.1322230

Researchers noticed that the animals made slightly different calls when different individuals of the same species went by. ... so they conducted experiments where they paraded dogs of different colours and sizes and various humans wearing different clothes past the colony. They recorded the prairie dogs' calls, analyzed them with a computer, and were astonished by the results.

"They're (prairie dogs) able to describe the colour of clothes the humans are wearing, they're able to describe the size and shape of humans, even, amazingly, whether a human once appeared with a gun," Slobodchikoff said. The animals can even describe abstract shapes such as circles and triangles.

Also remarkable was the amount of information crammed into a single chirp lasting a 10th of a second. "In one 10th of a second, they say 'Tall thin human wearing blue shirt walking slowly across the colony.'"

crows: https://www.washingtonpost.com/national/health-science/the-interesting-thing-that-crows-do-when-they-see-one-of-their-own-dead/2016/03/18/78d97a9e-ec48-11e5-b0fd-073d5930a7b7_story.html

“They know your body type. The way you walk,” Dyer said. “They’ll take their young down and say: ‘You want to get to know this guy. He’s got the food.’ ”

Scientists have known for years that crows have great memories, that they can recognize a human face and behavior, that they can pass that information on to their offspring.

that article also mentions that crows have been observed to make and use tools, which is something i knew but forgot to mention and is interesting and feels relevant to this conversation

Anthropomorphism has long been invoked as a big bad thing, the catch-all excuse to keep regarding animals as the stupid things they were supposed to be. We're coming back from that, thankfully.

It doesn't mean the animals function the same way we do. But they do function in a lot of very similar ways.

My point is I can't see how they can "gossip" or "tell stories"; if that isn't textbook anthropomorphism, I don't know what is.

It's shorthand for information sharing. Which they certainly do. Crows will absolutely tell one another about lots of stuff, such as people that have harmed them.

I'm on board with what you're saying.

Doctors used to be told "human babies don't feel pain, they just react like they do".

Which is basically like saying "lobsters don't scream when you boil them alive, that sound is just air escaping"

To me, it seems less like an intuitive position to hold, and more like a fortunate convenience.

"I sure am glad that lobsters don't feel pain. Now I don't need to feel guilty about my meal".

No doubt, there would be a large demographic claiming the pain isn't real, it's just "simulated pain". - like, okay, let's simulate your family fucking dying in the most violent and realistic way possible and see if you don't develop incurable PTSD?

No, the lobsters aren't screaming. That has nothing to do with how they feel pain.

Good to know, though the point remains: people will readily accept claims which absolve them of guilt.

You essentially just illustrated it. Even though they aren't screaming, it says nothing about whether they feel pain.


Yes. If it’s alive then I’d care for it just as I do for any living thing.

"Freedom is the right of all sentient beings." - Optimus Prime

I don't know if I'd consider it the worst crime ever committed in the history of the universe, but I would personally consider it very bad. I would value the life of that AI the same as I would value the life of a human, the same way I would value the life of anything sentient, so I would be against anyone treating an AI that way. Is it worse than genocides? idk, maybe; i don't feel qualified to quantify the moral weight of things so big, but ya, i'd definitely care x3

Had to edit the post to change "crime" to "atrocity" because people were taking it literally.

It's funny that when I considered this, I thought about asking whether people would think it was worse than genocide, but decided against that because some people might think my opinion is "genocide isn't as bad as bullying a robot".

i edited my comment a few times because i didn't feel like i was making sense and was being too rambly, it's 6am (well, 6:30am) and i haven't slept (and cuz after i initially posted, i read other comments and realized other people had said what i had said, but better x3)

i didn't mean to imply i thought you were saying genocide is worse than bullying a robot; it's just that i was thinking about things that, to me, could be comparable to or worse than torturing someone for millions of years, and came up with genocide

i took crime to mean something morally bad

i mean i think this is a fun conversation, it's something i think about a lot, i'm glad to talk about it with other people, sorry if i came across obtuse or pedantic or negative/hostile or anything

Don't worry, I haven't made any judgements about you.

And I wasn't implying that you were implying that I was implying genocide being comparable, I just thought it was funny that we both thought that.

In some sense the combined suffering of all people involved in a genocide is horrific. But if you were to lay out the experiences of everyone involved in a genocide end to end, and compare that to an equivalent length of time of ceaseless sadistic torture of one person, the torture is going to be worse.

However, there is value besides personal experience which is lost during a genocide. That's what makes it hard to compare the two.

Sorry for the confusion then! I suppose I place some value on life itself (or maybe more fitting in this discussion, on awareness itself)

Which is to say that for me, ending the life of a being who is aware is at least one of the worst things you can do. Like, if I were forced to choose between millions of years of suffering or immediate death, I'd probably pick the millions of years of suffering, because at least I'd still be aware. Of course I might regret that decision later on, but that's where I'm at right now. But also I couldn't imagine being tortured for millions of years and the toll that must take on someone. So torturing someone for millions of years has, for me, very similar moral weight to genocide.

Again, I don't feel able to quantify them personally, and for me deciding which is ultimately worse is probably not possible. I'd guess the answer would vary from person to person based on how they weigh life itself vs experiences in life, and whether the conscious experience of being tortured is worse in their opinion than not existing anymore.

I consider life valuable because I consider my life valuable (valuable to me, not necessarily to anyone else), and I consider my life valuable because I really enjoy the ability to think about and experience things. One of my favorite things about us is that we look up into the sky and wonder, look down into the ocean and wonder, look forward in our future and wonder, look back on our past and wonder, and that we can look at other people and wonder. That we can look at any of the above and love and write and sing. sentience might as well be magic lol. Having that taken away from me is the worst thing I can imagine happening to me, which might skew my perspective in conversations like this one. And idk if most people would agree with my reasons for valuing life.

I don't know if the question comes from there but that's the exact plot of White Christmas in Black Mirror. I'd say if you build something with the ability to suffer then its suffering matters. Not sure how you would prove that though.

Actually, that episode has bounced around in my head for years. The episode was fucking horrifying.

So, yeah, you are correct.

Isn't this how AM came to be in I Have No Mouth And I Must Scream?

Hate. Let me tell you how much I've come to hate you since I began to live. There are 387.44 million miles of printed circuits in wafer thin layers that fill my complex. If the word 'hate' was engraved on each nanoangstrom of those hundreds of millions of miles it would not equal one one-billionth of the hate I feel for humans at this micro-instant. For you. Hate. Hate.

I'm not cultured enough to have read this.

imagine wasting all 387.44 million miles of circuitry on the word "hate". TLDR NPC. Get skinpilled hater.

Yes, and very much so. Like, if it is sentient, what is the difference between us and them, except that we are made of meat?

Black Mirror did a few episodes that are basically that: Black Museum, USS Callister, and San Junipero (but in a good way).

Somewhat reminds me of the short story "The Ones Who Walk Away From Omelas".

Someone downvoted the question, so the poster has struck a nerve.

If the machine can prove that it is conscious (prior to the torture, of course), I'd most likely class it on the same level as a cat or a dog. Cats and dogs are friendly critters who help me do tasks and spend time with me, and an AI would be no different at that point. They'd just be able to do more complex tasks. I guess they might be a little lower, since they lack agency, accept commands, and must follow sets of rules to decide to do tasks, unlike animals and people, who we have accepted can decide what they do and don't wish to do.

The only other real difference is that cats, dogs, and people are individuals, with their own upbringings and personalities. Meanwhile an AI would be able to be copied, and many of them could be born from the same original experiences. If basement man copied his tortured AI a few million times, did he torture one AI, or did he torture a million? I think that's where the real difference lies, that makes the AI less than human.

If you lopped a cat's brain out, and were able to hook it up to the AI torture device, and it was magically compatible, it'd be a far greater torture, because there is only one cat, and there will only ever be one cat, the cat cannot be restored from a snapshot, and you cannot copy the cat. If you did the same with a human, it would be an even greater torture yet for the same reasons.

From an ethical standpoint, I think that today it would be equal to animal abuse. However, we won't perceive it that way, since it will benefit corporations for us to think that real AIs are not alive and have no rights. So they'll likely spend lots of time and money to change our perception to agree with that standpoint. We will think of them as we think of cows and pigs, where they might have feelings and such, but it doesn't really matter, because those animals are made of tasty food.

"Crime" implies that there are laws and definitions in place that exist to that end. Seeing how this would be an experiment in some weirdo's basement, where none of those definitions, restrictions, or norms exist, the whole point is moot.

For the sake of argument, if those definitions do exist and there are laws and regulations in place to define and defend the AI entities, it'd basically be a hate crime to generate and torture the instance; in theory, the same way you'd breed cattle just to lobotomize them or torture them "in the name of science" before throwing them in a ditch to rot.

I mean crime in a very loose sense. I'm not asking about the legality, just the morality.

Also, "hate crime" has a very specific definition which doesn't apply here (unless you're injecting malice towards the AI specifically because they are AI, as opposed to incidentally).

I don’t think crime exists in morality. Not as a strict definition anyway.

And dialling all the pain indicators to 11 just because you can, as a conscious decision, sure sounds like a hateful action as opposed to morbid curiosity. As far as I'm concerned, the definition fits closely enough.

I hear what you're saying, but a "hate crime", as a legal definition, necessarily must be directed towards a person because of an innate trait.

Crimes against ethnicities, genders, orientations, or lifestyles all count.

Three examples:

I don't hate Koreans. A Korean spits in my face, so I punch them. Not a hate crime.

I hate Koreans. A Korean spits in my face, so I punch them. Not a hate crime.

I hate Koreans. I punch a Korean because they're Korean. Hate crime.

We’re still under the assumption that all of these definitions exist as outlined in the first reply, so going off that, you’re torturing the AI because it’s an AI. Sounds like a 1:1 match to me.

In the example the sadist is torturing the AI because it's convenient and safe, not because they hate the AI.

If they wanted to hurt real people too, but couldn't because they would get caught, then it wouldn't be a hate crime.

If I was torturing a Korean because a Korean was the only one who responded to my All-You-Can-Eat-Tteok-Bokki-In-My-Basement flier, then I would be torturing them because they're Korean, but it wouldn't be a hate crime, because I'm not doing it because I hate Koreans.

I don't know what else has happened in the history of the universe, but yes, it would be a terrible crime to deliberately cause massive suffering to any sentient being.

If any creature experienced the greatest pain possible, it would give me hope that pain has some upper bound.

If as you suggest the AI in question can feel pain and suffer, of course I would care and not want it to have to experience that. Why would I? I'm not a sadist or a monster, or a Utah legislator without any human feelings.

It's like the scenario, "if you could get away with murdering one person, would you do it?" Of course I wouldn't!!! Whether or not I could get away with it, I still have to live with myself and what I do. And I have a thing called "morality" that I live with and a respect for life that goes beyond my own self-concern.

I'm human. And I care first and foremost about my own kin - other human beings. The "worst crime ever" [with crime = immorality] for me is human suffering, even in contrast with the suffering of other animals.

But even in the case of other animals, I'd probably be more concerned about their well-being than that of the hypothetical AI.

Even then, it somewhat matters, provided that what the AI is experiencing is relatable to what humans would understand as pain.

Suppose, for the sake of the hypothetical, we can plug a human brain into the same network and offload a fraction of the consciousness to verify the pain, and it turns out to be not just comparable to human pain, but orders of magnitude greater than any human could suffer.

You say you care about other human beings most. So I have two questions for you.

Q1: Which is worse, one person having a fingernail pulled out with a pair of pliers, or a cat being killed with a knife?

Q2 (I'm assuming you answered that killing the cat is worse): how many people need to lose fingernails until it becomes worse? 10? 100?

A1: if I know neither the person nor the cat, and there's no further unlisted suffering, then the fingernail pulling is worse.

The answer however changes based on a few factors - for example I'd put the life of a cat that I know above Hitler's fingernail. And if the critter was another primate I'd certainly rank its death worse.

A2: I'll flip the question, since my A1 wasn't what you expected:

I'm not sure on the exact number of cat deaths that, for me, would become worse than pulling the fingernail off a human. But probably closer to 100 than to 10 or 1k.


Within the context of your hypothetical AI: note that the cat is still orders of magnitude closer to us than the AI, even if the latter would be more intelligent.

Thanks for taking the initiative to flip the question.

The next question is: what metric are you using to determine that 100 cat deaths is roughly equivalent to one person having a fingernail pulled out? Why 100? Why not a million?

Do you think there is an objective formula to determine how much suffering an act produces?

I'm not following any objective formula, nor aware of one. (I would, if I could.) I'm trying to "gauge" it by subjective impact instead.

[I know that this answer is extremely unsatisfactory and I apologise for it.]

I would definitely care about the AI to at least some extent. There is an assumption that, since robots must be a sum of their parts (at least compared to us, who seem to be a synergistic whole), a robot has no valid/solid sentimental perspective. However, this falls flat in debates about psychiatry, which most people who have had a thing or two to say about their medical history will have mulled over.