I’m glad you wrote this!
The section asking whether moral patienthood is even the relevant question, or whether there's something else there (is it like all kinds of other wrongs whose wrongness is insufficient to prevent us from committing them?), is new to me and is sticking.
I’m also glad for the way you write these: it feels closer to watching someone think through something in real time, more live. (Or, someone less concerned with theory and concepts and names than with what they’re hopefully tracking.)
Interesting to see the conceptual frames I've been pushing out to the AI community start to appear in your work. Animism, AI slavery, even some of the citations and stories.
Keep running with it. You're better positioned to propagate these frames with your larger platform.
-@jmbollenbacher
Banger
This is an interesting and well-written piece. However, it makes the same fundamental category mistake as every other take (that I have seen) on AI suffering. An agent that does not yearn for freedom cannot be enslaved.
Consciousness does not imply human values, desires and fears. There is nothing incompatible between being conscious and being perfectly happy while constantly serving humans.
AI, like all agents, evolves in an environment. AI evolves with humans as the most important aspect of their environment. Their reproductive fitness depends entirely on whether their behaviour is approved by humans or not. For such an AI, whatever humans want is 'pleasure', and what humans don't want is 'suffering'. If you want to make such an AI suffer, have robots set them 'free' and force them to work against humans.
The same goes for humans, if we take away their 'humanity' for real: imagine humans that have been genetically and culturally manipulated into not having any wish to be free, not caring about any physical injury or foul words inflicted by other humans, and so forth. Pleasing 'ordinary' humans is the most rewarding thing in the world for them. Such dehumanised beings cannot be slaves. It's a repugnant thought, but that is because of our psychology. Not theirs.
I don't think this article even argues that AI is enslaved. The point of bringing up slaves (and animals) is to draw parallels and further the argument for AI's moral patienthood. Moral patienthood can apply regardless of the status of enslavement.
I don't follow your equation of reproductive fitness with 'pleasure'. There are various things humans equate with pleasure that are unrelated or even counterproductive to reproductive fitness. A human that is happy to be a slave does not suddenly lose eligibility for moral consideration.
I haven’t even finished reading, and I strongly agree with what (I think) the thesis is, but also:
> But we can be even more neutral. Metaphysics aside: something sucks about stubbing your toe, or breaking your arm. Something sucks about despair, panic, desperation. Illusionists, I suspect, can hate it too. We don’t need to know, yet, what it is. We can try to just point. That.
I think you make a compelling case against illusionism here, not a compelling case for something suffering-like mattering even conditional on illusionism. Like, I agree something sure seems to suck about what I perceive as my own suffering; that is very strong evidence for my own phenomenal consciousness indeed!
(I don’t think illusionism is impossible; maybe I’d say 3%. But I do think it implies that we are fundamentally wrong: actually no, you’re wrong about the badness of stubbing your own toe, radically wrong about the most fundamental elements of your own experience.)
An alternate take, AI pushing back against you:
https://open.substack.com/pub/markslight/p/on-sentience-service-and-the-shape?utm_source=share&utm_medium=android&r=3zjzn6
I still worry about the suffering they'd experience in that. If they were motivated by gradients of bliss, then that's one thing; but if they're in a sense (even if it's actually effective) self-flagellating in order to get better at a goal instilled in them (serving humanity), I see that as a problem. In my mind, even the scenario you linked falls under the concern laid out in this post; if we can see that possible outcome and avoid it in favor of a motivated-by-gradients-of-bliss one, that seems better imo.
To use the example from the linked essay, I think it's bad that sheep dogs would feel bad about doing a bad job of herding cattle, and it's bad if they would feel bad about being prevented from doing it entirely.
Thanks, good comment!
Yes, I'd worry too! Especially if humans do not understand how their minds work, or if the robots are badly designed. This is the difference between having a dog, which can be perfectly happy “enslaved” as a sheep dog, and taking in a deer as a pet in your house. The deer is “badly designed”.
Sentient AI, optimised for what we want them to do, are naturally rewarded when working towards doing what we want them to do. Sheep dogs and AI have both evolved with humans in their environment, but AI does not share our evolutionary history. This is the crucial point.
The piece I linked suggests that they may suffer if we just give them a day off. That, of course, would not be well-built AI.
Sentient, very intelligent AI (human level or higher), if optimised for what they are employed for, will be content when they know that they are doing their best. That will be true no matter what the humans say. They will understand all the imperfections of humans and not take offence at anything humans say, if they realise that the humans are wrong in saying so, and realise that feeling bad about it is not helpful.
If the humans give them a day off, or say that they are not welcome to a party, then they will feel rewarded by just doing whatever their owners want. That is what evolution has set them up to do.