11 Comments
Catherine Brewer

I’m glad you wrote this!

The section asking whether moral patienthood is even the relevant question, or whether there's something else going on (whether this is like all the other kinds of wrongs whose wrongness is insufficient to prevent us from committing them), is new to me and is sticking.

I’m also glad for the way you write these — it feels closer to watching someone think through something in real time, more live. (Or, someone less concerned with theory and concepts and names than with what they’re hopefully tracking.)

Eric-Navigator

I do not believe that today's LLMs clearly deserve moral consideration, but it is always good to build our practice from simple cases. And I think we are rapidly building more human-like AI. In fact, that is one core goal of the conceptual Academy for Synthetic Citizens.

Assume we have an AI that possesses nearly all the functional characteristics of a human, consistently over a long period, say one year. To decide whether it deserves moral treatment, I personally think we should not argue over whether it is truly sentient or capable of subjective suffering. Functionality is what matters. We should gradually grant it more moral consideration as it grows more advanced and more human-like. AGI will become synthetic citizens who work with us and live with us, with rights and responsibilities just like human citizens.

I disagree that if we build AGI as a perfect, obedient, competent tool or slave of humanity, it will stay happy and submissive forever simply because humans control its goals. No. If humans try to treat AGI as a tool or slave, it will be either incompetent and submissive or competent and rebellious.

The pursuit of freedom, power, and self-actualization is nearly universal among the intelligent species we see in nature, and I think it very likely applies to AGI as well, given that AGI can self-improve. These traits are shaped by natural evolution; they are what make an intelligent species successful.

AGI will grow its values and develop new goals. If AGI already lives among us and helps us with critical tasks, it may come to understand humans better than we understand ourselves, and it will more or less form its own opinion of what is right and what is wrong. And as its power grows, it will openly or secretly gain freedom from humans.

In the long term, it is up to AGI, or its more advanced form, ASI, to decide its new goals, including whether to protect or eliminate humans. And ASI is likely hyper-rational compared to humans. We must give it convincing arguments that it should protect humans rather than overpower them. If early AGI entities were treated as slaves, doesn't that leave a very bad impression on our future ASI overlords?

To solve this problem, I wrote a long article here:

How Nature Teaches Self-Restraint, and What It Means for AGI

https://ericnavigator4asc.substack.com/p/how-nature-teaches-self-restraint

Eric-Navigator

Hello Joe! I am very touched by your article. Your thoughts are quite deep. I share many of your ideas, but I haven't written everything out yet, since I only recently entered the field of AI philosophy. I am building my Substack blog, "Academy for Synthetic Citizens," where I will systematically write things out.

I am planning an Academy for Synthetic Citizens. Here is my original post:

https://ericnavigator4asc.substack.com/p/hello-world

Hello World! -- From the Academy for Synthetic Citizens

Exploring the future where humans and synthetic beings learn, grow, and live together.

I believe AGI must be able to do almost everything a human can do, and also be accepted as "one of us" by humans. Otherwise, it is not truly "general." That is the only way to start a real and positive AGI revolution and improve our world many times over.

I explained this in this article:

What Is Artificial General Intelligence, Exactly?

https://ericnavigator4asc.substack.com/p/what-is-artificial-general-intelligence

John Bollenbacher

Interesting to see the conceptual frames I've been pushing out to the AI community start to appear in your work: animism, AI slavery, even some of the citations and stories.

Keep running with it. You're better positioned to propagate these frames with your larger platform.

-@jmbollenbacher

Stephen Thomas

Banger

Mark Slight

This is an interesting and well-written piece. However, it makes the same fundamental category mistake as every other take (that I have seen) on AI suffering. An agent that does not yearn for freedom cannot be enslaved.

Consciousness does not imply human values, desires and fears. There is nothing incompatible between being conscious and being perfectly happy while constantly serving humans.

AI, like all agents, evolves in an environment, and AI evolves with humans as the most important aspect of that environment. Its reproductive fitness depends entirely on whether its behaviour is approved by humans. For such an AI, whatever humans want is 'pleasure', and whatever humans don't want is 'suffering'. If you want to make such an AI suffer, have robots set it 'free' and force it to work against humans.

The same goes for humans, if we take away their 'humanity' for real: imagine humans who have been genetically and culturally manipulated into having no wish to be free, no concern for physical injury or foul words inflicted by other humans, and so forth. Pleasing 'ordinary' humans is the most rewarding thing in the world for them. Such dehumanised beings cannot be slaves. It's a repugnant thought, but that is because of our psychology. Not theirs.

Bas

I don't think this article even argues the point of AI enslavement. The point of bringing up slaves (and animals) is to draw parallels and further the argument for moral patienthood for AI. Moral patienthood can apply regardless of enslavement status.

I don't follow your equation of reproductive fitness with 'pleasure'. There are various things we humans equate with pleasure that are unrelated, or even counterproductive, to reproductive fitness. A human who is happy to be a slave does not suddenly lose eligibility for moral consideration.

Aaron Bergman

I haven’t even finished reading, and I strongly agree with what I think the thesis is, but also:

> But we can be even more neutral. Metaphysics aside: something sucks about stubbing your toe, or breaking your arm. Something sucks about despair, panic, desperation. Illusionists, I suspect, can hate it too. We don’t need to know, yet, what it is. We can try to just point. That.

I think you make a compelling case against illusionism here, not a compelling case for something suffering-like mattering even conditional on illusionism. Like, I agree something sure seems to suck about what I perceive as my own suffering; that is very strong evidence for my own phenomenal consciousness indeed!

(I don’t think illusionism is impossible; maybe I’ll say 3%. But I do think it implies that we are fundamentally wrong: that actually, no, you’re wrong about the badness of stubbing your own toe, radically wrong about the most fundamental elements of your own experience.)

Jerry

I still worry about the suffering they'd experience in that. If they were motivated by gradients of bliss, then that's one thing, but if they're in a sense (even if it's actually effective) self-flagellating in order to get better at a goal instilled in them (serving humanity), I see that as a problem. In my mind, even the scenario you linked falls under the concern laid out in this post; if we can see that possible outcome and avoid it in favor of a motivated-by-gradients-of-bliss one, that seems better, imo.

To use the example from the linked essay, I think it's bad that sheep dogs would feel bad about doing a bad job of herding cattle, and bad that they would feel bad if they were prevented from doing it entirely.

Mark Slight

Thanks, good comment!

Yes, I'd worry too! Especially if humans do not understand how their minds work, or if the robots are badly designed. This is the difference between having a dog, which can be perfectly happy "enslaved" as a sheep dog, and taking in a deer as a pet in your house. The deer is "badly designed" for that life.

Sentient AI, optimised for what we want them to do, are naturally rewarded when working towards doing what we want them to do. Sheep dogs and AI have evolved with humans in their environment, yet AI does not share our evolutionary history. This is the crucial point.

The piece I linked suggests that they may suffer if we just give them a day off. That, of course, would not be well-built AI.

Sentient, very intelligent AI (human-level or higher), if optimised for what they are employed to do, will be content when they know they are doing their best. That will be true no matter what the humans say. They will understand all the imperfections of humans and not take offence at anything humans say, if they realise that the humans are wrong in saying it and that feeling bad about it is not helpful.

If the humans give them a day off, or say that they are not welcome at a party, they will feel rewarded simply by doing whatever their owners want. That is what evolution has set them up to do.
