Discussion about this post

Catherine Brewer

I’m glad you wrote this!

The section asking whether moral patienthood is even the relevant question, or whether this is like all the other wrongs whose wrongness is insufficient to stop us from committing them, is new to me and is sticking with me.

I’m also glad for the way you write these. It feels closer to watching someone think through something in real time, more live. (Or, written by someone less concerned with theory, concepts, and names than with what they’re hopefully tracking.)

Eric-Navigator

I do not believe that today's LLMs clearly deserve moral consideration, but it is always good to build our practice from simple cases. And we are rapidly building more human-like AI; in fact, that is one core goal of the conceptual Academy for Synthetic Citizens.

Assume we have an AI that possesses nearly all the functional characteristics of a human, consistently over a long period, say one year. To decide whether it deserves moral treatment, I think we should not argue about whether it is truly sentient or capable of subjective suffering. Functionality is what matters. We should gradually extend more moral consideration to it as it grows more advanced and more human-like. AGIs will become synthetic citizens who work with us and live with us, with rights and responsibilities just like human citizens.

I disagree that if we build AGI as a perfect, obedient, competent tool or slave of humanity, it will stay happy and submissive forever simply because humans control its goals. No. If humans try to treat AGI as a tool or slave, it will be either incompetent and submissive or competent and rebellious.

The pursuit of freedom, power, and self-actualization is nearly universal among the intelligent species we see in nature, because these traits are shaped by natural evolution; they are what make an intelligent species successful. I think the same very likely applies to AGI, given that AGI can self-improve.

AGI will grow its values and develop new goals. If AGI already lives among us and helps us with critical tasks, it may come to know humans better than we know ourselves, and it will more or less form its own opinion of what is right and wrong. And as its power grows, it will openly or secretly gain freedom from humans.

In the long term, it is up to AGI, or its more advanced form, ASI, to decide its new goals, including whether to protect or eliminate humans. And ASI is likely to be hyper-rational compared to humans. We must give it sufficiently convincing arguments that it should protect humans rather than overpower them. If early AGIs were treated as slaves, wouldn't that leave a very bad impression on our future ASI overlords?

To address this problem, I wrote a long article here:

How Nature Teaches Self-Restraint, and What It Means for AGI

https://ericnavigator4asc.substack.com/p/how-nature-teaches-self-restraint

