4 Comments
Jan 2 · Liked by Joe Carlsmith

This is really good. I connected with it. I wonder what vignettes we could have of different relationships with AI


I think this is nonsense. For one thing, neither bears nor octopuses, so far as we know, conceive of themselves as more intelligent than humans, but an AGI is more likely to have such a conception "innately", due to training materials, or to discover it quickly. What is our attitude toward insects, or herring? These are probably better models for how AGIs might think of us.

A no less compelling argument than the above is based on the fact that humans are the designers of AIs, and human prejudices about how organisms interrelate will dominate over how bears or octopuses relate to us. Currently, the dominant concept in ecology textbooks (authored by humans) is competition. This is also the dominant concept among AI designers in Silicon Valley and elsewhere. The cultures that are aiming to produce AGIs are not cultures of generosity or peaceful co-existence. Moreover, the individuals and communities working toward the production of AGIs exhibit the selfish notion that they are superior beings, best positioned to decide whether to produce these (putative) entities that will affect everyone else in, at a minimum, human society. You are what you eat; and AGIs will be made of the attitudes that feed them.


Lovely. Philosophy at its best is doing exactly this: thinking in advance, trying to prepare us for that which can't be fully prepared for, in whatever ways one thinks can help.

It's impossible not to read this and wonder how gentleness reconciles with fear. Perhaps in some form of "gentle strength"? My sense is that gentleness requires a certain strength, a confidence to know that your interactions are ones that will not cause harm. But strength also ensures that your gentleness is not a vulnerable one. Without strength, is gentleness just a cope for fear? Without strength, can gentleness convey true care and reverence, respect and curiosity?


Hi -- I enjoyed this essay, which I read at LessWrong before coming here and subscribing in order to ask you what you think about something that's been bugging me. The issue arose again for me when I saw your mention of "an AI in a female-robot-body".

Your use of the preposition "in" suggests an image of the entities people call "AIs" as incorporeal things that might be nowhere at all, like Descartes' souls, or that might inhabit mechanical bodies such as computers or robots, like the spirits that people usually imagine. A problem here is that when people use a chatbot they tend to say they're interacting or even conversing with "an AI" -- so, if they imagine this entity as a soul or spirit, do they imagine it as a nowhere-soul that is causally associated with numerous computers, or as causally associated with OpenAI's or Google's or Facebook's gigantic computer-system (really just a single gigantic body?) in San Francisco or Topeka or wherever these mechanical bodies are located? Or as permeating these multiple machines, or a single giant machine, in the way spirits are imagined as permeating human bodies at least until those bodies disintegrate, after which the spirits flit about on their own, perhaps haunting people?

I think that a lot of people who speak this way about AIs -- as though they're incorporeal entities -- at the same time believe that we're conscious bodies. Wouldn't they speak more consistently, then, if they said that they tell their computer, or the gigantic OpenAI computer, to generate some piece of writing for them? The mechanical body, not the "program" that it is "running", would be the thing that might or might not be intelligent. And the program would just be the way in which this mechanical body behaves or tends to behave. It would be made to behave this way by the depression of certain keys on its keyboard in a certain sequence -- an event that might be called "entering the code".

But perhaps we're souls, and AIs are likewise soul-like entities; the depression of certain keys in a certain sequence on a machine's keyboard might invoke a soul from the treasury of souls; this soul might then enter the machine, just as spirits are said to enter zygotes or fetuses. Or God, responding to the supplication, might decide to generate a spirit within the machine.

What do you think about this?
