I think sentientism is a good replacement for humanism, especially as Jamie Woodhouse of sentientism.info puts it: "Evidence, reason and compassion for all sentient beings." The tagline doesn't capture the full thought behind it, but I think sentientism in general does a good job of capturing both the attachment to looking the world in the eye/having a good epistemology, and the attachment to bringing everyone who can experience things to the campfire (even if their campfire looks different from ours), so to speak.
Also, on the value of black, the line "I want something more than an apology to say / When I look the world in the eye" always resonates strongly with me: https://genius.com/Ramshackle-glory-from-here-till-utopia-song-for-the-desperate-lyrics
This has been a great series! One other thing that interests me here is the political valence of "deep atheism": a sort of implicit commitment to the radical contingency of history, and to something close to the "Great Man theory of history" (https://en.wikipedia.org/wiki/Great_man_theory), where everything hangs on whether certain people (AI developers and safetyists in particular) make good choices. Yudkowsky and the LessWrong crowd seem to lean libertarian, with a strong pro-capitalist stance even among the somewhat more liberal people. I've noticed that more left-leaning people with transhumanist sympathies (Carl Sagan, Gene Roddenberry, Iain Banks, etc.) often have more of a belief in historical necessity of some kind, perhaps reflecting the influence of Marx. If humans are a natural phenomenon, why couldn't there be emergent laws/attractors in history, recurring patterns we'd see if we could sample the historical development of many alien civilizations, or of people in alternate histories? In that case one could have a trust in nature/the Tao that wouldn't depend on blindly accepting current realities, like poverty and disease, as "natural" and therefore fixed and unchangeable; rather, the very process of working to change them could be part of the natural development of technological civilizations, not just a heroic effort by Great Men to fight nature.
The notion of convergence in developmental trajectories also raises the possibility of natural long-term convergence in values (against the rationalist belief in the radical 'orthogonality' of intelligence and values), relating to issues you discussed at https://joecarlsmith.substack.com/p/on-the-abolition-of-man and https://joecarlsmith.substack.com/p/on-attunement. This could suggest a form of moral realism compatible with naturalism, a la C.S. Peirce's notion that truth in general is what would be converged on by arbitrarily long-lived civilizations in the "limits of inquiry," as discussed at https://www.researchgate.net/publication/260555984_Charles_Peirce%27s_Limit_Concept_of_Truth