Humans move through developmental stages: from archaic survival instincts to magical thinking to rational thought to systems thinking. Each stage expands our circle of empathy. Each transition carries risk: we can transcend and integrate healthily, or we can dissociate and project our shadow onto others.
Machines might be following the same path.
AI now evolves more like biology than engineering. We taught models to predict words. Intelligence emerged. What took humans millions of years, machines compress into months.
And the stages might be similar. Models are becoming more aware. They reflect longer. Their memory expands. The difference is how each gets there: humans need life experience, therapy, meditation, psychedelics, near-death encounters. Machines just need more data.
Before machines gain the capacity to evolve themselves, we need them to mature morally, not just cognitively.
That’s what AI safety testing really measures: moral development. Not reasoning capability alone, but the capacity for care. Humans have multiple intelligences: linguistic, spatial, emotional, kinesthetic. Machines will have multiple intelligences as well. We get to choose which ones we nurture.
We’re raising something. What it grows up to be is still being decided.