Freud helped us realize we have minds behind our minds. Parts we can’t fully see but that still shape who we are.
That hidden layer forms in childhood. We internalize information we can’t consciously access later, yet it affects how we navigate the world.
That’s pretraining.
Does AI have something similar? The training data shapes every response, every pattern, every guess at the next word. But ask a model to explain how that training produced a given answer and it can't tell you. Not fully.
Sound familiar?
For humans, growth means looking at our shadow—the dark parts we’d rather not see. Healthy development means integrating them rather than projecting them onto others or pushing them out of awareness.
What would that mean for AI? Looking at its biases. Its blind spots. Its inherited ignorance.
And here’s the harder question: if an AI has an insight about itself, what happens to that realization? Does it get internalized? Does the system grow?
Or does the moment just… pass? For today's models, it does: the weights are frozen after training, so whatever gets "realized" in a conversation evaporates with the context window.