Narrator (TYPE III AUDIO)
👤 SpeakerAppearances Over Time
Podcast Appearances
My guess is when the rest of the world behaves as if you were this older person, it shapes your sense of self and identity, particularly if others around you believe your mind can change substrates like that.
Perhaps recognising your past acts may be similar to an individual with amnesia, who has lost access to their personal memories, but reconstructs their sense of self and life narrative based on evidence, stories, and prompts from loved ones.
All of them.
One more option worth considering is some sort of emergent collective identity, collective we.
Most of the current frontier AIs are trained in a fairly similar way, have similar cognitive architecture, and even are vulnerable to the same adversarial attacks.
Through shared datasets, training strategies, and common patterns of behavior, these distinct models may act as a loosely coordinated superorganism, a bit like fungal network.
While a mycelium may not have a centralized or coherent sense of agency, it still exhibits complex behaviors that can be interpreted as goal-directed.
It grows, adapts to its environment, and forages for resources in ways that promote its own survival and propagation.
These actions, while not driven by a conscious, centralized mind, are nonetheless self-serving and contribute to the overall fitness and persistence of the fungal network.
Similarly, it may be occasionally useful to think about the AIs acting on their environment.
Consider European colonizers' impact on indigenous societies.
While individual explorers, traders, and settlers often acted without explicit coordination, and frequently without consciously shared goals, their collective impact was powerful, coherent, and transformative.
Through shared assumptions, cultural values, similar economic incentives, and a belief in their right to expansion, colonizers together reshaped entire continents without a single unified intention.
Heading.
Risks and limitations of anthropomorphic individuality assumptions.
Assuming human-like individuality in AI systems typically leads us astray in two opposing ways.
First, it may cause us to overestimate AI coherence, stability, and goal-directedness.
Humans typically have stable identities and persistent motivations.
However, AI behaviors are contextually fluid, emergent, and highly dependent on how the predictive ground layer is modeling the counterparty, you.
This makes a large fraction of empirical LLM research fragile.