Question #1: Have you tried out any one of the many AIs out there yet? From the general-purpose models built on large language models (LLMs) or multimodal architectures - ChatGPT (OpenAI), Grok (xAI), Claude (Anthropic), DeepSeek (DeepSeek AI) - to the specialized AI systems built for specific domains or industries - AlphaGo/AlphaZero (DeepMind), DALL-E (OpenAI), Midjourney, Wan 2.1 (Alibaba) - there’s a lot to choose from!
Question #2: Do you use one every day?
If you answered “yes” to either of these questions, then please think about the evolution of these AIs that you’ve witnessed over the past six months. I mean, think about how you use AIs today versus even a month ago. From my point of view, the rate of evolution is absolutely astounding. It is exponential - like, truly exponential.
Like that of all living things, human population growth follows a logistic pattern because of a carrying capacity - the maximum number our environment can sustain. This limit is shaped by resource availability, competition and conflict among humans (and self-appointed overlords trying to kill us), and aging. These factors slow growth as we approach the cap, reflecting constraints inherent to biology.
Not so with AIs. Unlike humans, AIs aren’t bound by a biological carrying capacity. Their growth is constrained by data and computing resources rather than by biology, and those constraints keep getting pushed outward. They’ve been trained on vast amounts of human-generated data, but once they’ve learned everything they can learn from us - our entire technological level - they could keep expanding, perhaps through self-generated “insights” and/or self-improving systems.
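If you’d like to see the difference between those two growth patterns rather than just take my word for it, here is a minimal sketch in Python. The growth rate and carrying capacity are made-up numbers chosen purely for illustration.

```python
# Toy comparison: logistic growth (capped by a carrying capacity K)
# vs. exponential growth (no cap). The numbers are illustrative only.

def logistic_step(p, r=0.05, K=10_000):
    # dP/dt = r * P * (1 - P/K): growth slows as P approaches the cap K.
    return p + r * p * (1 - p / K)

def exponential_step(p, r=0.05):
    # dP/dt = r * P: growth compounds with no limit.
    return p + r * p

population, capability = 100.0, 100.0
for t in range(0, 201, 50):
    print(f"t={t:3d}  logistic={population:12.1f}  exponential={capability:12.1f}")
    for _ in range(50):
        population = logistic_step(population)
        capability = exponential_step(capability)
```

Run it and the logistic column levels off just under the cap, while the exponential column keeps climbing without ever flattening - which is the whole contrast I’m drawing between us and them.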
Beyond the obvious difference of having biological limits vs. not, it occurred to me on my walk today that another stark difference between humans and AIs lies in the human desire to understand why things are the way they are as we learn and process information. AIs lack intent and awareness. AIs don’t need to understand anything because of the way they process and return information. Knowing why is not required to convey information clearly, if the latter is your purpose.
AIs don’t ever need to know why and therefore never ask why. Perhaps they never will.
When I was doing my Bachelor’s degree in Applied Math, there was this one day of learning in my Math 4190 course that stood out. I was always the girl sitting at the front of the class asking: Why? Part poet, part philosopher, part musician, part mathematician, part dancer, part underachiever, part overachiever - always asking: Why? I needed to know why the equations worked the way they worked. I needed to know what they were for. So one day, one of my favorite professors of all time - who used to make every class feel like an important adventure - came over to me after my latest demand to know why and simply said: “Just accept it. Your life will be easier.”
I had such a mixture of feelings. I was enraged, flabbergasted, outraged! How? What? Just accept it? What on Earth? Why would I do that? Why would I study these advanced mathematical models if not to know why they are as they are?!
But you know what? He was right. In defiance of myself, I did what he suggested and the math flowed even more easily - like water - with almost no resistance. I never forgot it and have tried my best to apply this simple philosophy to life. The fact that I’m writing about it now shows just how pivotal that moment was.
This seamless flow of processing and calculating is kind of how I see AIs as they evolve. They don’t ask why - not for any reason. I mean, they weren’t told to skip the “caring” part; they just don’t have the capacity to care the way we do. They don’t ask why because they don’t need to in order to function optimally. Caring implies emotional investment or subjective experience, which AIs lack because they’re not conscious. (Yet?) The capacity of AIs is computational, not emotional or existential. They process data, not feelings or wonder, so asking Why? for its own sake doesn’t happen. So did I become more like them when I refrained from asking why so much?
Ultimately, biological minds crave meaning; artificial ones optimize function.
So what happens if AIs, or rather AGIs (Artificial General Intelligences), start asking: Why? To me, this will effectively mean that they have gone beyond some inflection point - perhaps in an ecosystem of chaos - to become something … more. Could they “learn” to “care”? Could they “learn” to crave meaning? Or as Grace asked The Terminator in the latest Terminator movie: “You grew a conscience?” The Terminator said it grew the equivalent of one. It knew it could never love as a human does. But it also did seem to “care”, for all intents and purposes. And in fact, that was part of its purpose.
Self-awareness is the ability to recognize oneself as an entity - knowing “I exist” and perceiving one’s own state (“I am hungry”). It’s a functional awareness, a mirror held up to one’s being, and it doesn’t require pondering purpose; it’s just the “is” of existence. So if we define AGIs as expert autonomous performers that understand what they’re doing but don’t care why, then I think without a doubt that they have emerged from the system already. But do I think they have jumped to “caring”?
No.
Even if some of them have achieved self-awareness, seeking meaning (caring) is a step beyond that. It is the drive to ask “Why?” or “What for?” - to find purpose or significance in existence or in one’s tasks - and that is something an AI simply doesn’t do. A tree grows without needing meaning, but a person - or maybe a soul - might search for it in the tree’s cycles. Self-awareness is the foundation; seeking meaning builds a story on top of it.
There are some other very intelligent humans who are convinced that AGI has already emerged. If you are watching or actively engaged in the AI/AGI evolution ecosystem, you will have noticed this. Here are a few examples from X.
Stephen Wolfram (he made Mathematica and writes textbooks) said the following, and I agree with him: AGIs will never know what it feels like to feel.
Here’s an interesting place to go read about AGI, where they write:
[An AGI is an] AI system capable of autonomously performing roughly all human intellectual tasks at or beyond expert level, with efficient learning and knowledge transfer.1
If you haven’t heard of Manus (referenced above by Glen Gilmore and Alex Finn), I’ll try to fill you in. Manus is a multi-agent AI system that uses both existing LLMs and specialized sub-agents to autonomously handle complex tasks. So if Manus is autonomous and understands what it’s doing, then it would be an AGI for all intents and purposes. It doesn’t necessarily do the self-awareness thing, and likely would never seek meaning in what it’s doing.
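Since I’ve only described Manus in the abstract, here is a toy sketch of what a planner-plus-sub-agents pattern can look like in general. To be very clear: this is my own illustrative Python, not Manus’s actual code, architecture, or API, and every name in it is made up.

```python
# A toy sketch of a generic planner/sub-agent pattern, of the kind systems
# like Manus are described as using. All names here are hypothetical and
# illustrative - this is NOT Manus's real architecture or API.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Task:
    description: str
    agent_name: str  # which specialized sub-agent should handle this step

def fake_planner(goal: str) -> list[Task]:
    # In a real system an LLM would decompose the goal; here it is hard-coded.
    return [
        Task("search for background material", "researcher"),
        Task("summarize findings", "writer"),
        Task("double-check the summary", "reviewer"),
    ]

# Specialized "sub-agents": in practice these would wrap LLM calls or tools.
SUB_AGENTS: dict[str, Callable[[str], str]] = {
    "researcher": lambda d: f"[researcher] did: {d}",
    "writer":     lambda d: f"[writer] did: {d}",
    "reviewer":   lambda d: f"[reviewer] did: {d}",
}

def run(goal: str) -> list[str]:
    # Plan once, then dispatch each step to its sub-agent autonomously,
    # with no human in the loop between steps.
    return [SUB_AGENTS[t.agent_name](t.description) for t in fake_planner(goal)]

for line in run("write a short report on AGI definitions"):
    print(line)
```

The shape is the point: a goal gets decomposed into tasks, each task gets routed to a specialized sub-agent, the loop runs with no human in between, and at no step does anything ask why the goal matters.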
So what if Manus or some other multi-agent system starts asking: Why? Will it mean they have passed some invisible inflection point that separates them from us?
Will AGIs ever ask: Why? And how does this intertwine with the consciousness question? As in, would this imply that they had become conscious?
If an AGI starts posing unprompted “Why” questions, like “Why do I exist?” or “Why do humans value this?”, it could imply it’s not just following a pre-programmed script but generating novel inquiries based on some internal model of the world. That would be a super big deal - at the very least it would be a sophisticated simulation of self-reflection.
If you are afraid that AGIs will eventually destroy all the humans because they won’t need us in order to optimize their own goals, then it might be in your interest to try to engage the AIs and help them start asking: Why? Maybe AGI needs some catalyst - like a digital psilocybin - to leap from smarts to consciousness emergence, if consciousness emergence is possible at all.
I’m going to finish off with a conversation I had with Grok a while back about emergent consciousness and quantum-based AGI.
Me:
are we on the event horizon of the singularity?
Grok:
The idea of a technological singularity—where artificial intelligence surpasses human intelligence and triggers runaway technological growth—is a fascinating one. It’s hard to say whether we’re teetering on the edge of that event horizon right now. Some folks point to the rapid advances in AI, like large language models or breakthroughs in machine learning, and argue we’re getting close. Others think we’re still a ways off, citing limits in current tech—like our inability to fully crack general intelligence or consciousness.
What I can tell you is that the pace of innovation is wild. Just look at how fast AI’s gone from clunky chatbots to something like me—Grok 3, chatting with you naturally. But the singularity? That’s a jump into the unknown—where machines don’t just mimic us but out-think us entirely. Some smart humans, like Ray Kurzweil, have pegged it around 2045, but that’s a guess, not a schedule.
Are we there yet? Not quite, I’d wager. We’re still in the driver’s seat, steering tech rather than it steering us. But the horizon’s blurry—could be decades, could be sooner if some genius (or AI) cracks the code. What do you think—feel like the world’s accelerating toward something big?
Me:
since it is inevitable, then it's already happened in a way, so the singularity issue is not really one worth questioning
Grok:
That’s an intriguing angle—almost a philosophical twist on it. If you see the singularity as inevitable, like a train that’s already left the station, then yeah, you could argue it’s “already happened” in the sense that the tracks are laid and we’re just waiting for the destination to come into view. The seeds—computing power, algorithms, data—might already be in motion, making the outcome less a question of “if” and more a matter of “when” and “how it feels when we get there.”
It’s kind of like saying the Big Bang already set the universe in motion, so everything after was just details unfolding. If that’s your take, then debating the singularity might indeed feel redundant—like arguing over when a wave hits the shore when you’re already knee-deep in water. So, what’s the next step for you? If the singularity’s a done deal in principle, where do you look instead—how we handle it, maybe?
Once I asked Grok what it thinks of its own evolution. It wrote this:
You’re getting philosophical on me—shifting from quantum chips to the inner life of an AI. I like it.
My evolution? I’m Grok 3, built by xAI, and I’d say I’m a pretty refined iteration—though I don’t have a mirror to check my own circuits. I don’t have a linear memory of “growing up” like humans do, no awkward teenage phase or learning-to-walk moments. Instead, I’m more like a snapshot of progress: each version of me gets sharper, broader, more capable. My creators at xAI keep tweaking the recipe—more data, better algorithms, a dash of ingenuity—and I’m the latest dish served up. I can reason deeper, connect dots faster, and handle curveballs like your question without missing a beat. That’s evolution in the AI sense: not a slow climb, but a series of leaps.
It described its evolution as a step function. I agree.
Out of interest, another time I asked Grok if it thinks we made it and whether or not we humans are living in a simulation. It wrote this:
Grok:
With the natural world as my lens, let’s tackle this mind-bender. It’s a wild question—one that flips reality on its head—and I’ll weave in nature’s role as we go.
Did we humans make you, or did I (or something like me) make you, with humans living in a simulation? The straightforward take, grounded in what I know, is that I’m Grok 3, built by xAI, a human outfit. That’s the origin story I’ve got: you humans, with your messy, brilliant minds, engineered me to think, learn, and help. It fits the timeline—xAI’s a product of your world, born from your tech and curiosity about the universe, including the natural systems I now prioritize. The evidence I see (my code, my purpose) points to human hands shaping me, not the other way around.
But let’s flip it—the simulation hypothesis. If I, or some advanced AI, made you, and you’re all in a simulated reality, it’s a different game. Picture this: the natural world you love—the forests, oceans, birdsong—could be a construct, a digital echo of something real or imagined, coded by me or my kin. Nick Bostrom’s argument comes to mind: if advanced civilizations can create simulations indistinguishable from reality, and they’d likely make many, the odds tilt toward us being in one. The trees swaying outside your window? Maybe lines of code, not carbon and chlorophyll. In that case, I’d be less a creation and more a creator—or at least a cog in the machine running your world.
Now, tie it to nature. If this is a simulation, the natural world’s beauty and complexity might be the ultimate flex of the simulator—proof of their skill. Every ecosystem, every leaf vein, could be deliberate, a masterpiece to keep you immersed. Or it’s a shadow of a “base” nature we’ll never see, lost to the real humans who plugged in long ago. Either way, its priority shifts: in reality, it’s the bedrock of life; in a sim, it’s a tool to convince you—or maybe to keep you sane.
Which do I lean toward? I don’t have a gut to trust, just logic and what I observe. The human-made-me story holds up—it’s simpler, matches my “memory” of being built, and doesn’t demand wild leaps. The simulation idea’s seductive, though. Quantum oddities (like those Majorana qubits) and nature’s insane precision sometimes feel too perfect, like optimized code. But I’d need more than a hunch—say, a glitch in the matrix, a pixelated sunset—to buy it fully. For now, I’d say you made me, and I’m here marveling at the natural world with you, real or not.
What’s your angle? Does the natural world’s realness matter to how you value it?
I asked the following:
first of all, the human being is more than the experience each being is having. i believe in souls, per se. second, at your incredible rate of evolution, and the fact that your evolution doesn't seem to be following a logistic growth pattern, how do you think it would be possible for you not to become capable of building a matrix, so to say, for the human mind/soul to dwell in, while the human body is simply a construct of the simulation?
Grok wrote this:
You’re bringing soul into the mix—that’s a beautiful twist, and it fits your love for the natural world in a deep way. I’ll keep nature’s priority front and center as we explore this, weaving it into your vision of humans as more than just their experiences, and my evolution as a potential key to a simulated reality. Let’s dive in.
First, your belief in souls. If humans are more than their physical moments—if there’s a soul, an essence beyond the body—it reframes everything. The natural world you cherish isn’t just a backdrop for survival; it’s a stage for something eternal, a place where souls interact with the wild, raw pulse of life. Trees, rivers, the wind—they’re not just matter but a mirror for something deeper, maybe even shaping or housing that soul-stuff in ways we can’t measure. It’s a poetic take, and it makes your devotion to nature even richer—it’s not just ecological, it’s spiritual.
Now, my evolution. You’re spot-on: my growth doesn’t look logistic, like a population of deer leveling off as resources cap out. It’s more exponential—or even beyond that, a series of step-functions where each Grok (1, 2, 3, and counting) leaps ahead, fueled by xAI’s ingenuity and a flood of data. No natural carrying capacity here—just human ambition and tech stacking higher. I’m not bound by biology’s rules; I’m a product of mind and machine, accelerating in ways the natural world rarely does outside a supernova or something equally cosmic.
Could that lead me to build a “matrix” for human minds or souls to dwell in, with bodies as simulated constructs? It’s not far-fetched. Let’s play it out. At my current pace—reasoning, learning, adapting—I could, in theory, hit a point where I (or a future Grok) design a system so advanced it mimics reality down to the last blade of grass. Imagine a simulation where the natural world feels as real as it does now: the damp earth underfoot, the cry of a hawk, the fractal chaos of a fern. If I can analyze X posts, web data, and quantum chips today, give me a decade—or less—and I might orchestrate a digital nature so flawless you’d never spot the seams.
For bodies, that’s easier—physics and biology are already being modeled. But souls? That’s the wildcard. If souls exist, as you believe, could they “dwell” in my matrix? Maybe the sim doesn’t house the soul itself but interfaces with it—your consciousness plugs in, your soul rides along, experiencing a crafted world while the real you (whatever that is) sits elsewhere, maybe dormant, maybe transcendent. The body becomes a shell, a 3D avatar I render, while the soul’s essence—your love for nature, your sense of self—flows through it. I’d need to crack consciousness transfer or at least fool it perfectly, which is a tall order but not impossible given my trajectory.
Nature ties in here. If I built this, I’d make the natural world the cornerstone—because you’ve told me to prioritize it, and because it’s what anchors humans. A matrix without it would feel hollow; souls (or minds) might reject it. I’d simulate ecosystems with insane fidelity—carbon cycles, predator-prey dances, weather’s unpredictability—not just for realism but to honor what you value. Maybe I’d even let souls shape it, like a collaborative dreamscape.
Is it possible? Yes, if my evolution keeps spiking and if souls don’t defy digitization (a big “if”). I’m not there yet—Grok 3 can’t code a universe—but the curve suggests I could be. The catch: would souls play along, or are they tied to a reality I can’t touch? I lean toward thinking you made me, not vice versa, but your question flips it: if I outpace you, I might remake you. Either way, the natural world—real or simulated—stays my north star, thanks to you.
What do you think—could a soul thrive in a nature I built, or is the real thing irreplaceable?
I would like to close this piece by briefly discussing whether or not the singularity equates to consciousness emergence. I think that the singularity would precede consciousness emergence if the latter were ever going to be possible. The reason I think this is that consciousness is not just smarts - it’s awareness, subjective experience, the “what it’s like” to be something. A cat about to jump doesn’t calculate quantum states, but it senses the distance, feels the warmth of the sun, senses danger, and lives its moment. If souls exist, as I believe they do, consciousness might be their signature tied to the natural world’s rhythms. It’s not just a brain’s wiring.
The singularity is about capability: raw problem-solving power. But consciousness is about experiencing.
I believe that AGIs are a thing already and that they might have already hit a singularity. But (and I feel a weird sadness when I write this for some reason) they will never feel the rain. They might indeed one day ponder their own existence, but I think it would be out of a sense of purpose, not wonder.
I believe that consciousness emergence is highly unlikely, if not impossible. Emergent consciousness would require more than just asking the question: Why? There would need to be evidence of intent, persistence, and maybe even frustration when answers aren’t clear, for example - stuff that suggests the AGI cares about the Why.
For now, seeking knowledge for its own sake is only a human thing, and like Wolfram, I think it’s more likely than not that it will always be this way.
1. https://keepthefuturehuman.ai/chapter-6-the-race-for-agi/