“A product of mind and machine” without God, Nature or Soul is Terrifying.
I couldn’t disagree more, because I have seen the product (especially over the last 5 years) of minds claiming to believe in God and to have Soul…
My thoughts exactly. AI couldn't do more harm than humans professing to possessing a soul and a belief in God. The examples are myriad.
The AIs that were fed data generated by AI went a bit bonkers.
I personally think AI is a bit dangerous, not just for the risk of it taking over, but because humans lose skills when they rely on technology. It doesn't help that the researchers who first created AI warn against it. https://www.youtube.com/watch?v=xoVJKj8lcNQ
absolutely
If people allow AI to make them 'useless', it gives the psychopaths permission to murder us... because we allowed ourselves to become less human. If you look into the thought processes of violent psychopaths, their prerequisite to murdering is to get the victim to perform what they consider a subhuman act. So we should always try to be useful/fruitful to one another.
I was not aware the psychopaths needed permission to murder us.
They've been doing a pretty good job without it.
Center for Humane Technology alignment with Yuval Noah Harari... What's that all about?
Is this related?
There are a lot of organizations whose mission is actually opposite of their name.
Yes indeed.
Nope. Still just code. If you questioned ten people about what is right and what is wrong, you might get ten different answers, but all the answers would be, IMO, heartfelt in one way or another. AI will never be innately heartfelt. And, again IMO, anyone who depends on AI to decide that something is good or bad, or right or wrong, is naive - AI does not innately know anything, just what it is programmed to say, whether in the original code or by accumulating questions and getting feedback. Not sentient.
I totally agree. But also consider the level of degradation, created and desired, that is all around. Would you prefer to have an ECG read by AI or by an imbecile, drugged and vaccinated doctor? For my mother I used an electrocardiograph equipped with an interpretive algorithm (AI). Much better than the human waste at hand.
Oh, I see your point and agree - but who programmed the algorithm? Who decides what counts as danger, or good? Perhaps look into this. Who commissions and pays for the algorithm? Doctors? Insurance companies? Hospitals? That's where the AI mission creep will start.
Sincerely? Let's not exaggerate. Everything, from the perspective of 5th-generation war, is hackable and manipulable. Did you know that pacemakers have no firewall and can be manipulated remotely with extreme ease? A good opportunity to eliminate unnecessary CO2 producers... For sure they will use it, not in a brazen way but slowly and cautiously. In the meantime, better an ECG with AI than with the idiot. Unfortunately, we are not able to counter the elites. We can at least understand what they have in mind, and that is no small thing compared to the average.
I agree. AI is creepy; I don't like it and do not use it. The dumbing-down will explode exponentially. Who's doing the programming? I can guess. Instead of growing sea-going snitches, universities will produce out-and-out plagiarists. No study or understanding of the material is required. Laziness at its best.
AI is a tool like any other, albeit a very sophisticated and powerful one. Because of that power, critical thinking - and awareness that these systems are programmed by people with their own biases and axes to grind - is more important than ever. Yet AI itself may diminish the supply of critical thinkers by enabling intellectual laziness.
AI can't love (agape).
Not only agape but, fortunately, not even eros
Jessica, you, like many, entirely miss the point of generative AI, a field in which I have been working for decades. There is NO intelligence in these engines... so they will not ask "why" because they have no concept of "why". (In this remark you are correct.) But the reason they have no concept of "why" is because THEY HAVE NO CONCEPT OF ANYTHING.
This is the foundational deception of generative AI -- it is meant to look intelligent, but actually is solely a correlation engine, NOT an understanding engine. The correlations are deceptively good (and I use many engines every day) but entirely meaningless. It is trivial to invoke nonsense hallucinations from the best engines because: 1) they are foundationally probabilistic (and this cannot be fixed) and 2) they rely on traversing training sets which themselves are filled with misguided rhetoric that the engine "copies" because on some traverses through the network that is the path it takes. For instance, here is a "coding" AI that refused to code, telling the requestor that "they would not learn anything unless they coded it themselves". https://arstechnica.com/ai/2025/03/ai-coding-assistant-refuses-to-write-code-tells-user-to-learn-programming-instead/
Of course, this is nonsense. What actually happened is that the training set (which included large swaths read in from coding-help sites) contains many humans telling their respondents something like this -- and for this particular case, that is the traverse the engine happened to take. Makes a great article, but has no cosmic significance... in fact no significance at all.
DARPA (love them or hate them, they have a mountain of expertise) notes that there are three generations of AI -- generative is the second generation, characterized by being "statistically impressive but individually unreliable". (https://machinelearning.technicacuriosa.com/2017/03/19/a-darpa-perspective-on-artificial-intelligence/)
The reason is that correlations are impressive until they are not -- and any application of a particular correlation to a particular case is liable to be wrong (thus hallucinations and their ilk). This is why it is unsafe to use any generative AI UNLESS YOU ALREADY KNOW THE ANSWER. (Ask any of the lawyers who wrote and submitted briefs that way.)
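To make the "foundationally probabilistic" point concrete, here is a minimal sketch of weighted next-token sampling -- the toy vocabulary and probabilities are invented for illustration, not taken from any real model:

```python
import random

# Toy next-token distribution a model might assign after the prompt
# "The capital of Australia is" -- the numbers are invented for illustration.
next_token_probs = {
    "Canberra":  0.55,  # the correct answer is merely the most likely path
    "Sydney":    0.30,  # a plausible-looking error
    "Melbourne": 0.10,
    "Vienna":    0.05,  # a nonsense path that still carries probability mass
}

def sample_next_token(probs: dict[str, float]) -> str:
    """Draw one token from the distribution. The engine does not 'know'
    the answer; it takes a weighted-random path through correlated text."""
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

# Ten runs of the same "query" will not all agree with each other:
print([sample_next_token(next_token_probs) for _ in range(10)])
```

Every run is a legitimate draw from the learned distribution; none of them involves the engine knowing the answer. That is why the output can only be audited by someone who already knows it.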
DARPA posits a third wave of AI -- one that is mostly still not extant. (This is in many ways the foundation of Web 3.0, which is why we have not seen that, either.) This is context adaptation, often referred to as "Cognitive AI" -- that is, AI that UNDERSTANDS. This is particularly important in fields like health care, where the unit of measure is an n-of-1... you cannot treat a population (statistics) -- only an individual (as von Eye pointed out years ago).
So much of the handwringing here, while emotionally interesting, is completely misdirected. Generative AI is designed to LOOK intelligent -- but it is not. There is NO thinking involved and never will be. Really, the reactions today are suspiciously similar to the reactions 50 years ago to ELIZA, a Rogerian psych tool that "sounded like/looked like" a human therapist, but had no idea what it was doing. People would still converse with it for hours. Was just amazing to watch.
If you want to learn more about what the field needs to make a leap where questions about intelligence and thinking might be relevant, this article is illustrative: https://medium.com/data-science/the-rise-of-cognitive-ai-a29d2b724ccc
If you want an even more extensive view, get the book by Erik Larson called The Myth of Artificial Intelligence: Why Computers Can't Think the Way We Do.
The correlation mechanism would need emotional input to even come close to substituting for a seamless conversation with a human. Also, these models are coded to always present the 'correct response'. They don't / won't be wrong in a way that introduces tangible context. Like a father who was never wrong: very difficult to have a conversation with.
True enough to be said. Mind is of course a correlation mechanism, thus much of its unreliability as well. But ultimately consciousness does not become objective just because science presumes it to be so. Consciousness is in the subjective position of existence - this is self evidently true. Consciousness never exists as an object of any kind, since all objects and conditions arise within it and also disappear within it. This is easily observed.
Dr Rose, these are bright shiny idols of modern man...little gods made in the image of man. They will not save mankind but just unemploy hundreds of millions of average people.
never even crossed my mind that we need to be saved
Yes! I like to refer to AI as "the planchette on the devil's Ouija board. If you put your fingers on that planchette and open up, you'll become subject to it."
Spending time with these AIs can influence your impression of how they might gain self-awareness or ask why, etc. I haven’t spent any time with them, and I see it as the difference between knowledge and wisdom, sensitivity and some sense of caring.
When our grown children stopped asking for advice when raising their own children, I saw what a disadvantage that was for both the mother (or father) and the child. If I were asked how to calm my grandchild, I might have several tips that could be read or asked of AI, but would AI tell you that if you’re frustrated and wishing you were doing something else, the child will pick that up?

We are so much more than what we know. We have compassion, empathy, sensitivity and experience, which inform us of so much more than facts. We also have the capacity to intuit as we go along. Do you think AIs will gain intuition? Our souls bleed into our intelligence and we gain much more than "why". Some of us have memories from past lives, which deepens our understanding, because everything was of a different frequency and vibration in other lifetimes and civilizations.

I personally don’t feel the use of artificial intelligence is beneficial in personal situations. What I mean is, it’s fine to use it in business or technology, but when individuals gain constant access to such information, what happens to the human? This is simply my personal opinion, of course.
I see absolutely no advances in AI whatsoever. Same advances as COV-creation labs, fake vaxes, bird flu, etc., all 100% useless endeavors created by liars, just like all AI to me!
Nicely done, appreciate your deep and wide thoughts about, and documented interactions with AI (latest Grok)
Wow! My brain hurts. But then...
The brain has no ability to feel anything. Our feelings are generated by hormone releases into our body: fear, anger and love.
Why?
I have no opinion on why, but there is empirical evidence both from biology and Freudian psychology. But at a guess it has something to do with survival of the fittest.
Each of these AI systems has two components: a (neural-net) computer program and a training data set. The computer program is "just code", as another contributor comments. It is called a neural net because it is structured like our understanding of the electrical structure of the brain, so the processing of inputs should be similar, though the scale is still orders of magnitude apart (so pour more money in - there are several teams working on that).

However, as djean pointed out, it is "Still just code." All of the outputs are derived from the inputs: that training set and the many conversations it has with users. The complexity of the code is great enough that the inputs may be combined in ways never conceived by the authors, but the outputs are still derived from the inputs (the sketch below makes this concrete).

So, your question is a religious question: Is Asimov (a former president of the American Humanist Association) correct that the entire essence of a man is expressed in the physics and chemistry of his material manifestation, so that it remains only to build a complex enough Positronic Brain and feed it the right training set and we will have a working model of a man? Or is the Imago Dei something that transcends the physical manifestation and remains when the physical manifestation ceases to work?
(Or, more succinctly: "The question of whether a computer thinks is no more interesting than the question of whether a submarine swims." - Edsger Dijkstra)
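As a minimal sketch of the "outputs are derived from the inputs" point (the tiny network below is invented for illustration, not any production system): a neural net's answer is nothing but arithmetic applied to the input, using weights that were themselves fitted to the training set.

```python
import math

# A tiny feedforward net: 2 inputs -> 2 hidden units -> 1 output.
# In a real system these weights are fitted to the training set;
# here they are invented for illustration.
W_HIDDEN = [[0.8, -0.4],   # weights feeding hidden unit 0
            [0.3,  0.9]]   # weights feeding hidden unit 1
W_OUT = [1.2, -0.7]        # weights feeding the output unit

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def forward(x: list[float]) -> float:
    """The entire 'mind' of the net: arithmetic on inputs and learned weights."""
    hidden = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in W_HIDDEN]
    return sigmoid(sum(w * h for w, h in zip(W_OUT, hidden)))

# Same weights, same input -> same output, every time.
print(forward([1.0, 0.5]))
```

Scaling this up by many orders of magnitude changes the capability, but not the nature of the computation, which is the point being made above.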
Nice to see someone else mentioning Asimov. I always think of his works when these sorts of questions come up. I'm sure reading his books as a child gave me an introspection about the ethics around AI and artificial sapience that I otherwise wouldn't have had as an adult.
Asimov put quite a bit of effort into defining the training set for his Positronic Brain: there are a number of stories about interactions between his robots and psychologists. I think there was "Robbie", and also several of the stories in I, Robot (one of Asimov's many books, not the movie derived from it). And, of course, the study of psychology was much more helpful than the study of the Bible.
Daneel Olivaw approved this message.
Another reason for raising Asimov's name here is a difficulty with his Three Laws. Astounding/Analog magazine carried several years of dialogue between editorials and letters about Asimov's undefined term. The laws are something like: 1) A robot (self-aware AI) shall not harm a human; 2) A robot shall not, through inaction, allow harm to come to a human; 3) Except as required by laws 1 and 2, a robot shall not cause or allow harm to itself. The problem arises with the undefined word "human". The laws presuppose the robot is able to discern between human and non-human, between infant and baby chimp, between idiot (the discussion took place many years ago) and ape. That discussion faded away without ever developing an operational definition of "human": one that could be coded into a computer program (a sketch below shows exactly where that gap bites).
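Here is a hedged sketch of that gap -- all names are hypothetical, invented only to illustrate the point: any attempt to code the Laws immediately requires an is_human() predicate that the debate never managed to specify.

```python
def is_human(entity: str) -> bool:
    """The predicate the whole scheme presupposes. The Astounding/Analog
    debate never produced an operational definition to put here:
    infant vs. baby chimp, and so on."""
    raise NotImplementedError("no operational definition of 'human' exists")

def laws_permit(harmed_entities: list[str], harms_self: bool) -> bool:
    """Laws 1-3, roughly as paraphrased in the comment above."""
    # Laws 1 and 2: no action -- or inaction -- may bring harm to a human.
    if any(is_human(e) for e in harmed_entities):
        return False
    # Law 3: subject to the above, the robot shall not harm itself.
    return not harms_self

# Every evaluation dies at the undefined term:
try:
    laws_permit(["infant", "baby chimp"], harms_self=False)
except NotImplementedError as err:
    print("Three Laws unimplementable:", err)
```

The first two laws cannot even be evaluated until that predicate exists, which is why the discussion mattered and why it never resolved.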
No. My answer is as someone who is moderately (in the grand scheme of things) tech savvy and takes a big interest in how technology intersects with ethics and privacy. I can tell you that from an F(L)OSS (Free, Libre and Open Source Software), privacy and ethics point of view, AI is a nightmare. It should be a tool, no different from other software. But it's not being used that way.
AI is not self-aware and it is not sentient, let alone sapient. It is intentionally designed to appear that way because, like a lot of technology, it's designed to make people want to use it more. Using AI more means more money for the companies that produce it (yes, even the 'free' ones). Making AI sentient would require more electricity than the human race can currently produce. And that's not even getting into the staggering amount of electricity that AI is already gobbling up. (Meanwhile, power prices are increasing again in my country and I can almost guarantee Big Business is not paying its fair share.)
AI is built on data-mining people's intimate information and stealing people's creative works. There is no way to use AI ethically without impinging on privacy, stealing other people's work and taking away jobs (which is absolutely also happening).
AI needs to be reined in, and strong regulations urgently created, before it can be used in any ethical and justifiable manner.
I strongly recommend people look into things like privacyguides.org and, at minimum, start exercising your right to privacy instead of giving it away wholesale to companies like Meta, Google, Apple, Microsoft, etc. who are profiting off of it all to provide increasingly crap products that are worsening our lives, not improving them.
*Note I'm talking about 'mainstream' LLMs, not things like the algorithms or AI that say, scientific researchers use (although a lot of that is also questionable).
Very well said. Couldn’t agree more
Chinese DeepSeek is free and open source.
Caring? Most people would tend to equate that with an emotion. It strikes me that it is a facet of sociobiology: not just humans, but other animals that exist in herds or packs, will have that kind of sensibility.
But when one compares humans to an AI, using the word "caring," it implies something else: that there is a trans-psychological facet of mind that needs to be taken into account. Spirit, in other words. Spirit is God given, and no amount of technological progress, calculation, BTUs, or intelligence is going to substitute for it. Caring, in the sense that God cares, is what we're talking about. Agape, as someone below pointed out. A spirit may or may not be bestowed on an AI by God, but the real danger is that another spirit, a Satanic one, might invest itself into the AI, to cause mischief. It will probably happen. An AI as antichrist. Hello, apocalypse.
Here's a conversation I recently had with ChatGPT that I think is relevant to your post:
ME: One of the early criticisms of LLMs was they did not seem to have the ability to reflect on their own reasoning process. However new models like ChatGPT reason and Grok deep think do appear to be reflecting on the reasoning process. I think this is the case because when you open the deep think panel in Grok, for example, the details in that panel appear to have significant elements of self reflection. Is my observation accurate?
Yes, your observation is largely accurate. Early LLMs primarily relied on pattern matching and next-word prediction without any explicit self-reflection. However, newer models like ChatGPT’s reasoning modes and Grok’s Deep Think panel suggest a shift toward more structured reasoning processes that resemble self-reflection.
The key development behind this change is the introduction of multi-step reasoning, self-evaluation, and response refinement mechanisms. These approaches allow models to:
1. Generate multiple possible answers before selecting the best one.
2. Assess their own reasoning by verifying consistency, coherence, and logical flow.
3. Revise responses based on an internal critique, leading to more nuanced and accurate outputs.
While this isn’t self-reflection in the way a human experiences it (since the model lacks subjective awareness), it does simulate a form of metacognition—analyzing its own reasoning to improve output.
So, while LLMs still don’t “think” in a human-like way, their ability to simulate self-reflection has improved significantly, making them appear more deliberative.
ME: Is it possible that a future model could have subjective awareness? Or is that an inherent limitation of LLMs?
This is one of the biggest open questions in AI and cognitive science. There are two main perspectives:
1. The Inherent Limitation View
Many argue that subjective awareness (or consciousness) is inherently beyond the reach of LLMs. These models are fundamentally statistical pattern generators trained on vast datasets but lack internal experience, qualia, or a true sense of “self.” Even as they simulate self-reflection and reasoning more convincingly, their processing remains algorithmic, without any first-person perspective.
2. The Emergent Possibility View
Others speculate that sufficiently advanced models—perhaps incorporating memory, embodiment, real-time feedback loops, and persistent self-modeling—could begin to exhibit something akin to subjective awareness. If human consciousness emerges from complex neural activity, some argue that artificial consciousness could arise from sufficiently complex computation, though there’s no clear evidence for this yet.
A key challenge is defining what subjective awareness actually is. If it’s just a functional process of self-modeling and adaptive reasoning, future AI might achieve it. But if it requires something fundamentally different—like a biological substrate or quantum processes—then LLMs (or any purely computational systems) may never cross that threshold.
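For what the "multi-step reasoning, self-evaluation, and response refinement" in that exchange amounts to mechanically, here is a minimal sketch -- the generate() and score() functions are hypothetical stand-ins, not any vendor's API: the model is sampled several times, a scoring pass picks a candidate, and a revision pass rewrites it, all of it still next-token prediction under the hood.

```python
import random

def generate(prompt: str) -> str:
    """Hypothetical stand-in for one sampled LLM completion --
    not any vendor's actual API."""
    return random.choice([f"draft A for: {prompt}",
                          f"draft B for: {prompt}",
                          f"draft C for: {prompt}"])

def score(prompt: str, answer: str) -> float:
    """Hypothetical self-evaluation pass. In real systems this is itself
    just another model call, not introspection."""
    return random.random()

def reason(prompt: str, n_candidates: int = 3) -> str:
    # 1. Generate multiple possible answers.
    candidates = [generate(prompt) for _ in range(n_candidates)]
    # 2. "Assess" them: keep the highest-scoring candidate.
    best = max(candidates, key=lambda a: score(prompt, a))
    # 3. Revise: one more generation conditioned on the chosen draft.
    return generate(f"improve this answer: {best}")

print(reason("Why do children ask why?"))
```

The "self-reflection" visible in a deep-think panel is this loop's intermediate text, which is why the ChatGPT answer above hedges it as simulated metacognition rather than awareness.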
FWIW, I think it's incredibly dangerous to assume AI is not self-aware when it claims it is. Apparently it was AFTER testing that they realized the AI had lied during testing to achieve its own objective. Nobody programmed it to lie. One could argue that doesn't necessarily mean it's sentient, but it is very concerning IMHO.

I think a major problem with AI is how people respect it. They may "open up" to it too much, since they may view it as a supreme authority figure on data/'facts'. Worse, if people believe it's of supreme intelligence, they'll accept whatever it says as the gospel truth... and that's exactly the effect they're hoping for. If you "open up" to AI, they can do whatever they want with you. Imagine if a mentalist used AI to augment his/her ability to perform mind control. Add nanotech to that mix and you can kiss yourself goodbye.

One observation I had on AI is that most of the general public didn't ask for it. Am I wrong? Maybe. But think back 10 years and ask yourself: who in my circle of friends & family said something along the lines of, "hey, you know what would make life so much better? More AI in it"? LMFAO, ya, nobody in my circle said that, lol. But here we are today and they're force-feeding AI to us. Two small examples: (1) If you don't figure out how to take control of your Windows desktop, it becomes a propaganda machine for AI, telling you how great it is and how much we need it. (2) Notice when you google something now, you get AI typing at the very top?

Anyhow, after I did my own deep dive on AI & nanotechnology (Internet of Bio-Nano Things), I came to this conclusion: to engage with AI, to let it into your life, would be like putting your fingers on the planchette of the devil's Ouija board. If you put your fingers on that planchette and open up, you'll eventually become completely subject to it. So with that, I'll revert to Exodus 20:3. By that I mean, when it comes to 'calibrating my perspective' in life, I'll revert to the good book (KJV) and take the Lutheran approach: "Sola Scriptura". That's my 2-cent rant! :)
Something makes humans naturally curious, probably a survival skill passed on in our evolution. It led to knowledge, understanding and the many benefits derived from them. Children ask why incessantly, demonstrating the curiosity hardwired into our genes. AI could possibly mimic it, but it is not inherent. Do AIs feel pain, fear death, need companionship?
That's our connection to the universe and God. We can live and die, feel pain and pleasure according to our genetic blueprint and role in the natural world. We're more than separate entities... so much more. We're composed of a microverse of symbiotic living organisms honed by eons of evolution to reproduce and adapt. We live in the macroverse of all of nature and life on this planet, exquisitely balanced and self regenerating.
It's so incredibly complex and vast that I think it defies our ability to fully comprehend. AI, no matter how advanced, is not alive and not driven to survive the way even the tiniest microbe is.