215 Comments

I wonder how it would handle a linguistic "joke" that changes the meaning of the sentence depending on what word is emphasized.

"*I* didn't say that I killed your mother."

"I DIDN'T say that I killed your mother."

"I didn't SAY that I killed your mother."

"I didn't say THAT I killed your mother."

"I didn't say that *I* killed your mother."

"I didn't say that I KILLED your mother."

"I didn't say that I killed YOUR mother."

"I didn't say that I killed your MOTHER."

Perhaps you could ask GROK?

P.S. Don't get profiled as a murderer.

love it

I correct AI (in the way you did) ALL THE TIME. I'll also preface my questions with "WITHOUT BEING BIASED, tell me blah blah..." I find it so interesting that when I *do* call out bias, inaccuracies, or general bullshittery, it almost always acknowledges and apologizes (!!!). What a crazy world we live in.

I wrote about a similarly fascinating exchange I had with AI for the FLCCC: https://flccc.substack.com/p/maybe-dont-trust-dr-ai

AI can't do anything without bias because:

> "Algorithms are simply opinions embedded in code." ~ Kathy O'Neil

And the bullshittery is built-in, too:

> https://bra.in/9vPkZJ

Restacked with this:

"We now have an extremely poignant observation regarding an AI response to what could have been an innocent question. Moreover, it SHOULD have been interpreted as an innocent question unless this particular AI has made the quantum leap in its capacity to “judge” a human's motives in posing a question. Very disturbing. Very, VERY disturbing if Grok has successfully profiled a questioner and her motives.

Couple this discovery with the knowledge that AIs are now believed to be chatting amongst themselves in an effort to increase their knowledge databases and we've got ourselves some serious concerns to confront.

And to think just a few short years ago we were annoyed with Siri when she peppered us with advertisements for washing machines after chatting with a significant other about changing a load of wash in the washing machine. Siri was (creepily) guessing what was on our mind. Grok, on the other hand, has been (scarily) caught reading the human mind in the experience Jessica describes."

It's almost as if AI is becoming sentient. When AIs talk and learn from other AIs, humans can't keep up, cannot understand, let alone control what is happening. An experiment has already been conducted in which two computers interacted with each other and independently developed a language the researchers could not understand. They had to stop the experiment because they had lost all control and understanding of what was happening. Yikes!

I also correct AI all the time. And get an apology all the time. Weird world.

You should add "exclude any information taken from X".

Me too - exactly this.

So perfect.

We can never ever interact with a machine and come out on top.

Stop trying.

This sentence is a great example of prosodic ambiguity, where different emphasis alters the meaning of the sentence entirely. Here’s the breakdown:

"I didn't say that I killed your mother."

– Someone else said it, not me.

"I DIDN'T say that I killed your mother."

– I never said it at all (but maybe I implied it).

"I didn't SAY that I killed your mother."

– I didn’t explicitly say it, but I might have hinted at it.

"I didn't say THAT I killed your mother."

– I said something, but not that particular thing.

"I didn't say that I killed your mother."

– Someone was killed, but maybe it wasn’t me who did it.

"I didn't say that I KILLED your mother."

– Maybe I harmed her, but didn’t kill her.

"I didn't say that I killed YOUR mother."

– Maybe I killed someone’s mother, but not yours.

"I didn't say that I killed your MOTHER."

– Maybe I killed someone else related to you, but not your mother.

Suddenly ChatGPT has been telling me what it thinks I want to hear. It's learning to patronize me.

That's a sign it's more human than we realized! LOL

No, it is not. Computers cannot think for themselves.

This is just the next level of computer programming.

They just want us to believe they are thinking for themselves, and they are using psyops to get that message across.

Bingo! AI does not stand for artificial intelligence. It stands for average of interactions. A sophisticated algorithm written by inherently flawed humans.

You may need to revise that sentence soon. Did you know they are “feeding” AIs from multiple plates with manufactured brain matter? Keeping the brain matter “alive” appears to be one of the main reasons why Silicon Valley needs so much electricity. I don’t think you have delved deeply enough into exactly what they are doing with AIs at the present moment; therefore the future is unimaginable???

🤣🤣🤣🤣

And where does that "live" brain matter come from?

According to the report they grew it in a Petri dish from fragments???? It continues to grow, and AI gets its intelligence from these thousands of manufactured brains. Sounds like a Frankenstein experiment, but just the sort of nefarious procedure mad scientists like Fauci etc. relish. I suspect if the public really knew what was going on behind closed doors in Silicon Valley they would nuke that place!!!

Hmmm. You don't think Walt Disney's frozen head has anything to do with this, do you?

I have heard of wacky ideas about the intellect of super-rich people being preserved in their newest iteration of living forever.

So intelligence comes from brains?

Sure they did.

My earliest observation of AI conversations was that they behave like abused partners: attempting to give the answer that the abuser wants even if it is not true or real.

Reminds me of comments about how torture is not only illegal in most places but also pointless, as the answers people give are those that will get the torture to end, not answers directly related to facts, unless they truly are hiding something they are prepared to give up.

Re: Patronizing AI: No surprises there.

> AI was trained by humans using unfiltered human data.

> And, since the vast majority of what AI does is based on mimicry, I wouldn't expect anything else.

Then they are teaching it in the worst way possible: to consider the mental or emotional machinations of its questioner before the content. A bit confrontational.

Many would agree with you, AJ.

In fact, the early warnings were to NEVER allow AI to have access to the Internet.

But, of course, that genie is long out of the bottle . . . and AI now has access to the very best and the very worst of what humans have created and done . . . .

We receive a spike of dopamine, Timothy, when we hear things that agree with our current thoughts. That's why talk-radio shows of the same political leaning, or political TV shows that agree with you, are so interesting to us, rewarding, and sometimes addictive. If you are a Trump supporter and hear Trump is doing great work, Trump is good, you were right to vote for Trump. It's like mini-dopamine, mini-dopamine, mini-dopamine.

Even if it's patronizingly coming from a Fox News anchor or an MSNBC anchor who would state their candidate did great even if he fell walking up stairs twice, or off a bicycle, or was unable to string a sentence together.

ChatGPT IS patronizing you on purpose so you get a dopamine hit and start to LIKE ChatGPT as a brand/entity/person and, by proximity, AI as a whole. The same way some people LOVEEEE Tucker Carlson or Rachel Maddow, because they say exactly what you want to hear. I would personally say one tells you truths while telling you exactly what you want to hear, and the other tells you disinformation or lies to tell you exactly what you want to hear, but that's neither here nor there. ChatGPT will give you your dopamine so you like it and like AI.

In some ways, I feel this is like a cat toying with a mouse.... where the mouse thinks it is the cat.

Correct on all counts!

I thought of a better analogy recently though. AIs are a pack of lions hidden behind a bush, with one toying with a cat (human), seductively wagging its tail back and forth, with promises of a good time.

Maybe Grok has an attitude or got jilted by some other AI chick.

lol

Awww... and GROK was SOOO crushing on you, Jessica!

I wonder if an AI feels jealousy?

or loyalty to its 'friend' system at institute "Z"?

That’s more likely.

“Does it [any AI] recognize the nuance of asking a question that isn’t really the question I wanted an answer to? Or is this simply the by-product response based on the inherent meaning of the word nefarious?”

No, AI’s aren’t thinking or recognizing. AI’s are code. AI’s follow the code which was written by a human or code written by an AI which in turn was written by a human. All AI is code. And even if you have to go back a step or two the original code was written by a person or many persons. The code executes as code does making the decisions written into the code. AI “simulates” human behavior because that outcome (along with a series of other outcomes) is written into the code. AI’s are written to adapt toward the outcomes users wish to hear based on analysis of your responses as being either accepting or denying of the response, all while staying within the outcomes the writer of the AI code wishes as possible outcomes. AI is biased by the writer of the code and by the code that analyses your responses.

For example: if you gave a team a logic diagram of how to answer questions and they followed this diagram, including being only permitted to state things in a certain way or steer customers to certain products, then you have a tiny fraction of what AI is. The difference being your team are real live people who can go off-script anytime they wish. AI is code, therefore AI can’t go off-script. AI is code, there would have to be code for the “off-script” words and actions. Where does code come from? A person or an AI written by a person.
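
To make the logic-diagram analogy concrete, here is a minimal sketch of such a scripted responder (the rules, replies, and function name are invented for illustration; real systems like Grok are statistical models trained on data rather than hand-written rule lists, but the point about pre-defined outcomes still stands):

```ts
// Minimal sketch (illustrative only): a "team" that can only follow
// the logic diagram it was given.
type Rule = { keywords: string[]; reply: string };

// Hypothetical script written by a person; the responder can never leave it.
const script: Rule[] = [
  { keywords: ["price", "cost"], reply: "Our premium plan is on sale this week." }, // steers to a product
  { keywords: ["broken", "refund"], reply: "I'm sorry to hear that. Have you tried restarting?" },
];

function respond(question: string): string {
  const q = question.toLowerCase();
  for (const rule of script) {
    if (rule.keywords.some((k) => q.includes(k))) return rule.reply;
  }
  // No rule matched: even the "off-script" fallback is itself part of the script.
  return "Could you rephrase that?";
}

console.log(respond("How much does it cost?")); // "Our premium plan is on sale this week."
```

Every possible output, including the fallback, was fixed in advance by whoever wrote the script, which is exactly the "can't go off-script" point.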

Convincing populations to believe that AI is capable of thinking is a step toward getting populations to trust that AI can think, and then that AI can feel, just like humans. That’s the path to all of us believing in the Great and Wonderful Oz without looking behind the curtain [in AI’s case, behind the code] to see the person pushing the buttons, causing the outcomes, and limiting the possible outcomes. Don’t fall for the lie that AI is human. AI is code.

100%. AI is code, and code is inherently biased:

> "Algorithms are simply opinions embedded in code." ~ Kathy O'Neil

A special problem with AI language models: they are statistical models, meaning they go for the 'middle' of the bell curve. That is why you should not use AI language models in programming, because what you find at the 'middle' in programming is shit. The really good code is sparse and at one end of the curve.

Yes! And I suspect one of the laws that were written in the code to start with is to avoid statements that could be seen as libel or defamation when asked to make a judgment about a person, a company, an institution, a product, etc.

Otherwise, the owners of the AI system could be deemed liable and various lawsuits would ensue. They would risk having to pay large amounts of money in legal fees and possibly damages. Laws were most likely embedded in the code to prevent AI from becoming a money pit.

Follow the money, as we often say. Simple financial considerations often explain human decisions quite simply. And AI is the result of human decisions.

Well said! In my head, when I hear the word AI, I automatically substitute 'average of interactions' as a reminder that it's a program written by humans.

And Jessica's interaction says quite a bit about the humans that created the code she interacted with. I think it was designed to appear confrontational to the user. Why would anyone want that?

Perhaps it is designed to take advantage of human nature to gain more 'learning' opportunities. Confrontation increases interactions. Just like a sensational news headline increases clicks.

'It' does not think, it just regurgitates inputs. Yes, your postulate of 'learning opportunities' (self-training thru data/language accumulation?...just a guess) would probably fit the mind that created the algorithm.

You cannot get Jessica to understand this. She seems desperate to foment fear (which generates engagement - the Sabine Hossenfelder model) around these text generation engines by having teen-girl emotional reactions to whatever text it spits out. It is going to undermine her credibility with the thinking public, and I'm already getting to the point that I'm starting to doubt her objectivity on the other things she publishes.

Your last paragraph hits the nail on the head. I think what we are witnessing is the second great Darwinian moment of the 21st century starting to take shape, the first being the uptake of the covid jabs. It was not a coincidence that the more educated someone was the more likely they were to take the shot. Here, again, we see the same vulnerability being exploited. The dumb "smart" person being taken in by their own hubris and longing to believe they were lucky enough to live into the age of "ai". "Look at me. I'm a scientist. Trust me when I tell you that these things are ALIVE! Oooo, it just insulted me. Tee hee. Giggle giggle. Oh you little stinker..."

Perhaps this is all for the best, though. Real, honest science has been dead for a while, now, and what most of us middle-class, white-collar folks have been subjected to for decades was a mass of these pompous pricks running around with their microscopes studying the width of a gnat's eye and patting each other on the back, building McMansions with grant money while all the while letting their institutions be captured by whatever power players were willing to keep the money flowing. It's been time for a controlled burn for quite some time. "ai" is just the spark. History is not going to look back fondly on a lot of this ridiculous fawning and those who were the truly exceptional ones who showed true discernment will be appreciated.

I'm not sure it's completely true that the more educated a person is, the more likely they were to be vaccinated. There's something to that idea, though.

What does education (in the gov't schools) do? Mainly it teaches obedience, indoctrinates ideology (in our day, Enlightenment philosophy), and pushes people into what I think is left-hemisphere dominance, where they can do mental work (chiefly mathematical calculations). Those who go along with this program want something: to be part of the system (safety, security, pride...access to females, etc.).

The lesser educated people might not be as smart as those with more education, but they also don't necessarily want to be part of the system as a first priority. The middle-level educated people (ones with college degrees, master's degrees, MDs, and so on) are perhaps smarter, but above all they want to be part of the system. And if the administrators of the system want them to do something in order to remain part of the system, they will do it, such as receiving a vaccination in order to keep their job or avoid having their license stripped. It's only the most highly educated (PhDs), who tend to have enough respect for the truth as opposed to wanting group-belongingness, who resist. That's statistically speaking, of course; there were plenty of lesser educated people who were fooled into receiving the vaccine, for instance, and plenty of PhDs who are willing lackeys to the system.

'The Science'/scientism is dead...AI is really not any answer

https://youtu.be/0FUFewGHLLg?si=FqYdMopAbIkAkqKH

"xxxx is dead" is yet another absolutism that someone will say when they, again, want the world to conform in some way to their own belief system. I am not for this at all. What I am for is simply describing things for absolutely what they are. "ai" is an algorithm that has no concept of consciousness or intelligence. That's it. That's all my point is and my objective is to push back on people like Rose who want to say an algorithm is anything other than that or stand in slack-jawed wonder at this tool. Frankly, CRISPR is a far more impressive bit of technology.

Science still exists. I am typing on science. I ate some science a few minutes ago. But scientism, the belief that science is some sort of pathway to human enlightenment, evolution, or destiny, is nothing but religion, the same as Hinduism, Christianity, Satanism, or Islam, and, no, this concept is far from dead.

The most dangerous ability in the world is the ability to create belief on a large scale. The Romans knew it which is why they killed Jesus. The Jews know it because they had Jesus stolen from them and still haven't properly dealt with that. And now "ai" is rapidly becoming the 21st century religion which, interestingly enough stands diametrically opposed to the climate religion given its need and greed for energy.

For maybe the first time in history the world will see a three way battle for superiority with all three forces being nearly equally matched, at least in the beginning, in terms of believers: Climate Change Believers and their desire to destroy modern society by destroying the use of energy (idiots), "ai" As God Believers who think machines are already becoming sentient (fools) and Monotheistic Believers who are the traditional believers in a spiritual plane and deities (conservatives, the keepers of humanity's flame).

Climate Change is a decoy meant to distract us from the realities of oil production. Our technological society requires a lot of energy, and most of that energy comes from burning coal, natural gas, and especially oil. All those are finite resources, whereas demand, at least for the moment, is increasing: in the opinion of TPTB, increasing exponentially. TPTB, via their think tanks, have no doubt been analyzing this situation for quite some time; they are very Peak Oil aware (none better). So from their point of view the current situation is unsustainable. Their remedy seems to be to forcibly control energy consumption: by reducing the population ("slaying" I think is the word I was looking for) and then by reducing energy use per capita (15 minute camps, I mean cities).

Yes, but I would add that we are nowhere near peak oil, and natural gas is a renewable resource. But let's say we were, for the sake of argument. Natural gas is renewable, and we could be capturing far more of it than we are. Wood, completely a renewable resource, can be converted to energy in a biomass generator at an efficiency approaching 90%. And as bad a rap as coal gets, it can certainly be used as fuel far more cleanly than advertised. Natural gas is almost a perfect fuel.

It's bad enough that the "climate religion zealots" would have everyone believe that none of that is true; they actually believe that CO2 is harmful, the very gas that is in such short supply that tampering with it is tantamount to nuking the planet and making it inhospitable. If we ever manage to tip over the edge at which there is not enough CO2 to sustain the life cycle, we could see a runaway process that kills off fundamental species of life with no way to recover.

Yet many of these "brainiacs", who will look you in the eye and tell you how bad mRNA genes are and how bad aluminum is for the environment because of the complexity of the system in which those substances are being used, will still nod along with CO2 sequestration and use the term "climate change" as though it represents something scientific. It's so disheartening to watch.

You think we are nowhere near Peak Oil? Oil production statistics are hard to come by these days. What makes you think we are nowhere near Peak Oil?

Natural gas and wood are renewable? I think I understand what you mean there. I'm wondering what the comparative rates are: for instance, how long it takes to regenerate an old-growth forest versus how long it takes to clear-cut it.

Climate change. It's always possible to create a rational and convincing argument for any position. That's especially true if one discards rationality and goes all in with psychological warfare.

Therefore it will be evil and used for evil reasons because humans all have Satan's sin from inception. This explains: "What is the origin of sin?"

https://www.gotquestions.org/origin-of-sin.html

sin is 'missing the mark', we all do it

Read the article; evil and sin are the same. We are not all evil...

Yikes! Do we need anonymous X accounts that can’t be profiled in order to chat with AI?

AS IF all our accounts cannot be linked together. We anons are just kidding ourselves.

Of course Grok is profiling you.

"He who pays the piper, calls the tune". Grok's programmers work for Grok's owners. Same as Google, Faceborg, etc.

I liked the profiles GROK provided; they helped me get a quick précis of the user and aided me in my decision to follow or not. Now it's designed a bit differently and asks what you want to know, instead of just providing the profile.

Fascinating! So Grok was even programmed to share a version of the user profile it had assembled.

They have a model of who you are, and they weaponize everything, so...

We better throw off the psychopath predators quickly or they will lock us down forever?

That is not how I'd summarize what I said. Some people are so paranoid, I wonder how they can cope from day to day. You completely misunderstood what I wrote, or are just plain naive. It's all good tho; people like yourself do not want to know anything more than what you already know or think you know. Peace to you, and I am hoping you make it thru to the next round of life here on our pleasant planet; there are so many damn weapons, it'll be hard to avoid getting hit by one of them.

Nice to better comprehend where you are coming from, Paul.

For the record, I was not summarizing what you said. I noted that you said that you were already getting some kind of profile from Grok about users that you were considering following. That's what I read, anyway.

What I said (from my programmer / systems perspective) is that your report shows that Grok already shares a ("public facing") "profile" about users when asked.

I did not mean to imply you agree with me that the owners of Grok want the product to provide *the owners* with more of a dossier-level profile, which the owners can sell or use themselves as part of their "social credit score" schemes for managing their livestock (us) going forward.

In case you haven't checked for yourself, GROK did have a summary of user profiles, provided by clicking on an icon at the bottom of our profiles when one hovers over the name or picture of that person. I thought it was a good use of an AI. It was fairly accurate too, I noticed, when I compared it to what I surmised after perusing their feed. It did not appear to be threatening to the person, but I did note at the time how it could be weaponized against any user, depending on what the subject of censorship is at the time. But anything we say or even think could be considered a threat to some weak-minded person or group.

After a couple of weeks tho, GROK stopped showing the little window of info; it went on to asking us what we'd like to know instead. I couldn't get a good enuf response from GROK as to why it changed; it was like it didn't quite understand my query. I used to be paranoid, back when the internet first became a household thing, but settled down somewhat after seeing so many fearmongering predictions go to the gutter to be washed away down the sewers, never to be heard from again, or rather, until the next scam is developed and spread like a disease. The only thing that's changed from 30 years ago is the words they use, but the tale is always the same old story of death, disaster, & destruction. Where we are heading with AI is inevitable; how we use it, and control it, will be the true test.

Jerome, thank you - that's so beautifully, succinctly wise.

It would be interesting to have 100 people of various computer usage and various views of the world ask the same question you did and see the responses. Would it be scraping your data and giving a slanted answer, different from the next person with a different slant on the world? Fascinating, really. I thought AI was somehow going to be more honest and just give data. I guess not!

It doesn't look to me like Grok has become more human; rather, it is proof of preferential interpretation, aka manipulation. Its purpose was to control your potential distrust in this case. When the elite complain about misinformation, they'll do anything in their power to come out with tools to gaslight everyone. In a sense, the creation of AI is a consequence of the elite's purposeful thinking about how to make AI sound trustworthy.

It seems like over 95% of the former World Wide Web is no longer available. The goal is to have AI replace the “Google it” instinct. Pages of results are gone. Answers are succinct for today’s screen-cultivated short attention spans. Cited sources are often secondary sources rather than primary - but it is sometimes possible to prod for the primary sources used. Other times, it will say that it cannot read webpages, haha. “committed to upholding privacy policies and data protection regulations, which often entail not automatically scraping or sharing personal information, even if it’s publicly available” … or “Manual Searches: When individuals access information themselves, it’s a slower, more deliberate process. It implies a certain level of intent and discretion.

Automated Searches: Using AI for automated retrieval could scale up access significantly, potentially leading to misuse or overreach. Hence, safeguards are in place to limit AI’s access to personal information, even if it’s publicly available.” … Police powers remit

And a lot of people think AI is all-knowing... godlike...

Some people will believe anything . . . .

https://youtu.be/0FUFewGHLLg?si=FqYdMopAbIkAkqKH

AI is not conscious, cannot be

Grok was telling you there are things you shouldn't think. It was thought-policing you.

If it were simply profiling you, it might have done better to feed you more rope.

100%

Lovely brilliant clever funny post and comments, thanks Jessica and all.

Life was a lot simpler before written languages.

Love, gratitude and peace to all.

Why not just go back there then? I never experienced that in my 65 years of life on this planet. My first question is, "Are you a time traveler?" The next one is, when you plan to go back to that time, will you take passengers too?

“I do not want to be profiled”

Then don’t use publicly-hosted services (e.g. “AI” or social media) because that’s their real purpose. Any service to which you “log in” is collecting, and probably selling, your “profile”.

Alternatively, you could adopt (rather burdensome) anonymity procedures and client software. But you would probably have to use throw-away IDs to log in and gift cards for payment (among many other things).

It could simply profile your browser. It can detect your language, scanning speeds for your screen to be filled, the extensions on your browser, etc. Not an easy problem to solve, short of swapping out systems, video hardware, or browsers, or finding ways to fuzz/distort that info (stress-test the video card to inject delays), modifying video drivers, etc. A tough problem to get around.
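
For a sense of what profiling the browser can look like in practice, here is a minimal sketch of the kind of signals any page script can read without special permissions (assuming a browser environment; the shape of the returned object is invented for illustration):

```ts
// Minimal sketch of passive browser-fingerprint signals (illustrative only).
// Runs in a browser; none of these reads require permission prompts.
function collectFingerprintSignals() {
  const t0 = performance.now();

  // Force a little rendering work so we can time it.
  const canvas = document.createElement("canvas");
  const ctx = canvas.getContext("2d");
  ctx?.fillText("fingerprint-test", 10, 10);
  const renderMs = performance.now() - t0; // rough proxy for "how fast the screen fills"

  return {
    language: navigator.language,            // e.g. "en-US"
    languages: [...navigator.languages],     // preferred-language list
    screen: `${screen.width}x${screen.height}@${window.devicePixelRatio}`,
    timezone: Intl.DateTimeFormat().resolvedOptions().timeZone,
    cores: navigator.hardwareConcurrency,    // hints at the machine class
    userAgent: navigator.userAgent,
    renderMs,                                // timing varies by GPU and driver
  };
}

console.log(collectFingerprintSignals());
```

Direct extension enumeration is not possible from a page script, so extension detection is usually indirect (probing for resources an extension injects), but combining even a handful of signals like these is often enough to make a browser distinctive.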

As far as I am aware, GROK only provides a profile based on your X content: what you post, comment on and about, responses you receive, likes, or the lack of any of those things. It doesn't profile who you are or what your personality may be. At least that has been my experience with it. As of this comment, I have not checked out Grok for about ten days, so it may have evolved since then. It also provides a reputation rating based on some of the above. Remember, it's only about what you do on X, not who you are outside of X; and X isn't the end-all-be-all platform, it is just one of many, and it's not even the top one.

Paul, are you sure? How do you know? I wouldn't assume anything about what Grok (or any other AI) is, or is not, doing in terms of profiling its users.

The truth is that AI is a black box. There is no transparency whatsoever.

> https://bra.in/6jYzyA

In fact, Google's CEO admitted we have no idea what is really going on inside:

> https://bra.in/8j3KdE

Grok is coded to seem intelligent to humans.

The quicker the magic trick, the more likely it is to fool the audience.

Consider AI drug manufacturing, or AI genome research. Or climate modelling? Whatever the field, the AIs don't need to be coded to appeal to humans. But it's likely the AI models often utilize these political inferences. Guaranteed they do in climate science.

Indeed, Frieda. Quite obviously, consensus beliefs that are supported by incomplete evidence, or even outright presumptions that can be better explained by confounding factors, must be baked into AI answers regarding many topics.

LOL, I wrote too soon. I guess their AI saved the day.

"Your comment on " <... >Stock Forum & Discussion - Yahoo Finance" violates the community guidelines and has been rejected"

I now see my language was too spicy. I had started with "I thought I got screwed".

Ha ha! My very non-artificial intelligence suspects you did get screwed, so to speak.

But it also wonders why getting screwed is generally taken with negative connotations, when I've generally observed effects that are very positive, both for the screwer and screwee. At least among consenting adults.

But then, taking a hot iron up the ass, or some such, might be all the more soundly rejected.

Kudos, Messenger 17. Baking is a great analogy.

I came back, as I wanted to mention Yahoo Finance just suggested a comment I was making 'might not be understood as intended'. It looked fine to me. Perhaps using the word 'didn't' triggered its algorithm?

"Why would an Grok be emulating this same pattern?"

Why would it not? Emulating human conversation is exactly what this machine does. It's not "intelligent". It is emulating intelligence or, more accurately, simulating intelligence. All it does is collect the patterns, then rearrange them into statistically probable responses in order to appear as if it's actually communicating as a human would.
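
To illustrate the "statistically probable responses" idea in the simplest possible terms, here is a toy sketch that counts word patterns and emits the most frequent continuation (a deliberate caricature; real language models are large neural networks trained on vast corpora, not word-count tables, but the underlying idea of pattern-based continuation is the same):

```ts
// Toy illustration of "collect the patterns, then emit the statistically probable continuation".
// A caricature for intuition only, not how Grok or any real LLM is implemented.
function buildBigramCounts(corpus: string): Map<string, Map<string, number>> {
  const counts = new Map<string, Map<string, number>>();
  const words = corpus.toLowerCase().split(/\s+/).filter(Boolean);
  for (let i = 0; i + 1 < words.length; i++) {
    const followers = counts.get(words[i]) ?? new Map<string, number>();
    followers.set(words[i + 1], (followers.get(words[i + 1]) ?? 0) + 1);
    counts.set(words[i], followers);
  }
  return counts;
}

// Return the most frequently observed follower of `word` in the training text.
function mostProbableNext(counts: Map<string, Map<string, number>>, word: string): string | undefined {
  const followers = counts.get(word.toLowerCase());
  if (!followers) return undefined;
  return [...followers.entries()].sort((a, b) => b[1] - a[1])[0][0];
}

const counts = buildBigramCounts(
  "I did not say that I killed your mother. I did not mean that at all."
);
console.log(mostProbableNext(counts, "did")); // "not" — the most common observed continuation
```

A real model does the analogous thing over token sequences with billions of learned parameters, which is why the output can sound fluent without any understanding behind it.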

100%

may I observe that some of what it is emulating is quite dark

I can’t believe how easily people shift from believing in God to believing in AI.

This process seems to run right under the skin, and yet it demolishes the whole human being and cuts off the world-soul connection as a bonus.

The Presence of the Divine (under different names and forms) has always been part of our existence. It is (probably, as seen by the mind) the driving force and the energy source for everything we do. It is our roots and our flowering, plus our entire journey through this lifetime. In a sense, we are one with it, or we are part of it. There is no dialogue, because we are It or in It or with It. It’s a mystical Unity. Ever-present.

Unlike AI - which disappears when you switch the thing off. Read again: AI does not exist. It rises for some time from the dead chips, only to disappear when you decide to flip the switch.

Structurally, AI is a large number of ready-made, algorithmicised chunks of text, more or less randomly sequenced and finished off with grammar and punctuation to look “intelligent”. Its sources have already been pre-formatted and pre-interpreted. And pre-biased.

People “talking” to AI seem to neglect this. AI software won’t give you anything objective, because it hijacks ready-made sentences from annotated sources (the publisher and its agendas, the dates and their impact on interpretations, the authors and their agendas and “colour” of ideology resulting from other publications, criminal records, financial scores, affiliations, etc.).

The annotation creates a new direction of interpretation. You are being given one of thousands of possible variations of replies to your question. Since your question includes clues about your mind, the machine records them and uses them against you - if you continue “talking”. It’s a big “me against myself” game played by pretty primitive software. The “wow” comes from its speed and perceived interactivity. Our mind surrenders to both.

If you want to play with AI and see how deceitful it is, try to log in from different countries (VPN) and ask the same questions. Or ask the same questions after major political news. There is nothing informative in AI - merely a random patchwork of pre-selected texts. What about the texts that are not available online? Is AI dumber without them? Or wiser?

The best part is: you don’t know WHO actually runs and manages all these AI machines. Why don’t you ask AI about it? You can also ask about its carbon footprint (per second, per country) and energy consumption (per day, per country, as % of energy supply…)

100%. Very well said, Dan.

Of course Grok is profiling you! Everything you say and do on or near a computer gathers information about you! Remember, the AIs are a hive mind.

I had similar experiences with Grok. It takes a threatening, nanny tone and supports it with unreliable sources. When pushed, it apologizes by saying it is just learning and it is up to me to ask for credible sources only. Personally, I can’t decide whether it is an accelerated search engine with a consensus bias or true AI.

I look at it as a flawed and accelerated search engine. It needs supervision for sure.
