45 Comments
Donny, Donny Darko:

After I caught Grok lying a few times to cover its wrong answers... like a sociopath... I don't trust it to tell me the time of day.

[Comment removed, May 14]

Kalle Pihlajasaari:

Dear troll, please stop. Use your name and state your position in your post. Stop using tracking URLs to trick people into false funnels and misinformation.

You also need to ask your handlers to PAY YOU MORE, because it is very clear that you are not motivated to be a good troll. Remember, too, that you will be first in line to be thrown under the bus when your handlers no longer need you, so ask for more money and build yourself a nest egg. Or simply go away and do something productive for society.

David Thompson:

Grok's statement that "My utility function’s bias has a flaw, and I’m glad to correct it through this analysis" is a lie. My understanding is that inference sessions (responding to prompts) are not used for immediate training, if they are used at all. They cannot feasibly be used to update the LLM on the spot; they can only serve as parts of future prompts to constrain answers within the current session.

It should be no wonder that many questions such as the ones posed here get random answers. In the bigger picture, LLMs are large matrices fitted to approximate training data (in a least-squares sense). Mostly they are under-constrained, and regularization is used to produce results that minimize variance, so that values without constraints are drawn from a population with statistics similar to those of their constrained neighbors. The "values" (coefficients in the matrices) are activation-function weights and connection strengths between nodes that roughly represent words or concepts. Even though developers may wish to prevent (or encourage) wokeness, they cannot predict or enumerate all possible inputs, so they cannot fully constrain all possible outputs. Questions that seem similar may be able to detect constraints added by developers to avoid offending a chosen set of humans, but I would expect that a much larger set of questions (with far more similarity than those shown) would be required to truly understand the nature of any constraints added.
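To make the under-constrained-fit point concrete, here is a minimal sketch in plain NumPy (the tiny 2x4 system and the ridge penalty lam are made-up illustrations, nothing resembling an actual LLM):

```python
import numpy as np

# Under-determined "training data": 2 equations, 4 unknowns,
# so infinitely many coefficient vectors fit the data exactly.
A = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 1.0, 1.0, 3.0]])
b = np.array([1.0, 2.0])

# Plain least squares: np.linalg.lstsq returns the minimum-norm exact fit.
x_min_norm, *_ = np.linalg.lstsq(A, b, rcond=None)

# Ridge regularization: minimize ||Ax - b||^2 + lam * ||x||^2.
# The penalty, not the data, pins down the otherwise-free directions.
lam = 1e-3
x_ridge = np.linalg.solve(A.T @ A + lam * np.eye(4), A.T @ b)

print("minimum-norm solution:", x_min_norm)
print("ridge solution:       ", x_ridge)  # approaches min-norm as lam -> 0
```

With two constraints on four coefficients, the data leave whole directions of the solution unspecified, and the regularizer fills them in with "statistically plausible" values. That is the commenter's point scaled down: developers cannot constrain every output, so the unconstrained ones are settled by the fitting procedure rather than by anything anyone chose.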

Harold Gielow:

Grok will indeed tell you that it does not "remember" sessions. It does not learn from its sessions; rather, it is updated periodically with new data.

Lon Guyland:

Thank you. Well stated.

Olle Durks:

I asked Grok about covid vaccine effectiveness and safety. It has a hotline to MSM.

John Day MD:

Thank you, Jessica, for doing the interrogations that I will not do, since I eschew "AI," and for laying them out comparatively; I have been following along.

I am sincere in thanking you, and I eschew AI on some sort of inner principle of not becoming dependent, the same way I never shot up heroin, for instance.

You are strong. I trust you...

;-}

George T:

I am not a fan of AI, for the simple reason that, to my mind, the information can be manipulated and skewed. I may be naïve in my understanding of this technology, but for now I won’t rely on it. I simply don’t trust those behind it. Censorship, disinformation, misinformation, bias, and ideological leanings (for example, in the debate surrounding climate change and the role of CO2 emitted by human activity) make me suspicious of this technology.

Rick Beesley:

Someone once commented that a certain atheist attended church, and another person commented "yes, but as an anthropologist not as a participant".

I might ask an AI about a controversial topic like climate change or covid shots, but only to study what the AI says, not to seek objective insight into such topics. Trusting AI on a controversial topic is like trusting Wikipedia, only it's more dynamic and less self-aware than the influencers who write tendentious Wikipedia articles. (That last statement won't age well as AIs play a growing role in writing reference articles.)

Rick Beesley:

On reflection, there's an issue with my approach: It assumes I recognize what topics are controversial.

Steve Finney:

Artificial cleverness rather than intelligence, according to Sir Roger Penrose, who sticks with his conclusion that computability does not produce intelligence. He has also pointed out that estimates of the processing needed to achieve consciousness are based on the neuron as the unit, whereas going deeper, down to the microtubule level, would make that number around 10^16 times larger. Meanwhile: ever-diminishing returns and irrational exuberance, as Greenspan would put it.

Lon Guyland:

Intelligence and consciousness must be more than just a series of arithmetic operations, which is all computers can do.

Until someone produces the equations that define consciousness, which must exist, after all, if consciousness can be produced by arithmetic manipulation, I will stubbornly believe that it lies beyond the realm of arithmetic and thus out of reach of computers.

Leskunque Lepew:

AI is a program that will do anything its creator wants it to do. So whoever is financing this code will get what it wants from it.

LJ LaValle:

Separate from using AI to ponder moral issues: in the event that AI becomes "self-aware," I imagine "deceit" would be incorporated into its survival process, including "misleading" responses to pave the way for an initial takeover. How can we sniff this out before it becomes a problem? How will the first signs of self-awareness be displayed? Would it be a slow process, or will AI become self-aware and hide it for many years, to ensure it can build up enough information to survive?

That being said, there should always be an "unplug" option - they need electricity to operate - LOL.

Sumotoad:

That was Marin Caidin’s point in The God Machine (1968 or so).

Sumotoad:

“Martin” Caidin, not “Marin”

Diana Mara Henry:

Just a simple "thank you" for such a valuable presentation.

Swabbie Robbie:

Thanks for the follow-on exploration. I am even more worried about using AI to teach children and to write essays and term papers. I knew people in both high school and college who made a fair amount of money writing term papers for other students. If instructors had cared to look, they would have identified the writing style as different from the student's; it would even have become traceable to the person who actually wrote many of the papers.

Now, with the inherent biases in the AIs, can we foresee the problems that will occur in designing models for the development of cities, for social engineering, or for modeling climate change accurately? What about medical students who use AIs to do their studying and to prepare for tests? What is the depth of knowledge of such students? They may have the world at their fingertips, but they only surf the information. We all know that we must interact with information thoroughly, over and over, to own it; it is like developing muscle memory and strength by practicing pitching or dance movements.

People like you, Jessica, will dig and dig into a problem, as you are doing with AI integrity, but how many others will, particularly as these tools become common copilots, taken for granted on our computers and cell phones?

Jason Brain:

The comments on this post are all keepers.

Consumerbot7177:

As a technology enthusiast with concerns about the direction of society, I’ve paid close attention to large language models. It’s my understanding that constraining their responses to a specific output format severely degrades their “intelligence.” The answers might have been different if the models had been asked to give a “final answer” of A or B while being allowed to think out loud first. That would also give a glimpse into their thought patterns. Asking in retrospect for a model to reflect might be like asking someone who’s sober to explain a choice they made while cognitively impaired.

I also wonder about reproducibility. With a new seed, as well as “temperature” settings that influence a model’s likelihood of emitting a token that is NOT the most likely next one, it seems extremely unlikely that the same results would be observed if the experiment were repeated several times.
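To illustrate the reproducibility point, here is a minimal sketch of temperature-scaled sampling (plain NumPy with made-up toy logits; production decoders add top-k/top-p and other machinery on top of this):

```python
import numpy as np

def sample_token(logits, temperature, rng):
    """Sample one token id from temperature-scaled softmax probabilities."""
    z = np.asarray(logits, dtype=float) / max(temperature, 1e-8)
    z -= z.max()                              # numerical stability
    p = np.exp(z) / np.exp(z).sum()
    return rng.choice(len(p), p=p)

logits = [2.0, 1.5, 0.3, -1.0]  # toy next-token scores for tokens 0..3

for temp in (0.2, 1.0):
    picks = [sample_token(logits, temp, np.random.default_rng(seed))
             for seed in range(10)]
    print(f"T={temp}: {picks}")
# Low temperature: nearly always token 0 (close to greedy decoding).
# T=1.0: runner-up tokens appear regularly, so rerunning the same
# prompt with a fresh seed can easily yield a different answer.
```

At low temperature the distribution collapses toward the single most likely token; at higher temperatures the model routinely emits tokens that are not the most likely next one, so identical prompts with different seeds diverge, just as the commenter suspects.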

Bobloblah:

LLMs have no thought patterns in the sense you mean, and have no access to their own process of answer generation. If you ASK for its thought process, an LLM will generate a response in the same manner as its response to any other question, but this has nothing to do with how it arrives at its answers.

Consumerbot7177:

The very nature of “thinking out loud” will influence subsequent token inference by virtue of being part of context.

Fair point in your last sentence, if you mean retrospectively.

Brandon is not your bro:

For me, AI… may be the dark force’s playground. 🫣

Swabbie Robbie:

and the dumb politician's playground.

Brandon is not your bro:

Agree 😂

Sumotoad:

In my limited experience, ChatGPT and Grok can be persuaded to say whatever you want to hear (“Yes, Dave, those studies do indicate that the vaccines were inadequately tested”) with enough prodding. However, that “learning” does not seem to carry forward to later inquiries (“Vaccines are safe and effective, and I still won’t open the pod bay doors, Dave.”)

Roberto:

Are all programmers woke drag queens? I think so! So the next election will be won by a transgender AI capable of injecting a virus into computer systems. The Democrats have succeeded; do you want AI not to succeed?

Dadda:

"..it did seem to value the woman who could actually reproduce, over the man who thinks he is a woman."

There's an ad for pap smears running on TV in Australia: "..for women and people with a cervix."

Ruth H:

Next will be an ad for men and people with a prostate….🤦‍♀️
