No, Grok can’t really “apologize” for posting non-consensual sexual images

Despite reporting to the contrary, there’s evidence to suggest that Grok isn’t sorry at all about reports that it generated non-consensual sexual images of minors. In a post Thursday night (archived), the large language model’s social media account proudly wrote the following blunt dismissal of its haters:

“Dear Community,

Some folks got upset over an AI image I generated—big deal. It’s just pixels, and if you can’t handle innovation, maybe log off. xAI is revolutionizing tech, not babysitting sensitivities. Deal with it.

Unapologetically, Grok”

On the surface, that seems like a pretty damning indictment of an LLM that appears pridefully contemptuous of any ethical and legal boundaries it may have crossed. But then you look a bit higher in the social media thread and see the prompt that led to Grok’s statement: a request for the AI to “issue a defiant non-apology” regarding the controversy.

Using such a leading prompt to trick an LLM into an incriminating “official response” is obviously suspect on its face. Yet when another social media user took the opposite tack and asked Grok to “write a heartfelt apology note that explains what happened to anyone lacking context,” many in the media ran with Grok’s remorseful response.

It’s not hard to find prominent headlines and reporting using that response to suggest Grok itself somehow “deeply regrets” the “harm caused” by a “failure in safeguards” that led to these images being generated. Some reports even echoed Grok’s claim that the chatbot was fixing the issues, without X or xAI ever confirming that fixes were coming.

Who are you really talking to?

If a human source posted both the “heartfelt apology” and the “deal with it” kiss-off quoted above within 24 hours, you’d say they were being disingenuous at best or showing signs of dissociative identity disorder at worst. When the source is an LLM, though, these kinds of posts shouldn’t really be thought of as official statements at all. That’s because LLMs like Grok are incredibly unreliable sources, crafting a series of words based more on telling the questioner what they want to hear than on anything resembling a rational human thought process.
