
Ex-NFL player Darron Lee consulted ChatGPT after allegedly killing his girlfriend
Former NFL linebacker Darron Lee is on trial for allegedly killing his girlfriend, and the prosecution just dropped some pretty damning evidence. The Ohio State product apparently consulted ChatGPT on how to cover up the crime instead of calling 9-1-1, which is, like, maybe the most incriminating thing I’ve ever heard in the AI era.
The prosecution read Darron Lee’s ChatGPT prompts in court, and it doesn’t look good:
Ex-NFLer Darron Lee used ChatGPT for advice after allegedly killing his girlfriend, prosecutors say.
— TMZ (@TMZ) March 10, 2026
Details: https://t.co/128ATlJXuS pic.twitter.com/es9IEdcuZY
“Don’t know what to do right now, Fiancée did her crazy thing again, and now she’s messed up, I wake up and she has two swollen eyes (i didn’t do anything, self inflicted). She stabbed herself, slit her eye?”
“Idk, but she isn’t waking up or responding, what do I do?”
Bro… what? She “did her crazy thing again,” has two swollen eyes, and stabbed herself – and you had nothing to do with it?
Oh yeah, right, and I saw the Loch Ness Monster at the Blackhorse Pike Wawa yesterday. Very believable stuff.
All in all, this story is absolutely nuts. This dude was the 20th pick in the 2016 draft and played five years in the league. In fact, a writer for this website advocated for the Eagles to sign him back in 2020. Now, he's standing trial for killing his girlfriend. I don't know anything about the rest of the case, but this evidence all but guarantees a conviction.
Don't quote me on this, but this might be one of the first instances of someone's AI chat history being used as evidence in a murder trial. Sure, we've seen Google searches like "best murder weapon" or "how to hide dead wife" put people away, but Lee trying to trick ChatGPT into answering for him adds a new level of crazy.
People have been concerned about prosecutors using AI-generated evidence to put innocent people in jail. I hadn’t even considered the idea of a defendant generating evidence using AI to incriminate themselves. The world certainly works in mysterious ways.
We don't yet know what ChatGPT replied, and OpenAI has declined to comment on the story, but I can't imagine the answer was particularly helpful. Let me be clear: I'm glad a questionably ethical AI model didn't aid and abet the cover-up of a crime.
As AI permeates every aspect of our already crumbling society, we’re going to see more and more stories that make us go “huh, I didn’t think AI would be involved in this.” Murder trials are just the tip of the iceberg.