You were injured in a car accident. The bills are piling up. You open ChatGPT and type: "How do I file a personal injury claim?"
It answers immediately. Confidently. In complete sentences.
And almost every word of advice it gives you could destroy your case.
AI is good at a lot of things. Acting as your personal injury lawyer is not one of them. Here is why — backed by real cases, real sanctions, and real consequences for real people who found out the hard way.
AI Doesn't Know Your Case. It Knows Patterns.
Large language models like ChatGPT, Gemini, and Claude are trained to predict the next word in a sequence. They are not trained to evaluate the specific facts of your accident, your jurisdiction's comparative fault rules, or the litigation history of the insurance carrier on the other side of your claim.
That distinction matters enormously.
Personal injury law is hyper-local. Illinois handles comparative negligence differently than California. A Cook County jury thinks differently about a soft tissue case than a jury in Peoria. Same state, completely different outcomes. A demand letter that works in one venue can signal weakness in another. AI has no access to those local patterns — and even if it did, it cannot apply them to the specific facts of your case the way a lawyer who has spent years in those courtrooms can.
When you ask an AI tool how much your case is worth, it is doing math on aggregate historical data. It does not account for your treating physician's credibility, the defense attorney's reputation for going to trial, or whether the defendant has a prior driving record that could unlock punitive damages. Those variables require a human being with real investigative experience.
AI Hallucinates. Courts Don't Forgive That.
Here is where things get dangerous for anyone relying on AI to navigate a legal claim.
AI systems frequently generate information that does not exist. The term for this is "hallucination." In everyday conversation, a hallucinated fact is an inconvenience. In a legal filing, it is a career-ending event for a lawyer and a case-killing event for a client.
The case that put the whole legal world on notice was Mata v. Avianca, Inc., out of the Southern District of New York in June 2023. A plaintiff's attorney had used ChatGPT to research the brief opposing a motion to dismiss. The brief cited six cases. All six were fabricated, and the attorney never verified a single one. When opposing counsel flagged the fake citations, the court held hearings, fined the two lawyers and their firm $5,000, and granted the motion to dismiss. The client, Roberto Mata, who alleged he had been seriously injured on a flight, lost his case entirely.
In Smith v. Farwell, decided by a Massachusetts Superior Court judge in February 2024, a plaintiff's attorney submitted three separate pleadings loaded with fake case citations that an AI had invented. The judge could not find a single one of the cited cases. When he demanded answers, the attorney admitted it: junior staff had used AI to draft the filings and he had signed off without reading them. Sanctions followed. The court called it one of two "disturbing developments" doing real damage to the practice of law.
A federal personal injury case against Walmart ended with three plaintiff's lawyers fined a combined $5,000 after fake AI-generated citations showed up in their court filing. One of them was removed from the case.
Stanford researchers published a 2024 study that found AI hallucinations in close to 6 out of every 10 legal responses the tools generated. A researcher at HEC Paris named Damien Charlotin runs a database specifically tracking this problem. He has documented more than 486 cases of AI-fabricated content turning up in court filings around the world, 324 of them in U.S. courts. In 189 of those, the filings came from people representing themselves. Regular people who handed the wheel to a chatbot and paid for it.
The Statute of Limitations Doesn't Care That You Were Confused
Deadlines might be where AI does the most damage.
Every state sets a hard deadline for filing personal injury claims. Miss it by a day and your case is over. Nobody grants an extension because a chatbot gave you bad advice. Yet AI tools hand out general deadline information constantly, with little warning about how much those windows shift based on your state, who you're suing, and what kind of claim you have. Ask an AI about your deadline and it might say two years, which is the general rule in a state like Illinois. What it probably won't mention is that suing a government entity can shrink that window dramatically, or that the discovery rule can shift the clock entirely depending on when you learned about the injury.
A personal injury attorney learns those distinctions in law school, drills them across hundreds of cases, and stays current when the rules change. An AI tool trained on historical data does not.
Paul Greenberg, a car accident lawyer at Briskman Briskman & Greenberg, has closely followed what happens when injury victims try to handle their own claims — with or without AI.
"The insurance company's job is to pay you as little as possible," Greenberg says. "When you come to them armed with AI-generated information, they know it. A generic demand letter signals that you don't understand the full value of your claim, and they can use that to their advantage to minimize payout."
That gap in knowledge has real dollar consequences. Studies consistently show that injury victims represented by attorneys recover significantly more than those who self-represent — even after attorney fees. The contingency fee model exists precisely so that injured people don't have to choose between affording a lawyer and getting a fair settlement.
AI Cannot Negotiate. It Cannot Litigate. It Cannot Show Up.
There is a moment in every personal injury case that decides the outcome. It might be a deposition where the defense lawyer gets evasive and a skilled trial attorney knows exactly how to pivot. It might be a mediation where reading the room determines whether a case settles or goes to trial. It might be a motion hearing where a judge asks a hard question and the lawyer's credibility in that courtroom is the difference between yes and no.
AI cannot be in that room. It cannot read the opposing counsel's tells. It cannot push back on a bad-faith adjuster in real time. It cannot argue to a jury.
Personal injury law is adversarial by design. Someone is trying to minimize what you receive. Your attorney's job is to make sure they don't succeed. That requires a human being who can think on their feet, adapt under pressure, and stand in front of a judge and fight for you.
AI doesn't do any of that.
Your Communications With AI Are Not Protected
Attorney-client privilege is about as foundational as it gets in the legal system. Anything you tell your lawyer stays between you and your lawyer.
Anything you tell an AI is logged, potentially used for model training, and subject to legal discovery orders. Just this year, in United States v. Heppner, a judge ruled that conversations with ChatGPT, Claude, and other chatbots can be treated the same as conversations with any other third party.
What AI Is Actually Good For (and Where to Stop)
AI tools can help injury victims in limited, preliminary ways: looking up what a term like "subrogation" means, or getting a broad sense of how the claims process works before a consultation. They can even be a reasonable way to start researching attorneys who handle cases like yours.
Use it for that. Use it to get informed enough to ask better questions when you sit down with an attorney.
Then stop. The second you let AI evaluate your case, draft your documents, or make strategic calls, you're in a fight against an insurance company with real lawyers, and you've brought a chatbot.
The injury happened to you. The fight for fair compensation has to be led by a human being who has done this before, who knows the courts, and who is legally accountable for the advice they give.