The American Bar Association monitors trends in legal malpractice claims. It may surprise no one that one of the most significant emerging sources of malpractice claims (and the one most widely expected to grow in the future) involves lawyers' use of artificial intelligence (AI).
What can go wrong when lawyers rely on AI to do their research or cut client costs? Plenty. Here are some possibilities:
Errors in output, bias, over-reliance and confidentiality breaches are all concerns
AI tools can function like high-tech search engines, aggregating information found all over the internet. When used to predict case outcomes or recommend legal strategies, they can show what may have been done in the past – but that doesn’t mean their recommendations are free of flaws. Generative AI tools have even been known to fabricate convincing but nonexistent case citations. Attorneys who rely on such output to make decisions in their cases could be acting on faulty data.
AI can also be biased. Human beings have fed (and continue to feed) the data in AI programs, and humans are prone to prejudices that skew data around race, gender and other characteristics. That can lead to misrepresentations and discriminatory results that negatively affect case strategies and client representation.
Over-reliance, too, can be a problem. Attorneys who become overconfident in AI-generated results may fail to verify information or cross-check recommendations and paperwork, resulting in missed deadlines, serious legal mistakes and suboptimal outcomes for their clients.
Finally, confidentiality breaches can be a serious issue. Improperly secured AI systems can be vulnerable to cyberattacks, which can expose sensitive and confidential client information to unauthorized access.
Without proper training, attorneys can misuse any tool they are given – including AI. Attorneys may get so caught up in AI’s capabilities that they overlook the risks to their clients. When legal mistakes put clients at a disadvantage and cause them harm, it’s time to seek new legal guidance.