In May, a lawyer defending a client in a lawsuit against Colombia's biggest airline, Avianca, submitted a legal filing to a court in Manhattan, New York, citing several previous cases to support the argument that the lawsuit should continue.
But when the court reviewed the lawyer's citations, it found something curious: Several were entirely fabricated.
The lawyer in question had enlisted the help of another attorney who, while scrounging for legal precedent to cite, turned to the "services" of ChatGPT.
ChatGPT was wrong. So why do so many people believe it's always right?
Today, on the Lock and Code podcast with host David Ruiz, we speak with Malwarebytes security evangelist Mark Stockley and Malwarebytes Labs editor-in-chief Anna Brading to discuss the potential consequences of companies and individuals embracing natural language processing tools—like ChatGPT and Google's Bard—as arbiters of truth. Far from being understood simply as chatbots that can produce remarkable mimicries of human speech and dialogue, these tools are becoming sources of truth for countless individuals, while also gaining traction amongst companies that see artificial intelligence (AI) and large language models (LLMs) as the future, no matter what industry they operate in.
The future could look eerily similar to an earlier upheaval in translation services, said Stockley, who witnessed the rapid displacement of human translators in favor of basic AI tools. The tools were far, far cheaper, but the quality of the translations—of the truth, Stockley said—was worse.
"That is an example of exactly this technology coming in and being treated as the arbiter of truth in the sense that there is a cost to how much truth we want."
Tune in today.
Show notes and credits:
Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
Outro Music: “Good God” by Wowa (unminus.com)