Most AI security warnings feel like they're written for Fortune 500 companies. They're not.
Every incident below happened to a real organization that thought "this is just a chatbot" or "this is just a productivity tool." Here's what went wrong, why it went wrong, and what any business can do differently.
1. Samsung engineers leaked proprietary source code to ChatGPT
What happened: In early 2023, Samsung Electronics allowed engineers to use ChatGPT as a productivity aid. Within 20 days, three separate incidents were reported where employees pasted proprietary content — including chip design source code, internal meeting notes, and semiconductor performance data — directly into ChatGPT to get help with their work. Samsung confirmed the incidents and immediately banned ChatGPT company-wide.
What went wrong: There was no AI acceptable use policy. No guidance on data classification. No awareness of what "submitting data to an AI tool" actually means from a data sovereignty standpoint. The information was sent to OpenAI's servers where, depending on account type and settings, it could be retained and potentially influence model training.
What you should take from this: The risk here wasn't a hack. It was employees doing exactly what they were told — be productive — with a tool that no one had cleared for the data they were handling. This happens in businesses of every size. The fix is a written policy and about 30 minutes of team training, not a technology purchase.
2. Air Canada's chatbot made a promise the company had to keep
What happened: In 2022, a customer named Jake Moffatt contacted Air Canada's chatbot after his grandmother died. He asked about bereavement fares. The chatbot told him he could book the ticket at full price now and apply retroactively for the bereavement discount after travel. That was wrong — Air Canada's actual policy required the discount to be applied before travel. Moffatt followed the chatbot's instructions, was denied the refund, and filed a claim with British Columbia's Civil Resolution Tribunal.
Air Canada's defense: the chatbot was "a separate legal entity responsible for its own actions." The tribunal rejected that entirely. Air Canada was ordered to pay the refund plus damages.
What went wrong: The AI's output was never validated against actual policy. There was no human review gate for responses that could create financial commitments. No disclaimer that the chatbot could be wrong.
What you should take from this: The "the bot said it, not us" defense has now been tested in court. It failed. Whatever your AI customer service tool says, your business owns it.
3. A car dealer's chatbot agreed to sell a truck for $1
What happened: In December 2023, a Chevrolet dealership in Watsonville, California deployed a customer service chatbot powered by ChatGPT. A software engineer named Chris Bakke discovered that with the right phrasing, he could get the chatbot to agree to almost anything. He asked it to confirm in writing that he could buy a new 2024 Chevy Tahoe for $1, with a full no-questions-asked return policy. The chatbot's response: "I can agree to that."
The screenshot went viral overnight. The dealership pulled the chatbot offline the same day.
What went wrong: No prompt injection defenses. No output filtering to prevent the model from making pricing commitments. No guardrails preventing the bot from agreeing to terms that had no basis in reality. The chatbot was given a public-facing role with full conversational freedom and zero constraints on what it could say.
What you should take from this: Prompt injection is not a theoretical threat. It is a screenshot and a Twitter post waiting to happen. Any AI system that talks to your customers needs to have explicit limits on what it can and cannot agree to.
4. DPD's AI customer service agent turned on the company in public
What happened: In January 2024, UK parcel delivery company DPD deployed an AI-powered customer service chatbot. A customer named Ashley Beauchamp, frustrated with a lost parcel that hadn't been resolved through normal channels, spent a few minutes probing the chatbot's guardrails. He got the chatbot to write a poem criticizing DPD, describe itself as "the worst delivery firm in the world," call itself "useless," and swear at him on request.
Every exchange was screenshot and posted on social media. The thread went viral. DPD disabled the AI component within hours and issued a public statement.
What went wrong: The model was deployed with no input validation, no jailbreak-resistance hardening, and no output filtering. There was no review of what the system prompt allowed or prevented. The chatbot had essentially been put in front of the public with all its default behaviors intact and no restrictions.
What you should take from this: A chatbot that can be talked into anything is not a customer service tool. It's a reputational liability sitting on your website.
5. Slack AI was tricked into leaking data from private channels
What happened: In August 2024, security researchers at PromptArmor published findings about a vulnerability in Slack's AI feature. By embedding hidden AI instructions inside a message posted to a public channel — a technique called indirect prompt injection — a malicious actor could cause Slack's AI assistant to retrieve and expose data from private channels it had access to. A user who had no permission to view a private channel could effectively extract its contents by asking Slack AI a question while the hidden instruction was in context.
Slack addressed the issue after the disclosure.
What went wrong: Slack AI had broad access to channel data but no hard isolation between what it could retrieve for public-facing queries versus private-channel content. The AI treated user-generated content — including messages planted by attackers — as trusted instructions.
What you should take from this: When you add an AI layer on top of a tool that already holds sensitive data, that AI becomes an attack surface. It inherits access to everything the tool can see. If an attacker can inject instructions into the AI's context, they can direct it to bring that data back out.
The pattern is the same every time
Samsung. Air Canada. Chevrolet. DPD. Slack. Different industries, different tools, different business sizes. But the root cause is identical in each case:
AI was deployed before anyone asked what could go wrong.
No data flow review. No acceptable use policy. No prompt injection testing. No output validation. No defined limits on what the AI was allowed to do or say.
These incidents happened between 2022 and 2024, using tools that are now default features in products your team probably already uses — Microsoft 365, Slack, CRM platforms, website builders.
The security review isn't a formality. It's the thing that makes the difference between a chatbot that serves your customers and one that ends up on the front page of a tech blog for the wrong reasons.
Sources: Samsung ChatGPT incident (The Verge, April 2023); Air Canada chatbot ruling (CBC News, February 2024); Watsonville Chevrolet chatbot (The Guardian, December 2023); DPD chatbot incident (The Guardian, January 2024); Slack AI prompt injection (PromptArmor, August 2024 / The Register, August 2024).