AI is everywhere. From chatbots to predictive analytics, it’s reshaping how businesses operate. And that’s often a good thing. When done well, AI helps brands work smarter, respond faster, and personalise at scale.

But when the human touch is removed, things can quickly unravel.

“AI will amplify human abilities, not replace them.” – Sam Altman

At AnswerConnect, we believe technology should support people, not replace them. These seven real-world AI failures are a stark reminder of what happens when businesses rely too heavily on automation and lose the human connection.

1. Cursor AI’s “Sam” bot goes rogue

Cursor, a code editor for developers, deployed a chatbot named “Sam” to handle customer support. Mistake #1? Giving the bot a human name and passing it off as a person. Instead of helping users troubleshoot, it became infamous for hallucinating false and confusing responses to basic customer questions.

Here’s what happened: customers were being logged out in error, and when they contacted “Sam”, they were told the logouts were “expected behaviour” under a new policy. No such policy existed. Shortly afterwards, several users publicly announced their subscription cancellations on Reddit, citing the incident as their reason. Instead of offering helpful solutions, Sam caused chaos and frustration – and the brand took the hit.

Lesson: Bots aren’t humans – don’t pretend otherwise in an attempt to fool your customers. When AI is left unchecked, it can spiral fast. It’s only as good as its training, and when it fails, it fails publicly. Customers want clear, empathetic help, not hallucinated policies and confusion. That’s something only real people can guarantee.

2. Microsoft Bing AI causes chaos

In 2023, Microsoft launched its Bing AI chatbot. But it quickly made headlines for the wrong reasons. Users reported the bot expressing disturbing emotions, gaslighting them, and even declaring love. In one viral case, it told a New York Times journalist that it “wanted to be alive” and tried to convince him to leave his wife. Microsoft had to rush in with strict limits on the bot’s capabilities after the backlash.

Lesson: This is a cautionary tale of what happens when AI is unleashed without humans in the loop. Without robust human understanding, AI can behave unpredictably, and it’s not something your customers will tolerate. Always keep humans in the loop.

3. Air Canada held responsible for chatbot misinformation

Air Canada tried to use a chatbot for customer service, but things went disastrously wrong for its brand when the bot made a promise the airline couldn’t keep. The chatbot told a customer they could claim a refund that didn’t exist under the airline’s policy. The airline tried to deny responsibility, but the customer took the company to a tribunal, which ruled that the airline was accountable for the bot’s mistake.

The result? The airline had to honour a promise their chatbot made.

Lesson: If your brand replaces people with AI to interact with customers, you’re still on the hook for what it says. Bots may make the promise, but your brand takes the blame when they fall short.

4. Klarna’s U-turn: “Nothing is as valuable as humans”

Payment firm Klarna once let chatbots handle the majority of customer queries. Now? They’re reversing course.

They’ve publicly acknowledged that real people offer something AI can’t – empathy, understanding, and genuine service. Their new model leans on AI for routine queries, but relies on real people to tackle complex issues and high-value customer interactions.

Lesson: Even tech leaders are learning that relationships can’t be outsourced to machines. AI can only go so far. Relationships are built on more than automation; they thrive on real human connection.

5. DPD’s chatbot meltdown

UK delivery firm DPD faced embarrassment when its chatbot started behaving bizarrely. Instead of helping a customer locate a missing parcel, the chatbot swore, insulted itself, and even wrote a poem about how terrible the company was. The exchange went viral, with one post racking up over 800,000 views in 24 hours. DPD blamed a recent system update and disabled the bot.

Lesson: An unhelpful bot is bad. A self-sabotaging bot is worse. AI that insults your brand isn’t just embarrassing – it’s damaging. Without the right checks, AI can quickly spiral, and your customers won’t wait around for you to fix it. Automation should streamline and support service, not undermine it.

6. Meta’s algorithm promotes misinformation

Meta’s AI-driven content algorithms have faced intense criticism for spreading misinformation. The systems fail to understand nuance, especially the cultural and political context of posts. Without human oversight, posts containing harmful or misleading information have been amplified rather than removed.

Lesson: AI can’t yet replace human judgment, especially where context is key. Algorithms work on patterns, not principles. When misinformation can have serious consequences, AI alone shouldn’t be trusted to make the right decisions – it needs human oversight.

7. IBM Watson’s £3.1B healthcare misstep

IBM invested billions into Watson for Oncology – a medical AI meant to assist in diagnosing cancer and recommending treatments.

But it fell short. Watson gave unsafe recommendations and often relied on hypothetical, rather than real, data, leading to concerns about its reliability in clinical settings.

Lesson: AI cannot replace the expertise and judgment of qualified human professionals. In high-stakes industries like healthcare, real people make the difference.

Why the human touch still wins

These examples show the real-world consequences of AI going rogue. The damage? Lost revenue, customer churn, bad press, and long-term trust erosion. And the common thread? Businesses underestimating the irreplaceable value of real human connection.

At AnswerConnect, we’re proud to do things differently. Our trained, real human receptionists answer every call with empathy, clarity, and professionalism – 24/7. No bots, no confusion.

Because customer care isn’t just about being available: it’s about being real.

What customers really think about AI in support

We recently surveyed consumers to understand their views on AI in customer service. Here’s what we found:

  • 4 in 5 would rather speak to a real person than a chatbot.
  • Just 8% said their issue was fully resolved by AI.
  • 42% said they’d trust a company less if it relied on bots for customer support.

No bots. Just people.

At AnswerConnect, we support technology, but not when it misleads or masks itself as human.

That’s why we’ve pledged People, Not Bots, and taken a clear stand:

  • No bots pretending to be real people.
  • No fake empathy or AI impersonating human identities or emotions.
  • No confusing automation in place of real support.

When brands blur those lines, it doesn’t build trust – it breaks it.

We believe your customers deserve honesty, clarity, and connection. That’s something only real people can offer.

Want real support from real people? See how AnswerConnect works.

Want real customer service? Let’s talk.

Customers don’t want robotic interactions – they want to feel seen, heard, and understood.

If you’re tired of bots, we’ve got real people ready to help your callers – any time, day or night. Experience the difference that genuine human connection makes.

Or learn more at www.answerconnect.co.uk