Air Canada’s Chatbot & Misinformation

What Happened?

  • A traveller used an airline’s official website chatbot to ask about bereavement-fare rules.
  • The chatbot incorrectly stated that the traveller could apply for a bereavement discount after travelling.
  • The traveller followed this advice, completed the trip, and later applied for the refund.
  • The airline refused, saying the policy clearly required the discount to be requested before travel.

The Dispute

  • The airline argued the chatbot was “separate” and that the company wasn’t responsible for what it said.
  • The tribunal disagreed, ruling that the airline was fully responsible for all information published on its website, including the chatbot.
  • The traveller was awarded compensation.

Why This Case Matters

This incident became widely reported because it highlights a growing reality:

  • If your chatbot provides incorrect information, your organisation is still accountable.

The case shows that:

  • Chatbot answers are treated the same as any other written content on your website.
  • Businesses must verify that their chatbot aligns with official policy.
  • “The chatbot made the mistake” is not an acceptable defence.

Common Prompt-Design Mistakes

The same kinds of failure recur whenever a chatbot prompt:

Mixes Styles, Tones, or Instructions

  • Conflicting instructions confuse the model and cause unstable behaviour.

Doesn’t Say What To Do When the Answer Isn’t Available

  • Most hallucinations come from this one mistake.
  • If the prompt doesn’t allow “I don’t know,” the bot will make things up.
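One way to guard against this is to permit a fallback answer explicitly. The Python sketch below is illustrative only: the prompt wording, function names, and toy keyword matching are assumptions, not anything from the actual case.

```python
# Hypothetical system prompt that explicitly allows "I don't know".
SYSTEM_PROMPT = """You are a customer-support assistant.
Answer ONLY from the policy excerpts provided in the conversation.
If the excerpts do not contain the answer, reply exactly:
"I don't know - please contact a human agent."
Never guess or invent policy details."""

FALLBACK = "I don't know - please contact a human agent."

def answer(question: str, policy_excerpts: list[str]) -> str:
    """Toy retrieval: return an excerpt only if a word from the
    question appears in it; otherwise use the safe fallback."""
    for excerpt in policy_excerpts:
        if any(word in excerpt.lower() for word in question.lower().split()):
            return excerpt
    return FALLBACK
```

The key design choice is that the fallback path exists at all: when retrieval finds nothing, the bot declines rather than improvises.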

Relies on the Model’s Default Behaviour

  • The default behaviour is not tailored to your organisation.
  • Without a clear prompt, the bot behaves like a general-purpose AI—not a council, tourism, or support assistant.

What Went Wrong?

This failure illustrates several common chatbot design issues:

No Boundaries

The chatbot confidently gave an answer even when it should have:

  • asked for clarification
  • provided a disclaimer
  • or redirected to a human agent
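A minimal boundary check can sit in front of the model. In this Python sketch, the topic list and function name are illustrative assumptions; the point is that sensitive queries are routed away from a confident direct answer.

```python
# Hypothetical routing guard: the topic list is an illustrative assumption.
SENSITIVE_TOPICS = {"refund", "bereavement", "compensation", "legal"}

def route(question: str) -> str:
    """Redirect policy-sensitive questions to a human instead of
    letting the model answer directly."""
    words = set(question.lower().split())
    if words & SENSITIVE_TOPICS:
        return "escalate_to_human"
    return "answer_with_model"
```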

Incorrect or Unverified Knowledge

  • The response contradicted the airline’s official policy.

No Review or Testing Process

There was no system in place to:

  • review chatbot answers
  • validate policy-related replies
  • update or test the bot regularly
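Such a review system can start as simple regression checks. The sketch below is a hypothetical example (the policy entries and helper function are assumptions): it flags any bot reply that fails to state the required policy wording.

```python
# Hypothetical regression check: compare bot answers against
# required phrases from the official policy. Entries are illustrative.
OFFICIAL_POLICY = {
    "bereavement discount timing": "must be requested before travel",
}

def check_answer(topic: str, bot_answer: str) -> bool:
    """Return True only if the bot's reply contains the required
    official-policy wording for that topic."""
    required = OFFICIAL_POLICY.get(topic)
    return required is not None and required in bot_answer.lower()
```

Run checks like these on a fixed question set whenever the bot or the policy changes, so contradictions surface before customers see them.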

Poor Prompt Design

The chatbot wasn’t instructed to:

  • avoid giving definitive answers on complex policy matters
  • handle uncertainty with caution
  • escalate sensitive queries
  • check official sources before responding
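Those instructions can be written directly into the system prompt. The wording below is a hypothetical sketch, not any airline's actual prompt:

```python
# Illustrative system prompt encoding the missing instructions above.
SUPPORT_PROMPT = (
    "You are a customer-support assistant for an airline.\n"
    "- Do not give definitive answers on complex policy matters; "
    "quote the official policy page instead.\n"
    "- When uncertain, say so and suggest confirming with an agent.\n"
    "- Escalate refund, bereavement, and legal queries to a human.\n"
    "- Only answer from the official policy documents you are given."
)
```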