When AI Fails: Why the User, Not the Algorithm, Is Truly to Blame

9:40 AM   |   01 June 2025

We have undeniably entered the Age of AI. Artificial intelligence is no longer a futuristic concept; it is integrated into our daily lives, powering everything from search engines and recommendation systems to creative tools and medical diagnostics. Its presence is permanent and ever-expanding.

Much of the confusion surrounding AI stems from its ability to converse in human language. This has fueled the development and marketing of AI companions and therapists, and even claims by some researchers that AI and robots possess feelings and thoughts. Even major tech companies like Apple have reportedly explored concepts such as a lamp-like robot that moves as though it feels emotion, blurring the line between tool and entity.

This anthropomorphism feeds a crucial debate: who is to blame when AI fails, hallucinates, or produces errors with real-world consequences? Headlines pose the question again and again.

Let's cut to the chase and deliver the straightforward answer upfront: The user is responsible.

AI is fundamentally a tool. Like any tool, its output and the consequences of its use are the responsibility of the person wielding it. Consider other tools: If a truck driver causes an accident because they fell asleep, the truck is not at fault. If a surgeon makes a mistake during an operation, the surgical instruments are not to blame. If a student performs poorly on a test, the pencil used to fill in the bubbles is not the culprit.

While it may seem overly simplistic to place the blame solely on the user, a deeper examination of real-world incidents involving AI errors reveals precisely why this perspective is not only logical but necessary for navigating the AI age responsibly.

When Writers Get Caught: AI Prompts and Fabricated Books

The creative and publishing worlds have provided some early, often humorous, examples of AI misuse and the resulting user accountability. Consider the case of fantasy romance author Lena McDonald. She was caught red-handed using AI to mimic another writer's style.

Her novel, Darkhollow Academy: Year 2, released in March 2025, contained an unedited AI prompt directly within the text of Chapter 3: “I’ve rewritten the passage to align more with J. Bree’s style, which features more tension, gritty undertones, and raw emotional subtext beneath the supernatural elements.” This wasn't a note to herself; it was published as part of the narrative, clearly copied and pasted from an AI chatbot's output alongside the generated prose she intended to pass off as her own.

This incident, while embarrassing, is not isolated. The year 2025 saw at least two other romance authors, K.C. Crowne and Rania Faris, similarly caught with AI-generated prompts embedded in their self-published works. These cases highlight a concerning trend where authors rely on AI not just for assistance but for outright generation, neglecting the fundamental steps of editing and verification.

The issue extends beyond individual authors to journalism. In a striking example from May 2025, the Chicago Sun-Times and The Philadelphia Inquirer published a syndicated “Summer Reading List for 2025.” This list featured 15 books supposedly by well-known authors. However, most of the books were entirely fabricated. Titles like Tidewater Dreams by Isabel Allende, Nightshade Market by Min Jin Lee, and The Last Algorithm by Andy Weir were presented as real books by real authors, but they simply did not exist.

The writer responsible, Marco Buscaglia, admitted to using AI to generate the list. While the article was syndicated and not produced directly by the newspapers that printed it, the error originated with the writer who used the AI tool. The responsibility for verifying the existence of the books and authors rested squarely on his shoulders.

In both the romance novel and journalism examples, the fault lies unequivocally with the user – the writer. A writer's role inherently includes editing, reviewing, and verifying their work. They must read what they produce, consider revisions, and ensure accuracy. These authors failed to perform these basic professional duties. They didn't even read their own output carefully enough to catch the embedded prompts or verify the existence of the books they were recommending.

While some publications employ dedicated fact-checkers, the primary responsibility for the accuracy of assertions and the validity of sources always falls on the writer. Writers are, in essence, their own first line of defense for editing and fact-checking. It's an integral part of the craft.

These real-life scenarios serve as clear illustrations: the writer, as the AI user, is directly responsible for errors produced by AI chatbots. The user selects the tool, crafts the input (prompt engineering), evaluates the output, and bears sole responsibility both for correcting inaccuracies and for the consequences of publishing unverified output.
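
To make that verification step concrete, here is a minimal Python sketch of the kind of check the reading-list writer could have run before filing the piece: it queries the public Open Library search API for each AI-suggested title and flags anything the catalog cannot find. The helper name and the choice of Open Library are illustrative assumptions, not a description of any newsroom's actual tooling.

```python
# Illustrative sketch: sanity-check AI-suggested book titles against the public
# Open Library search API before publishing. The titles below are the fabricated
# examples from the syndicated reading list.
import requests

def title_exists(title: str, author: str) -> bool:
    """Return True if Open Library lists a work matching this title and author."""
    resp = requests.get(
        "https://openlibrary.org/search.json",
        params={"title": title, "author": author, "limit": 5},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("numFound", 0) > 0

suggestions = [
    ("Tidewater Dreams", "Isabel Allende"),
    ("Nightshade Market", "Min Jin Lee"),
    ("The Last Algorithm", "Andy Weir"),
]

for title, author in suggestions:
    status = "found" if title_exists(title, author) else "NOT FOUND - verify before publishing"
    print(f"{title} by {author}: {status}")
```

A lookup like this proves nothing on its own, but a "not found" result is exactly the signal that should send a writer back to primary sources before a list goes anywhere near print.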

Beyond Writing: Bigger Errors, Same Accountability

The principle of user responsibility extends far beyond the realm of creative writing and journalism. AI errors have manifested in customer service, legal proceedings, and even healthcare, consistently highlighting the user's ultimate liability.

Last year, Air Canada's chatbot told a customer about a bereavement refund policy that did not exist. When the customer pursued the matter in a small-claims tribunal, Air Canada attempted to evade responsibility by arguing the chatbot was a “separate legal entity.” The tribunal rejected this argument outright and ruled against the airline. As the deployer of the chatbot in a customer-facing role, Air Canada was held accountable for the information it provided, regardless of the source.

Google's AI Overviews, designed to provide quick summaries at the top of search results, famously became a source of ridicule after suggesting users add glue to pizza or consume small rocks. While Google quickly worked to mitigate these issues, the responsibility for the harmful or absurd suggestions ultimately lies with Google, the entity that deployed the AI and presented its output as authoritative information without sufficient safeguards or verification.

Apple also experienced a similar issue with its AI-powered notification summaries, which generated fake headlines. One notable fabrication included a false report about the arrest of Israeli Prime Minister Benjamin Netanyahu. Again, the company deploying the AI feature is responsible for the erroneous information it presents to its users.

In the legal profession, Canadian lawyer Chong Ke learned a harsh lesson about AI reliability after citing two court cases provided by ChatGPT in a custody dispute. The AI had completely fabricated both cases, and Ke was ordered to pay the opposing counsel’s research costs. Verifying legal precedents is a basic professional duty, one Ke neglected by blindly trusting the AI's output.

Even in critical fields like healthcare, AI transcription tools have shown significant flaws. Reports from last year exposed major inaccuracies in AI-powered medical transcription, particularly tools based on models like OpenAI’s Whisper. Researchers found that Whisper frequently “transcribes” content that was never actually spoken. A study presented at the Association for Computing Machinery FAccT Conference revealed that approximately 1% of Whisper’s transcriptions contained fabricated content, and alarmingly, nearly 38% of these errors had the potential to cause harm in a medical context. While the AI model has limitations, the medical professional or transcriber using the tool is responsible for reviewing and verifying the transcription's accuracy before it becomes part of a patient's record or is used for diagnosis or treatment.
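
Tooling can help direct that review where it matters most. As a rough sketch, assuming the open-source openai-whisper Python package and a local audio file named dictation.wav (both illustrative choices, not a description of any clinical system), a transcription script can flag low-confidence segments for a human to check instead of passing the raw transcript straight into a record:

```python
# Illustrative sketch: transcribe a recording with the open-source Whisper package
# and flag low-confidence segments for human review. The threshold is an arbitrary
# example value, not a clinically validated cutoff.
import whisper

REVIEW_THRESHOLD = -1.0  # segments with avg_logprob below this get flagged

model = whisper.load_model("base")
result = model.transcribe("dictation.wav")

for seg in result["segments"]:
    marker = "REVIEW" if seg["avg_logprob"] < REVIEW_THRESHOLD else "ok"
    print(f"[{marker}] {seg['start']:6.1f}s-{seg['end']:6.1f}s: {seg['text'].strip()}")
```

A flag like this only narrows the search; the human reviewer, not the model, still decides what ends up in the patient's record.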

Every single one of these diverse examples points to the same conclusion: the errors and problems that arise from AI use fall squarely on the shoulders of the users or the entities deploying the AI. Any attempt to shift blame to the AI tools themselves stems from a fundamental misunderstanding of what AI is – a complex pattern-matching and prediction engine, not a sentient, responsible agent.

The Bigger Picture: Navigating the Age of AI Responsibly

The common thread running through all these incidents is the failure of the user to adequately supervise and verify the AI's output. Users treated the AI as an autonomous, infallible expert rather than a tool requiring human oversight.

At one extreme, we see the irresponsible use of unsupervised AI, leading to the errors detailed above. At the other extreme is the complete avoidance of AI tools, which can also be a mistake in the current technological landscape. Many companies and organizations, fearing the risks, have implemented outright bans on AI chatbot usage. While caution is warranted, a blanket ban can prevent employees from leveraging AI's potential benefits for productivity and innovation.

Successfully navigating the Age of AI requires finding a pragmatic middle ground. This involves integrating AI tools into our workflows to enhance our capabilities and efficiency, but doing so with a clear understanding that the human user remains in control and ultimately responsible. We must learn to use AI effectively, which includes developing critical skills in prompt engineering, evaluating AI output, and, most importantly, verifying every piece of information or content generated by the AI before it is used or disseminated.

The knowledge that any use of AI carries 100% user responsibility should drive a rigorous process of review and verification. AI can be a powerful assistant, but it is not a substitute for human judgment, expertise, or accountability.

It is likely that we will continue to see instances of irresponsible AI use leading to errors, problems, and potentially even significant negative consequences. However, the blame should not be directed at the software itself. The AI performs based on its training data and algorithms; it does not possess intent or consciousness.

As the fictional AI supercomputer HAL 9000 from 2001: A Space Odyssey famously insisted, albeit in a different context, errors of this kind can only be attributable to human error. In the case of AI failures, this rings true. The human user, who chose the tool, provided the input, and failed to verify the output, is the one accountable.

Embracing AI responsibly means accepting this accountability and implementing the necessary checks and balances to ensure that AI serves as a beneficial tool rather than a source of avoidable mistakes.