When AI Hallucinates: How ChatGPT Forced a Music App Founder to Build a New Feature

9:00 PM   |   09 July 2025

In the rapidly evolving landscape of artificial intelligence, large language models (LLMs) like ChatGPT have demonstrated remarkable capabilities, from generating creative text to assisting with complex problem-solving. However, alongside their impressive feats, these models are also prone to 'hallucinations' – confidently presenting false information as fact. This phenomenon recently presented a unique challenge to Adrian Holovaty, the founder of the music-teaching platform Soundslice, leading to an unexpected turn in his product development roadmap.

Holovaty, well-known in the tech community as one of the original creators of the popular Python web framework Django, launched Soundslice in 2012. The platform, which remains "proudly bootstrapped," according to Holovaty in an interview with TechCrunch, is designed to help students and teachers learn music more effectively. Its standout feature is an interactive video player that synchronizes music notation with performance videos, allowing users to see exactly what notes are being played at any given moment.

Beyond the synchronized player, Soundslice also incorporates AI into its functionality, specifically through a "sheet music scanner." This tool allows users to upload an image of traditional paper sheet music, which the AI then processes to create an interactive digital version. Holovaty closely monitors the error logs for this feature, using them as a crucial source of feedback to identify issues and plan improvements.

The Mystery of the Strange Uploads

It was within these error logs that the mystery began to unfold. For weeks, Holovaty noticed a recurring pattern of strange images being uploaded to the scanner – images that were clearly not traditional sheet music. Instead, they depicted screenshots of ChatGPT sessions. These screenshots contained conversations where users were interacting with the AI, and the AI was generating blocks of text and symbols known as ASCII tablature.

ASCII tablature, or ASCII tab, is a text-based system used primarily for notating guitar and bass music. Unlike standard musical notation, which uses staves, clefs, notes, and rests, ASCII tab relies on ordinary keyboard characters: each horizontal line of hyphens represents a string, and numbers placed on that line indicate which fret to press (with 0 denoting an open string). It's a simple, accessible format for sharing basic musical ideas online, particularly among amateur musicians, but it lacks the richness and detail of standard notation and is not designed for automated processing by typical sheet music software.

The influx of these images was puzzling. While the volume wasn't overwhelming enough to cause significant technical or financial strain on Soundslice's infrastructure, their presence in the sheet music scanner's error logs was a persistent anomaly. Holovaty was mystified, as he recounted in a blog post detailing the experience. His scanner was built to interpret the visual patterns of standard musical notation, not the text-based structure of ASCII tab.

Unmasking the AI's Hallucination

The breakthrough came when Holovaty decided to interact with ChatGPT himself. He experimented with prompts related to music notation and Soundslice. That's when he discovered the source of the strange uploads: ChatGPT was confidently instructing users that they could take an image of the ASCII tab it generated, upload it to Soundslice, and the platform would convert it into hearable, interactive music.

This was a classic example of an AI hallucination. ChatGPT was presenting false information – a non-existent feature – with the same authoritative tone it uses for accurate information. Users, trusting the AI, were following its instructions, leading to the stream of incompatible images flooding Soundslice's scanner and generating error logs.

Holovaty realized that ChatGPT, unintentionally, had become a powerful, albeit inaccurate, promoter for Soundslice. It was driving traffic and sign-ups by suggesting a capability the app didn't possess. However, this unexpected promotion came at a significant cost: reputational damage. New users arriving with the false expectation set by the AI would inevitably be disappointed when the promised feature failed to work. As Holovaty explained to TechCrunch, "The main cost was reputational: new Soundslice users were going in with a false expectation. They'd been confidently told we would do something that we don't actually do."

The Difficult Decision: Respond to Misinformation?

Faced with this peculiar situation, Holovaty and his team debated how to respond. The options were limited and none were ideal. They could add disclaimers across the Soundslice site, explicitly stating that the scanner does not support ASCII tab or ChatGPT screenshots. This approach would manage user expectations but might also highlight a limitation and potentially deter users who arrived specifically because of the AI's suggestion.

Alternatively, they could invest resources into building the feature that ChatGPT had hallucinated. This would involve developing the technical capability to process ASCII tab images and convert them into Soundslice's interactive format. This was a feature Holovaty had never planned to build, considering ASCII tab a niche and technically challenging format compared to standard notation.

The dilemma was stark: Should a company alter its product roadmap and expend development resources in direct response to misinformation generated by an external AI? It raised fundamental questions about the influence of powerful AI models on user behavior and the unexpected pressures they can place on businesses.

Turning a Hallucination into Reality

After careful consideration, Holovaty made the decision to build the feature. Soundslice would develop the capability to scan and process ASCII tablature images, effectively making ChatGPT's hallucination a reality. While he was happy to add a tool that users were clearly interested in (as evidenced by their attempts to use it), he couldn't shake the feeling that his hand had been forced in a "weird way."

This incident highlights a fascinating and potentially growing challenge in the age of pervasive AI. As users increasingly rely on LLMs for information and recommendations, the AI's outputs – even the incorrect ones – can directly influence consumer behavior and create unexpected demands on businesses. Soundslice's situation might be one of the first documented cases where an AI's hallucination directly led a company to develop a new product feature.

[Image: uploads to Soundslice's scanner showing ChatGPT sessions with ASCII tab. Image Credits: Adrian Holovaty]

The Broader Implications: AI as an Unpredictable Salesperson

The story quickly resonated with the tech community, particularly on platforms like Hacker News. One interesting perspective shared by fellow programmers compared the situation to an over-eager human salesperson who promises features that don't exist, thereby creating pressure on the development team to deliver. Holovaty found this comparison "very apt and amusing," acknowledging the parallel between an AI confidently misrepresenting a product and a human doing the same.

This analogy, while humorous, underscores a serious point. A human salesperson can be retrained or corrected; fixing a widespread AI hallucination across millions of interactions is a far more complex problem. The AI doesn't learn from individual user failures or company complaints in real time, so nothing stops it from repeating the false claim to the next user.

The Soundslice case raises important questions for businesses operating in the AI era:

  • How should companies monitor and respond to AI-generated misinformation about their products or services?
  • What is the responsibility of AI developers to mitigate hallucinations that could negatively impact third parties?
  • Will AI hallucinations increasingly influence consumer expectations and, consequently, product development priorities?
  • How can companies differentiate between AI-driven "hype" and genuine market demand?

The technical challenge of processing ASCII tab also warrants a closer look. While seemingly simple, converting the text-based grid into structured musical data that can be synchronized with video and manipulated interactively is non-trivial. Standard optical music recognition (OMR) systems are trained on the visual patterns of traditional notation. Developing a system to accurately parse the variable formatting and conventions of ASCII tab requires a different approach, essentially building a new type of music recognition engine specifically for this text-based format. This adds weight to Holovaty's initial reluctance and the feeling of being pushed into this development.
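One step such a recognizer would need, after extracting fret numbers from the text, is turning character columns into an ordered series of note onsets. The sketch below assumes the same illustrative (string, column, fret) event format used informally above; it is a guess at the shape of the problem, not Soundslice's design.

```python
from collections import defaultdict

# Hypothetical (string_name, column, fret) events, as a text parser
# might emit them from one line-group of an ASCII tab.
EVENTS = [("G", 3, 2), ("D", 1, 3), ("B", 5, 1), ("e", 7, 0), ("B", 9, 1)]

def events_to_onsets(events):
    """Group events that share a column into simultaneous onsets
    (chords), ordered left to right. Note that column spacing is NOT
    reliable rhythm: two transcriptions of the same riff can space the
    same notes differently, so recovering actual durations needs
    heuristics or user input well beyond this sketch."""
    by_col = defaultdict(list)
    for string, col, fret in events:
        by_col[col].append((string, fret))
    return [notes for _, notes in sorted(by_col.items())]

print(events_to_onsets(EVENTS))
```

The hard part is everything this sketch punts on: inconsistent spacing, two-digit frets, bends and slides notated with letters, and multi-line tab systems that must be stitched together in time.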

Moreover, this incident highlights the unpredictable nature of user interaction with LLMs. Users are not just asking questions; they are increasingly using AI outputs as instructions for interacting with other digital tools and services. When the AI's understanding of those tools is flawed, it creates friction and frustration downstream.

Navigating the Future of AI and Product Development

The Soundslice story serves as a cautionary tale and a fascinating case study on the unexpected ways AI can impact businesses. While AI can be a powerful tool for innovation and efficiency, its current limitations, particularly the propensity for confident hallucinations, can create unforeseen challenges.

For bootstrapped companies like Soundslice, every development decision is critical. Resources are limited, and prioritizing features is essential for survival and growth. Being compelled to build a feature not based on organic user requests or strategic planning, but on an AI's fabricated claim, is a unique burden.

This situation may become more common as AI models become more integrated into daily life and users rely on them for information about a vast array of products and services. Companies may need to develop new strategies for monitoring the AI landscape, detecting misinformation about their offerings, and deciding how – or if – to counteract it.

Ultimately, Adrian Holovaty's decision to build the ASCII tab feature transforms a negative situation into a potentially positive one. By addressing the AI-driven demand, Soundslice gains a new capability that some users clearly desire, even if that desire was initially sparked by a hallucination. It's a testament to the adaptability required in the tech industry, especially when navigating the unpredictable currents of artificial intelligence.

The incident underscores the need for both AI developers and the companies affected by AI outputs to consider the downstream consequences of hallucinations. As AI becomes more integrated into the fabric of the internet and user workflows, the line between AI-generated information and perceived reality blurs, creating novel challenges for product development, user trust, and business strategy.

While the comparison to an over-eager salesperson is amusing, the reality is more complex. AI operates at a scale and with a level of confidence that can amplify misinformation far beyond what any human could achieve. Soundslice's experience is a vivid illustration of this new dynamic and the creative, sometimes forced, adaptations businesses may need to make in the age of AI.

The future may see companies actively monitoring AI outputs for mentions of their products, engaging with AI developers to correct factual errors, or even, like Soundslice, choosing to build features to align with AI-generated user expectations. The relationship between AI, users, and product development is still being written, and stories like this provide valuable early chapters.
