On Monday, sheet music platform Soundslice announced that it had developed a new feature after discovering that ChatGPT was incorrectly informing users that the platform could import ASCII tablature—a text-based guitar notation format that the company had never supported. The incident is an intriguing case of a business building functionality in response to an AI hallucination.
Typically, Soundslice digitizes sheet music from photos or PDFs and syncs the notation with audio or video recordings, allowing musicians to follow the music as they hear it. The platform also offers tools for slowing down playback and practicing challenging sections.
Adrian Holovaty, co-founder of Soundslice, explained in a recent blog post that the development process for the new feature initially baffled the team. A few months prior, Holovaty noticed unusual activity in the company’s error logs. Instead of standard sheet music uploads, users were submitting screenshots of ChatGPT conversations containing ASCII tablature, a simple text representation of guitar music using strings and fret numbers.
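For context, ASCII tablature renders each guitar string as a line of text, with fret numbers placed at the horizontal position where the note occurs. The following is a minimal illustrative sketch of reading that format in Python; it is not Soundslice's actual importer, and the function name and event representation are invented for this example:

```python
def parse_ascii_tab(tab: str) -> list[tuple[int, str, int]]:
    """Parse ASCII guitar tab into (column, string_name, fret) events.

    Each line looks like 'e|---0---|': the string name, a pipe,
    then dashes with fret numbers at time positions. This is a
    simplified sketch, not a real tab importer.
    """
    events = []
    for line in tab.strip().splitlines():
        name, _, rest = line.partition("|")
        name = name.strip()
        if not name:
            continue  # skip blank or malformed lines
        col = 0
        while col < len(rest):
            if rest[col].isdigit():
                start = col
                # consume multi-digit frets like 10 or 12
                while col < len(rest) and rest[col].isdigit():
                    col += 1
                events.append((start, name, int(rest[start:col])))
            else:
                col += 1
    # sort by column so events come out in playing order
    events.sort()
    return events


TAB = """e|---0---|
B|-1-----|"""
print(parse_ascii_tab(TAB))  # → [(1, 'B', 1), (3, 'e', 0)]
```

Even this toy version hints at why the format is appealing to ChatGPT users: it is plain text that a chatbot can emit directly in a conversation, unlike the image or PDF scans Soundslice was built around.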
The company’s scanning system was not designed to support this type of notation, leading Holovaty to investigate further. After testing ChatGPT himself, Holovaty discovered that the AI was instructing users to create Soundslice accounts and import ASCII tabs for audio playback—a feature the platform had never offered.
Holovaty expressed concern over the issue, stating, “ChatGPT was outright lying to people and making us look bad, setting false expectations about our service.” Recognizing the potential impact of such AI hallucinations, the company decided to build the feature, effectively turning a false premise into real functionality.
This case highlights a growing problem in AI development: models confidently generating false or misleading information, a phenomenon known as “hallucination” or “confabulation.” Since ChatGPT’s launch in November 2022, such inaccuracies have led users to believe in capabilities that do not exist, underscoring the need for ongoing improvements in AI reliability.