When AI Hallucinations Lead to Real Innovations

Image © Ars Technica
Sheet music platform Soundslice responded to a feature that ChatGPT invented by building it for real, a tangible consequence of AI confabulation.

On Monday, sheet music platform Soundslice announced that it had developed a new feature after discovering that ChatGPT was incorrectly telling users the platform could import ASCII tablature, a text-based guitar notation format the company had never supported. The episode is a striking instance of a business building functionality in response to an AI hallucination.

Typically, Soundslice digitizes sheet music from photos or PDFs and syncs the notation with audio or video recordings, allowing musicians to follow the music as they hear it. The platform also offers tools for slowing down playback and practicing challenging sections.

Adrian Holovaty, co-founder of Soundslice, explained in a recent blog post that the feature's origin initially baffled the team. A few months earlier, Holovaty had noticed unusual activity in the company's error logs: instead of standard sheet music uploads, users were submitting screenshots of ChatGPT conversations containing ASCII tablature, a plain-text notation that represents guitar music as fret numbers placed on lines standing for the strings.
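For reference, a fragment of ASCII tablature looks something like this (an illustrative example, not one of the actual uploads):

e|---0---3---|
B|---1---0---|
G|---0---0---|
D|---2---0---|
A|---3---2---|
E|-------3---|

Each line is a guitar string, and each number is the fret to play at that position in time.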

The company’s scanning system was never designed to handle this type of notation, which prompted Holovaty to investigate. After testing ChatGPT himself, he discovered that the AI was instructing users to create Soundslice accounts and import ASCII tabs for audio playback, a feature the platform had never offered.

Holovaty expressed concern over the issue, stating, “ChatGPT was outright lying to people and making us look bad, setting false expectations about our service.” Recognizing the impact of such AI hallucinations, the company decided to build the feature, effectively turning a false premise into real functionality.
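At its core, supporting the format means extracting string-and-fret events from the text. The short Python sketch below is purely illustrative; Soundslice has not published its parser, and a real importer would also need to handle chords, timing, and techniques such as bends and slides.

# Hypothetical illustration: extract (string, column, fret) events
# from a block of ASCII tablature. Not Soundslice's actual code.

TAB = """\
e|---0---3---|
B|---1---0---|
G|---0---0---|
D|---2---0---|
A|---3---2---|
E|-------3---|"""

def parse_tab(tab):
    """Yield (string_name, column, fret) for every fret number found."""
    for line in tab.splitlines():
        string_name, _, body = line.partition("|")
        col = 0
        while col < len(body):
            if body[col].isdigit():
                # Frets can be two digits (10, 12, ...), so read the whole number.
                end = col
                while end < len(body) and body[end].isdigit():
                    end += 1
                yield string_name, col, int(body[col:end])
                col = end
            else:
                col += 1

# Sorting by column groups notes that sound together (same position = chord).
for string_name, col, fret in sorted(parse_tab(TAB), key=lambda e: e[1]):
    print(f"column {col:2d}: string {string_name}, fret {fret}")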

This case highlights a growing problem in AI development: models confidently generating false or misleading information, a phenomenon known as “hallucination” or “confabulation.” Since ChatGPT’s launch in November 2022, such inaccuracies have led users to believe in capabilities that do not exist, underscoring the need for ongoing improvements in AI reliability.

 

Source: Ars Technica

