Titles: (1) A Universe of Sound: processing NASA data into sonifications to explore participant response & (2) Evaluating the effectiveness of sonification in science education using Edukoi
Authors for Paper 1: Kimberly Kowal Arcand, Jessica Sarah Schonhut-Stasik, Sarah G. Kane, Gwynn Sturdevant, Matt Russo, Megan Watzke, Brian Hsu, and Lisa F. Smith
First Author’s Institution for Paper 1: Department of High Energy Astrophysics, Smithsonian Astrophysical Observatory, Cambridge, MA, United States
Authors for Paper 2: Lucrezia Guiotto Nai Fovino, Anita Zanella, Luca Di Mascolo, Michele Ginolfi, Nicolò Carpita, Francesco Trovato Manuncola, and Massimo Grassi
First Author’s Institution for Paper 2: Department of General Psychology, University of Padova, Padua, Italy
Status: (1) Published in Frontiers [open access] & (2) Published in Springer Link [open access]
How do astronomers share the universe with those who are blind or have partial vision? Today’s bite is a double feature on sonification, or using non-speech audio (instrumental sounds, tones, synthetic sounds, etc.) to represent data! While this may be your first time hearing about sonification, if you’ve ever heard the chime of a clock every hour, the screech of a fire alarm, the chirp of a gravitational wave, or the click of a Geiger counter in physics class, it is certainly not your first time hearing a sonification. Sonifications benefit everyone, not just those who are blind or have low vision (BLV), but finding a way to sonify astronomy data that effectively conveys the interesting science sighted astronomers see can be challenging!
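To make the idea of "mapping data to sound" a bit more concrete, here is a toy Python sketch (my own illustration, not code from either paper) that turns a short data series, say a brightness profile across an image, into a string of tones, with larger values mapped to higher pitches. The function name, frequency range, and note length are all arbitrary choices for the example.

```python
import numpy as np

SAMPLE_RATE = 44100  # audio samples per second

def sonify_series(values, low_hz=200.0, high_hz=800.0, note_s=0.2):
    """Toy sonification: map each data value to a pitch between
    low_hz and high_hz (larger values sound higher) and string the
    resulting tones together into one audio clip."""
    values = np.asarray(values, dtype=float)
    # Rescale the data to the 0-1 range, then to the chosen frequency band.
    span = values.max() - values.min()
    norm = (values - values.min()) / span if span > 0 else np.zeros_like(values)
    freqs = low_hz + norm * (high_hz - low_hz)

    # Synthesize one short sine tone per data point and concatenate them.
    t = np.linspace(0.0, note_s, int(SAMPLE_RATE * note_s), endpoint=False)
    notes = [np.sin(2.0 * np.pi * f * t) for f in freqs]
    return np.concatenate(notes)

# Example: a brightness profile with a bright "source" in the middle
# becomes a rising-then-falling sweep of tones.
clip = sonify_series([1, 2, 5, 9, 5, 2, 1])
```

Real sonifications, like the ones discussed below, use far richer mappings (instruments, volume, stereo position, and more), but the core idea of translating data values into sound parameters is the same.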
Our first paper today, from Arcand et al., analyzes the response to the sonifications featured in the Chandra X-ray Observatory’s “Universe of Sound”. The Universe of Sound is a NASA-funded project featuring sonifications of astronomy images that arose during the COVID-19 pandemic as a way to continue sharing astronomy with the BLV community while maintaining physical isolation (e.g. not through a tactile 3D model; also see the Tactile Universe). Researchers surveyed 3,000 users who had listened to three of the sonifications from the Universe of Sound, the Galactic Center (watch below), Cassiopeia A, and the Chandra Deep Field South, to learn more about listeners’ experiences with the project.
The survey returned overwhelmingly positive results, with both self-identified BLV and sighted participants reporting that they enjoyed the sonification experience! While BLV participants were more likely than sighted participants to report learning from the sonifications, both groups agreed that the added audio component enhanced their experience of the astronomical images. However, when researchers asked for recommendations to improve the sonifications, two prominent themes emerged:
- Participants often misunderstood the sonifications, specifically how the visual image and the sound were linked.
- Participants had many suggestions for changing the sonification mapping: different sounds, pitches, instruments, etc.
Table 1 describes more about the sonification mapping for each of the images. Not only does each sonification feature different sounds, but each also traces the visual image differently. It is important to note that there is no standard for sonification, which can make it confusing for both sighted listeners and the BLV community to interpret astronomical data across multiple datasets, especially when no training accompanies the sonification to explain how the sounds map to the data. This is where our second paper of the day, from Fovino et al., comes in!
Section 5.1 of Fovino et al. includes an interesting literature review about the success of other projects mapping color to sound. The TLDR (too long, didn’t read) is that it’s really difficult to train participants to accurately associate a color with either a specific instrument or pitch in a short amount of time (with reported success rates between 54% and 81% and training times ranging from 10 minutes to 3 hours). This common type of sonification mapping would work best for those with “perfect pitch”, the ability to identify the pitch chroma (akin to the hue of a color, but for sound) of an isolated note, a rare ability that an estimated 1 in 10,000 people possess (Deutsch 2013).
Noting the difficulty past studies had in getting participants to recall sonification mappings built on different instruments or pitches, Fovino et al. elected to try a different type of mapping that associates “natural” sounds with colors: the sound of a crackling fire for red, water bubbles for blue, birds and rustling leaves for green, and composites of the base sounds for other colors (mixing the sounds like we mix paint to make new colors!). They tested the association with the natural sounds in elementary and middle school classrooms using a sonification tool they named Edukoi. Edukoi is a derivative of a sonification tool named Herakoi, which uses a machine-learning-based algorithm and a webcam to track where a user’s hands are “touching” an image and then sonifies the pixel it perceives the person touching.
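To get a feel for how this kind of color-to-sound mixing might work under the hood, here is a minimal, hypothetical Python sketch. It is not the actual Edukoi code: the real tool uses recorded natural sounds, while this sketch stands them in with simple synthetic tones, and all names and frequencies here are invented for illustration. The key idea, weighting each base sound by the pixel’s corresponding color channel, is the same.

```python
import numpy as np

SAMPLE_RATE = 44100  # samples per second

def placeholder_tone(freq_hz, duration_s=0.5):
    """Synthesize a sine tone standing in for a recorded natural sound
    (e.g. crackling fire, water bubbles, rustling leaves)."""
    t = np.linspace(0.0, duration_s, int(SAMPLE_RATE * duration_s), endpoint=False)
    return np.sin(2.0 * np.pi * freq_hz * t)

# Hypothetical base sounds, one per color channel.
BASE_SOUNDS = {
    "red": placeholder_tone(220.0),    # stands in for crackling fire
    "green": placeholder_tone(440.0),  # stands in for birds / rustling leaves
    "blue": placeholder_tone(880.0),   # stands in for water bubbles
}

def sonify_pixel(rgb):
    """Mix the base sounds with weights set by the pixel's RGB values,
    so composite colors produce composite sounds (like mixing paint)."""
    r, g, b = (channel / 255.0 for channel in rgb)
    total = r + g + b
    if total == 0:  # a black pixel stays silent
        return np.zeros_like(BASE_SOUNDS["red"])
    mix = (r * BASE_SOUNDS["red"]
           + g * BASE_SOUNDS["green"]
           + b * BASE_SOUNDS["blue"]) / total
    return mix

# Example: a purple pixel blends the "red" and "blue" sounds equally.
purple_clip = sonify_pixel((128, 0, 128))
```

In a tool like Edukoi, a clip like this would be played for whichever pixel the webcam perceives the user’s hand touching, updating continuously as the hand moves across the image.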
The researchers visited classrooms, trained the students on the natural sound-to-color mapping for 5 minutes, and then asked the students to identify the color of obscured geometric shapes using Edukoi. In a second test, they asked students to identify not only the color but also the shape of the obscured picture. The researchers returned to the classrooms two months later with additional tasks to see if the students remembered the sonification mapping. This time, they asked students to identify the shape and color of obscured images of sketched and real galaxies using Edukoi; you can see examples of the sketches and images in Figure 1.
Fovino et al. found that students were able to match the sound to a color close to 100% of the time when one of the base colors was present and near 75% of the time when composite colors were present in the images. More excitingly, students were able to recall the sonification mapping two months later with similar success rates. However, students struggled to reliably identify galaxy shapes, with average performance peaking at around 60%. The researchers note that performance on shape identification may improve if a tactile guide, like a haptic response on a tablet, is used in tandem with the sonification!
There has been great progress towards making astronomy accessible for those with visual disabilities, but there is still a long way to go, which is why it is imperative to support and share projects like the Universe of Sound and Edukoi/Herakoi. It’s important to remember that our work as astronomers is not truly “accessible” until everyone can participate in and understand our science, which I think is best highlighted by this quote from our first paper of the day, by Arcand et al.:
“The public availability of astronomy data does not necessarily equate to the true accessibility and equity of that data, much as providing a sidewalk in a high-traffic area improves pedestrian safety but remains inherently inaccessible and inequitable without thoughtful design (by cutting the curb).”
If you want to learn more about sonifying your own astronomy data, check out this astro[sound]bite and its accompanying astrobite for some resources. To read more astrobites reporting on sonification, check out this bite recapping the “Audible Universe” conference, this bite on the “Dark Tour of the Universe” workshop, and this bite on creating audio descriptions for astronomy images!
Astrobite edited by Storm Colloms
Featured image credit: NASA/Chandra X-ray Observatory’s Universe of Sound