Tuning in to the Sound of the Universe

Titles: (1) A Universe of Sound: processing NASA data into sonifications to explore participant response & (2) Evaluating the effectiveness of sonification in science education using Edukoi

Authors for Paper 1: Kimberly Kowal Arcand, Jessica Sarah Schonhut-Stasik, Sarah G. Kane, Gwynn Sturdevant, Matt Russo, Megan Watzke, Brian Hsu, and Lisa F. Smith

First Author’s Institution for Paper 1: Department of High Energy Astrophysics, Smithsonian Astrophysical Observatory, Cambridge, MA, United States

Authors for Paper 2: Lucrezia Guiotto Nai Fovino, Anita Zanella, Luca Di Mascolo, Michele Ginolfi, Nicolò Carpita, Francesco Trovato Manuncola, and Massimo Grassi

First Author’s Institution for Paper 2: Department of General Psychology, University of Padova, Padua, Italy

Status: (1) Published in Frontiers [open access] & (2) Published in Springer Link [open access]

How do astronomers share the universe with those who are blind or have partial vision? Today’s bite is a double feature on sonification: the use of non-speech audio (instruments, tones, synthetic sounds, etc.) to represent data! While this may be your first time hearing about sonification, if you’ve ever heard the chime of a clock every hour, the screech of a fire alarm, the chirp of a gravitational wave, or the click of a Geiger counter in physics class, it is certainly not your first time hearing a sonification. Sonifications benefit everyone, not just those who are blind or low-vision (BLV), but finding a way to sonify astronomy data that effectively conveys the interesting science sighted astronomers see can be challenging!

Our first paper today, from Arcand et al., analyzes the response to the sonifications featured in the Chandra X-ray Observatory’s “Universe of Sound”. The Universe of Sound is a NASA-funded project featuring sonifications of astronomy images; it arose during the COVID-19 pandemic as a way to keep sharing astronomy with the BLV community while maintaining physical isolation (e.g., not through a tactile 3D model; also see the Tactile Universe). Researchers surveyed 3,000 users after they listened to three of the sonifications from the Universe of Sound, the Galactic Center (watch below), Cassiopeia A, and the Chandra Deep Field South, to learn more about listeners’ experiences with the project.

The Galactic Center composite (multi-wavelength) sonification from the Universe of Sound. The visual data in this image are sonified from left to right using three instruments. On screen, a vertical intensity spectrum spans from the top to the bottom of the image and sweeps from left to right as the sonification plays; the peaks in this spectrum represent the bright areas of the image contributing the most to the sonification at that moment.
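To make this kind of image-to-sound mapping concrete, here is a minimal Python sketch of the general technique (not the actual Universe of Sound pipeline; the function name and parameter choices are illustrative): sweep across the image column by column, let each bright pixel’s vertical position set a pitch, and let its brightness set the volume.

```python
import numpy as np

def sonify_image(image, duration=10.0, sample_rate=44100,
                 f_min=200.0, f_max=2000.0, loudest_n=5):
    """Toy sonification: sweep a 2D image left to right, mapping each
    bright pixel's row to a pitch and its brightness to a volume.
    Row 0 is the top of the picture, so it gets the highest pitch."""
    n_rows, n_cols = image.shape
    # Log-spaced frequencies, highest at the top of the image.
    freqs = np.logspace(np.log10(f_max), np.log10(f_min), n_rows)
    samples_per_col = int(duration * sample_rate / n_cols)
    t = np.arange(samples_per_col) / sample_rate
    chunks = []
    for col in range(n_cols):
        column = image[:, col]
        # Only the few brightest pixels in this column contribute sound.
        bright_rows = np.argsort(column)[-loudest_n:]
        tone = sum(column[r] * np.sin(2 * np.pi * freqs[r] * t)
                   for r in bright_rows)
        chunks.append(tone)
    audio = np.concatenate(chunks)
    return audio / np.max(np.abs(audio))  # normalize to [-1, 1]

# Example: sonify a random "image" (swap in a real telescope image instead).
waveform = sonify_image(np.random.rand(128, 256))
```

The real pieces do far more, for example assigning different instruments to different wavelength bands and shaping the result musically, but the core idea of translating position and brightness into pitch and volume is the same.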

The survey had overwhelmingly positive results, with both self-identified BLV and sighted participants reporting that they enjoyed the sonification experience! While BLV participants were more likely than sighted participants to report learning from the sonifications, both groups agreed that the added audio component enhanced their experience with the astronomical images. However, when researchers asked for recommendations to improve the sonifications, they found two prominent themes:

  1. Participants often misunderstood the sonifications, specifically how the visual image and the sound were linked. 
  2. Participants had many comments about changing the sonification mapping: the sounds, pitch, instruments, etc.
| Sonification | Length | Sounds | Pieces | Wavelengths sonified | Path across image | Goal |
|---|---|---|---|---|---|---|
| Galactic Center | 1:04 per piece (4:16 total) | Glockenspiel, strings, piano | 3 individual + 1 composite | X-ray (Chandra), optical (Hubble), infrared (Spitzer) | Left to right | Communicate the structures detectable in different wavelength regimes and highlight the high density and activity near the Galactic Center |
| Cassiopeia A | 42 seconds for each of the first five pieces, 21 seconds for the sixth (3:52 total) | Double bass, cello, viola, two violins | 5 individual + 1 composite | X-ray only, tracing several elemental abundances | Radially from the center outward along four paths | Reveal the chemical emission throughout the debris field and highlight the remnant’s shape and structure |
| Chandra Deep Field South | 48 seconds | Synthetic sounds | 1 | X-ray (low, medium, and high energies) | Bottom to top | Demonstrate the wide range of X-ray energies/frequencies and the black hole number density |
Table 1: A summary of the characteristics of each sonification used in the study, including the length, types of sounds (instruments or synthetic), the number of pieces making up the sonification, the wavelength range sonified, how the sounds trace the visual data across the image, and the educational goal of the sonification.

Table 1 describes the sonification mapping for each of the images in more detail. Not only does each sonification feature different sounds, each one also traces the visual image differently. It is important to note that there is no standard for sonification, which can make it confusing for both sighted and BLV listeners to interpret astronomical data across multiple datasets, especially when no training accompanies the sonification to explain how the sounds map to the data. This is where our second paper of the day, from Fovino et al., comes in!

Section 5.1 of Fovino et al. includes an interesting literature review about the success of other projects mapping color to sound. The TLDR (too long, didn’t read) is that it is really difficult to train participants to accurately associate a color with either a specific instrument or pitch in a short amount of time (success rates between 54% and 81%, with training ranging from 10 minutes to 3 hours). This common type of sonification mapping would work best for those who have “perfect pitch”, the ability to identify the pitch chroma (akin to the hue of a color, but for sound) of an isolated note, a rare ability that an estimated 1 in 10,000 people possess (Deutsch 2013).

Noting how difficult it was in past studies for participants to recall a sonification mapping built from different instruments or pitches, Fovino et al. elected to try a different kind of mapping that associates “natural” sounds with colors: the sound of a crackling fire for red, water bubbles for blue, birds and rustling leaves for green, and composites of the base sounds for other colors (mixing the sounds like we mix paint to make new colors!). They tested the association with the natural sounds using a sonification tool they named Edukoi in elementary and middle school classrooms. Edukoi is a derivative of a sonification tool named Herakoi, which uses a machine-learning-based algorithm and a webcam to track where a user’s hands are touching an image and then sonifies the pixel it perceives the person touching.
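As a rough illustration of how a color-to-natural-sound mapping like this could work (a hypothetical sketch of the idea, not Edukoi’s actual code), one could weight three pre-recorded base clips, crackling fire, bubbling water, and birdsong, by how much red, blue, and green is in the pixel under the user’s hand:

```python
import numpy as np

def mix_natural_sounds(pixel_rgb, fire, water, birds):
    """Blend three equal-length base sound clips (1D NumPy arrays) in
    proportion to a pixel's red, blue, and green content, so composite
    colors produce composite sounds (purple = fire + water, etc.)."""
    r, g, b = (float(c) / 255.0 for c in pixel_rgb)
    total = r + g + b
    if total == 0:                      # a black pixel stays silent
        return np.zeros_like(fire)
    mix = (r * fire + b * water + g * birds) / total
    return mix / np.max(np.abs(mix))    # normalize to [-1, 1]

# Stand-in tones for the real recordings, just so the example runs.
t = np.linspace(0, 1, 44100)
fire = np.sin(2 * np.pi * 80 * t)       # low rumble standing in for "fire"
water = np.sin(2 * np.pi * 300 * t)     # mid tone standing in for "water"
birds = np.sin(2 * np.pi * 2000 * t)    # high tone standing in for "birds"

# A purple pixel (red + blue) blends the fire and water sounds equally.
clip = mix_natural_sounds((180, 0, 180), fire, water, birds)
```

In the real tool, Herakoi’s webcam-and-hand-tracking step decides which pixel gets sonified; the snippet above only covers the color-to-sound half of the mapping.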

The image is broken into two rows. The top row shows the “sketches” of galaxies. The first image in the row has a blue, semi-elongated circular core with 4 thick arms, 2 arms reaching obliquely from each side of where the core is elongated. The center image in the top row is a red ellipse. The last image in the top row is similar to the first image and also blue, but the core of the sketched galaxy is circular and there are 6 thinner arms. The bottom row shows real images of galaxies. The first galaxy image is very bright; it has an elliptical core with two predominant arms swirling obliquely from the most elongated parts of the core. The second galaxy image is an ellipse with a dense core that fades towards the edge of the ellipse. The final galaxy image has a dense, bright elliptical core with many arms swirling out from the most elongated parts of the core. The arms become less dense as they curl further away from the center of the galaxy.
Figure 1: Top: A few examples of the galaxy “sketches” whose shape and color students were asked to identify using Edukoi during the second visit to the classroom. Bottom: A few examples of the real galaxy images whose shape and color students were asked to identify using Edukoi during the second visit to the classroom.

The researchers visited classrooms, trained the students on the natural sound-color mapping for 5 minutes, and then asked students to identify the color of obscured geometric shapes using Edukoi. In a second test they asked students to identify not only the color, but also the shape of the obscured picture. Researchers returned to the classrooms two months later with additional tasks to see if students were able to remember the sonification mapping. This time they asked students to identify the shape and color of obscured images of sketched and real galaxies using Edukoi; you can see examples of the sketches and images in Figure 1.

Fovino et al. found that students were able to match the sound to a color close to 100% of the time when one of the base colors was present, and near 75% of the time when composite colors were present in the images. More excitingly, students were able to recall the sonification mapping two months later with similar success rates. However, students struggled to identify galaxy shapes reliably, with the average performance for correct galaxy shapes peaking at 60%. The researchers note that shape identification may improve if a tactile guide, like a haptic response on a tablet, accompanies the sonification!

There has been great progress towards making astronomy accessible for those with visual disabilities, but there is still a long way to go, which is why it is imperative to support and share projects like the Universe of Sound and Edukoi/Herakoi. It’s important to remember that our work as astronomers is not truly “accessible” until everyone can participate in and understand our science, which I think is best highlighted by this quote from our first paper of the day, Arcand et al.:

“The public availability of astronomy data does not necessarily equate to the true accessibility and equity of that data, much as providing a sidewalk in a high-traffic area improves pedestrian safety but remains inherently inaccessible and inequitable without thoughtful design (by cutting the curb).”

If you want to learn more about sonifying your own astronomy data, check out this astro[sound]bite and its accompanying astrobite for some resources. To read more astrobites reporting on sonification, check out this bite recapping the “Audible Universe” conference, this bite on the “Dark Tour of the Universe” workshop, and this bite on creating audio descriptions for astronomy images!

Astrobite edited by Storm Colloms

Featured image credit: NASA/Chandra X-ray Observatory’s Universe of Sound

About Erica Sawczynec

I am a third year graduate student at the University of Texas at Austin working on NIR spectroscopy instrumentation. When I'm not in the lab I manage the archive for IGRINS (RRISA) and use the data products to study molecular hydrogen emission in circumstellar disks. Outside of work you can find me reading sci-fi and fantasy novels, baking bread, hanging out with my cat, or over on Twitter @EricaSawczynec.

