
Created by Tom Schaap and Javier Merrill. 

In the last few decades, much effort has gone into easing the way we explore large datasets (from principal component analysis to UMAP and beyond). These methods typically exploit relationships among the observations in a dataset to produce some sort of visual output that ultimately enables human inspection of those relationships. Data exploration tools aim to guide us towards the solution of a problem, helping us choose data-processing pipelines or even sparking new ideas on how to tackle a data-related problem.

Through a very long evolutionary process, humans have developed not only very sharp visual capabilities but also a very interesting way of perceiving and processing sound. These two senses differ, of course, in the physical phenomenon they evolved to capture (electromagnetic waves vs. mechanical waves), but also in our ability to extract features and perceive change. Hearing is not as popular as visual plots when it comes to data analysis and exploration, probably because of the extra effort required to publish audio compared to print. Yet today audible translations of data are commonplace in science communication from the astronomy community; astronomers point the telescope to the sky, whereas geologists tend to point the microscope at the ground instead. See here for an example of astronomical data being ‘sonified’. So why can’t we do the same kind of thing in the geosciences?

Anyone who has worked with hyperspectral datasets will empathise with the task of familiarising themselves with a new, large dataset. Wiggly lines are easy to plot and look at, but perceiving subtle changes that can occur at several wavelengths at the same time is a difficult and slow process, mainly because we can only really focus our eyes on one part of the spectrum at a time as we scroll through different measurements. What if we could attend to multiple wavelengths at the same time while navigating a spectral dataset?

And so the idea came: let’s convert the spectrum into sound, then close your eyes and navigate through the realms of the dataset, just like someone with a metal detector on the beach at sunset.

Methods

Dataset

We used the publicly available data at the National Virtual Core Library (NVCL) via nvcl-kit, aided by the CorStruth website, to find an interesting core with enough changes to produce a triple J Hottest 100-worthy tune. We chose to analyse data from the 84427_KI062_Dolphin core from the Dolphin tungsten mine on King Island because it appeared to exhibit an interesting variety of mineral domains. It was also a manageable length at only ~130 m.
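For the curious, here is a minimal sketch of how the spectral logs might be pulled down with nvcl-kit. It follows nvcl-kit’s documented NVCLReader interface, but the service URLs and the attribute names are assumptions on our part; check the package docs before running.

```python
from types import SimpleNamespace
from nvcl_kit.reader import NVCLReader

# Point the reader at the Tasmanian NVCL service, which hosts the
# King Island holes (URLs are assumptions; see the nvcl-kit docs).
param = SimpleNamespace()
param.WFS_URL = "https://www.mrt.tas.gov.au/web-services/wfs"
param.NVCL_URL = "https://www.mrt.tas.gov.au/NVCLDataServices"

reader = NVCLReader(param)

# Walk the available holes and grab the spectral logs for ours
for nvcl_id in reader.get_nvcl_id_list():
    if "KI062" in nvcl_id:
        spectral_logs = reader.get_spectrallog_data(nvcl_id)
        print(nvcl_id, [log.log_name for log in spectral_logs])
```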

Figure 1: Hull-removed reflectivity for each wavelength throughout the 84427_KI062_Dolphin drillhole. Hotter colours indicate a deeper absorption feature.

Spectra to sound transform

The basic principle of our spectrum-to-sound conversion was to translate the analysed electromagnetic wavelengths into frequencies within the audible range. However, simply scaling every wavelength by a constant factor resulted in a noisy output and an unpleasant listening experience. Instead, our workflow consisted of the following:

  • Crop the EM spectrum to the most “interesting” wavelengths: In our case, most of the variation relevant to mineralogical features was present in the 1800-2500 nm range. Reducing our analysis to this portion helps to focus the ear on the important subtleties.
  • Extract peak wavelengths from each spectrum: Many of the absorption features in these data can be broad, and translating absolutely every consecutive wavelength measurement into a sound frequency is a recipe for extreme aural dissonance. Instead, for each sample, we take only the most prominent peaks and translate those to sound. This results in a unique set of peak wavelengths and associated reflectance values for each sample.
  • Map the peaks to a harmonic set: It would be easy to simply convert the peak wavelengths to audible frequencies through some multiplication factor, but instead we chose to map the most common peaks to a set of defined notes as used in music. The reasons are that a) it is nicer to listen to, and b) when notes are in tune, it is easier to pick up when they start to deviate and become flat or sharp. In our case, we took the 8 most common peak wavelengths and mapped them to a series of perfect fourths; all frequencies between the peaks were linearly interpolated.

Figure 2: Example sample spectrum with relative locations of assigned notes. Note that not all peaks in this spectrum align with the notes, since the notes were picked based on the most common peaks across all samples.

  • Construct the soundwaves: Finally, the absorption feature at each peak was translated into a sine wave with the corresponding audible frequency and a magnitude equal to the inverse of the feature’s reflectance (a greater absorption feature is thus louder). For each sample, the sine waves were simply summed to generate a harmonised sound for that sample. A sketch of the whole transform follows below.
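To make the workflow concrete, here is a minimal sketch of the transform in Python with NumPy and SciPy. The peak-prominence threshold, the 220 Hz root note, and the variable names are illustrative assumptions, not the exact parameters we used:

```python
import numpy as np
from scipy.signal import find_peaks

SAMPLE_RATE = 44100   # audio samples per second
CLIP_SECONDS = 0.1    # 100 ms of audio per core sample

def harmonic_set(n_notes=8, root_hz=220.0):
    """A series of perfect fourths (frequency ratio 4/3) above a root note."""
    return root_hz * (4.0 / 3.0) ** np.arange(n_notes)

def common_peak_wavelengths(spectra, wavelengths, n_peaks=8):
    """Find the n most common absorption peaks across all samples.
    `spectra` is hull-removed reflectance, one row per core sample,
    so peaks in (1 - reflectance) are absorption features."""
    counts = np.zeros(len(wavelengths))
    for refl in spectra:
        idx, _ = find_peaks(1.0 - refl, prominence=0.01)  # threshold is an assumption
        counts[idx] += 1
    return np.sort(wavelengths[np.argsort(counts)[-n_peaks:]])

def sample_to_sound(refl, wavelengths, common_wls, notes_hz):
    """Turn one sample's spectrum into 100 ms of summed sine waves."""
    t = np.linspace(0.0, CLIP_SECONDS, int(SAMPLE_RATE * CLIP_SECONDS), endpoint=False)
    idx, _ = find_peaks(1.0 - refl, prominence=0.01)
    if len(idx) == 0:
        return np.zeros_like(t)  # featureless sample -> silence
    # The most common peaks land exactly on the harmonic set; wavelengths
    # in between are linearly interpolated, so a drifting peak sounds
    # flat or sharp relative to its note.
    freqs = np.interp(wavelengths[idx], common_wls, notes_hz)
    depths = 1.0 - refl[idx]  # deeper absorption -> louder
    wave = sum(d * np.sin(2.0 * np.pi * f * t) for f, d in zip(freqs, depths))
    return wave / np.abs(wave).max()  # normalise to avoid clipping
```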

Results

The following video presents the results of our hyperspectral-to-audio conversion in 10 cm samples down the hole, where each sample is represented by 100 milliseconds of audio. You can also watch the accompanying spectra and mineral counts and see if you can associate what you are seeing with what you are hearing. We recommend slowing down the playback speed to really immerse yourself.
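Assembling the final track from the per-sample clips is then just concatenation. Continuing the earlier sketch (the file name and the 16-bit scaling are our choices), it might look like this:

```python
from scipy.io import wavfile
import numpy as np

# One 100 ms clip per 10 cm sample, concatenated from the top of the
# hole down, then written out as a 16-bit WAV file.
notes_hz = harmonic_set()
common_wls = common_peak_wavelengths(spectra, wavelengths)
clips = [sample_to_sound(refl, wavelengths, common_wls, notes_hz) for refl in spectra]
track = np.concatenate(clips)
wavfile.write("dolphin_ki062.wav", SAMPLE_RATE, (track * 32767).astype(np.int16))
```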

What can you hear? Many of the variations present in this output are quite subtle, but there are also some abrupt changes which even the untrained ear should pick up on.

  • Carbonate zone at ~112 m (1:37): listen to how the bass end of the audio dies out, because carbonates lack features in the long-wavelength part of the spectrum.
  • Throughout the hole, you will notice intermittent “blips”, or nuggety sections where there is a significant change in the spectra over a short interval. Some of these blips appear to correspond with spectra that look characteristic of gypsum.
  • At 16.5-18 m (0:06): a high-pitched note goes flat, suggesting a change in the chemical composition of a mineral; specifically, this feature corresponds to a vibrational overtone of the O-H bond in water.
  • At ~35 m (0:24): changes in the mid range can be heard in a kaolin/mica zone, creating a chaotic harmony for a couple of seconds.

Conclusions

We produced an audio signal that represents the mineralogical variability of a drill-core hyperspectral dataset. It is possible to recognise complex patterns and transitions between geological zones, as well as the mineral heterogeneity of the rocks analysed, as you progress down the hole. A large area for improvement is the transform function that converts light wavelengths into sound frequencies, particularly by giving it a more thorough musical sense: for example, associating minerals with chords in the same key when they belong to geologically meaningful assemblages, creating harmonic transitions similar to those in music when passing through geological units. Mapping the sounds to MIDI instruments (digital impersonations of real instruments) would also make the output sound more like a real band, rather than just a set of stacked sine waves.

This type of approach has been well used in science communication by astronomers (transforming the cosmic background into sound), biologists (making whale song audible) and others, capturing the interest of people around the world in topics that may be hard for the general population to understand or access, especially for people with special needs such as a visual impairment. In a society where activities such as mining are heavily questioned, while an understanding of their vital role in everyday life seems to be missing, narrowing the gap between the public and a better understanding of the geosciences and their role in our everyday lives is a matter of duty.