
Audio AR: Geolocated Sound

An experiment in geolocated audio - and what we discovered.

Published: 13 March 2019

In our first post on audio-led AR, R&D’s Henry Cooke discussed why we’re interested in the technology. After going on a soundwalk, we decided that an interesting next step would be to port our existing experiment in geolocated audio to our Bose Frames devkit. Here, the developer behind the port – until recently a Software Engineer at BBC R&D – describes it and what he learned by making it.

Alluvial Sharawadji is a “crowdsourced soundwalk” work made for an event in Catalonia in summer 2018. For the piece, Tim and I built a mobile-friendly web app allowing participants to record sounds, which were saved along with the participant’s current geolocation. We then used the Web Audio API to present a “virtual soundwalk” around the town.

The Sharawadji soundwalks are represented as remotely stored sound files with associated latitude and longitude coordinates. The coordinate mapping and the sound files are downloaded when the soundwalk starts.
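The mapping might look something like the sketch below – the URLs and field names here are illustrative, not the project’s actual schema:

```javascript
// Hypothetical shape of the downloaded mapping: each entry pairs a remotely
// stored sound file with the latitude/longitude where it was recorded.
const soundwalk = [
  { url: "https://example.org/sounds/traffic.mp3", lat: 51.5101, lng: -0.2268 },
  { url: "https://example.org/sounds/market.mp3", lat: 51.5112, lng: -0.2249 },
];

// When the walk starts, every file would be fetched and decoded for playback;
// here we just list the URLs that need downloading.
function filesToDownload(entries) {
  return entries.map((e) => e.url);
}
```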

For this rebuild, we use an iPhone’s GPS to position sounds around the listener, and calculate the volume level of each sound based on its distance from the listener. We then use the Resonance Audio (Google VR) library to render the sounds spatially in real time, giving us a dynamic soundscape that changes in response to the listener’s position in the real world.
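One way to position a sound relative to the listener is to project the coordinate differences into local metres. A minimal sketch using a flat-earth approximation (accurate enough over a few hundred metres) – the helper name and axis convention are our own, not the project’s code:

```javascript
// Convert a sound's lat/lng into metres relative to the listener,
// using an equirectangular approximation: x east, z north.
const M_PER_DEG_LAT = 111320; // metres per degree of latitude

function relativePosition(listener, sound) {
  // One degree of longitude shrinks with the cosine of the latitude.
  const mPerDegLng = M_PER_DEG_LAT * Math.cos((listener.lat * Math.PI) / 180);
  return {
    x: (sound.lng - listener.lng) * mPerDegLng,
    z: (sound.lat - listener.lat) * M_PER_DEG_LAT,
  };
}
```

These local x/z offsets are the kind of values a spatialiser such as Resonance Audio expects, rather than raw latitude and longitude.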

The first soundwalk we tested was composed of sounds recorded by various members of our team with their phones around our office in White City, London. Most of these sounds were fairly quiet, ambient recordings with lots of traffic sounds, construction noises and passers-by. At this point, we were using a “rolloff” mechanism from the Resonance Audio library – “rolloff” being the gradual attenuation of sound volume as the listener moves away from it.

We found that ambient sounds, especially those recorded in the environment where they’re being played back, tend to be drowned out by real-world noise, and so don’t provide enough immersion. Furthermore, the built-in rolloff mechanism in Resonance Audio proved unsuitable for our coordinate system: one degree of longitude in London corresponds to roughly 69 kilometres (about 43 miles), so the coordinate values change so little as the listener walks that there was no perceptible rolloff effect at all.
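The scale mismatch is easy to see with a great-circle distance calculation. A sketch of the standard haversine formula (not the project’s actual code), which converts coordinate differences into metres:

```javascript
// Great-circle distance in metres between two lat/lng points (haversine),
// assuming a spherical Earth with mean radius 6,371 km.
function distanceMetres(lat1, lng1, lat2, lng2) {
  const R = 6371000; // mean Earth radius in metres
  const toRad = (d) => (d * Math.PI) / 180;
  const dLat = toRad(lat2 - lat1);
  const dLng = toRad(lng2 - lng1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLng / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(a));
}
```

At London’s latitude this gives roughly 69 km for a one-degree change in longitude, which is why a rolloff curve driven by raw coordinate deltas barely moves.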

For the second test, we “placed” three pieces of music at approximately 150-metre intervals along Wood Lane. We also ported the inverse-square-law volume attenuation function from the Sharawadji web app into our demo, and switched off the Resonance Audio rolloff. We found the custom attenuation worked very well, resulting in well-localised sounds – at about 5–10 m from the sound location, we could hear a faint hint of the music, and by following the direction it seemed to come from, we arrived at the spot where it played the loudest.
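An inverse-square-law attenuation of this kind might be sketched as follows – a minimal illustration of the principle, not the actual Sharawadji function, with a hypothetical reference distance inside which the sound plays at full volume:

```javascript
// Inverse-square-law gain: full volume within refDistance metres of the
// sound, then falling off with the square of the distance, clamped to [0, 1].
function inverseSquareGain(distance, refDistance = 1) {
  if (distance <= refDistance) return 1;
  return Math.min(1, (refDistance / distance) ** 2);
}
```

With a 1 m reference distance, the gain drops to 0.25 at 2 m and to about 0.01 at 10 m – a much steeper, more localised falloff than a curve driven by raw coordinate values, which matches the faint-at-10-m behaviour we observed.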

We are very excited about the possibilities of audio AR technology, and we’re keen to run more experiments with it. From our tests so far it seems that with the right sound material and adjustments to the placement and attenuation of sounds, it could prove an interesting platform for new localised sound experiences.

  • Internet Research and Future Services section

    The Internet Research and Future Services section is an interdisciplinary team of researchers, technologists, designers, and data scientists who carry out original research to solve problems for the BBC. Our work focuses on the intersection of audience needs and public service values, with digital media and machine learning. We develop research insights, prototypes and systems using experimental approaches and emerging technologies.
