Virtual Acoustic Scene Acquisition by Smart Phones

At the German Annual Conference on Acoustics (DAGA) in Nuremberg, ICoSOLE presented the paper Virtual Acoustic Scene Acquisition by Smart Phones by J.-M. Batke and J. Schmidt (both Technicolor R&I, Hannover).

The paper first outlined the European research project ICoSOLE, whose focus is the generation of user generated content and its aggregation into the professional workflow. After elaborating on the differences between professional acquisition techniques and the limited recording possibilities of consumer devices (smart phones, tablets, digital cameras etc.), the paper concentrated on capturing immersive sound scenes with distributed acoustic sensors, such as smart phones at different locations. Thanks to the ICoSOLE concept, these locations are known and tracked, and each of them represents one acoustic object; combined together, the objects form a three-dimensional scene. Such sound scenes can be generated and coded either as sound channels or as Higher Order Ambisonics (HOA) signals. Both schemes have recently been specified in the MPEG-H standardization process.
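To illustrate the idea of combining point-like acoustic objects at known positions into one spatial scene, the following is a minimal sketch of encoding mono recordings into first-order Ambisonics and summing them. The encoding gains follow the standard ACN/SN3D convention; the source directions, signals, and the function name `encode_foa` are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def encode_foa(mono, azimuth, elevation):
    """Encode a mono signal as a first-order Ambisonics (ACN/SN3D) signal.

    azimuth/elevation are in radians; returns an array of shape (4, N)
    holding the channels W, Y, Z, X in ACN order.
    """
    gains = np.array([
        1.0,                                  # W (ACN 0): omnidirectional
        np.sin(azimuth) * np.cos(elevation),  # Y (ACN 1)
        np.sin(elevation),                    # Z (ACN 2)
        np.cos(azimuth) * np.cos(elevation),  # X (ACN 3)
    ])
    return gains[:, None] * np.asarray(mono)[None, :]

# Two hypothetical smartphone recordings, each treated as one acoustic
# object at a known direction, summed into a single HOA scene.
fs = 48000
t = np.arange(fs) / fs
source_a = np.sin(2 * np.pi * 440 * t)  # e.g. a stage to the left
source_b = np.sin(2 * np.pi * 220 * t)  # e.g. crowd noise in front
scene = encode_foa(source_a, np.pi / 2, 0.0) + encode_foa(source_b, 0.0, 0.0)
print(scene.shape)  # (4, 48000)
```

Because Ambisonics is linear, any number of such objects can be added into the same four channels (or more, for higher orders), which is what makes the format convenient for aggregating distributed recordings.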
Mobile user devices involved in sound acquisition

The main scope of the paper, however, was to present first results of user generated recordings, acquired on the occasion of the "Fête de la Musique 2014" in Hannover. Several recording devices, i.e. smart phones, portable audio recorders and digital cameras, were used to capture the sound in a pedestrian area in close proximity to live music stages (see Figure). Mainly voice recordings were made in order to test speech intelligibility after appropriate signal processing. It could be shown that speech intelligibility improved when the enhanced signal was complemented with ambient sound, which also led to a better spatial perception.
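The complementing step can be pictured as mixing an ambience bed below the processed speech signal. The sketch below shows one simple way to do this; the mixing gain, signal names, and the function `mix_with_ambience` are illustrative assumptions, since the paper's actual processing details are not given here.

```python
import numpy as np

def mix_with_ambience(speech, ambience, ambience_db=-12.0):
    """Mix an ambience recording below a speech signal.

    ambience_db sets the ambience level relative to the speech;
    -12 dB is an arbitrary illustrative choice, not from the paper.
    """
    speech = np.asarray(speech, dtype=float)
    ambience = np.asarray(ambience, dtype=float)
    n = min(len(speech), len(ambience))          # align lengths
    gain = 10.0 ** (ambience_db / 20.0)          # dB to linear gain
    return speech[:n] + gain * ambience[:n]

# Usage with placeholder signals standing in for real recordings.
rng = np.random.default_rng(0)
speech = rng.standard_normal(48000)
ambience = rng.standard_normal(48000)
mixed = mix_with_ambience(speech, ambience)
```

The intuition matching the paper's finding is that a plausible acoustic background helps the listener place the voice in a scene, improving both spatial impression and, with suitable levels, intelligibility.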