ABSTRACT: Many virtual reality applications have recently been enhanced with real-time 3-D virtual sound. In most cases, this is achieved by implementing sound spatialization through binaural audio generated with the so-called Head-Related Transfer Functions (HRTFs). Sound traveling from a source to the ear is modified by reflections and diffraction caused by the listener's torso, shoulders, head, and pinnae; the HRTFs capture these modifications. Together, the left and right HRTFs contain all the information the listener needs to localize the sound source. Once acquired, a pair of left and right HRTFs can be used to process a monaural sound into a binaural sound that gives the listener the illusion that the sound originates from a virtual location.
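The binaural rendering step described above amounts to convolving the mono signal with the left and right head-related impulse responses (HRIRs, the time-domain counterparts of the HRTFs). A minimal sketch of that operation follows; the function name `binauralize` and the random placeholder HRIRs are illustrative only, since real HRIRs are measured responses.

```python
import numpy as np

def binauralize(mono, hrir_left, hrir_right):
    """Convolve a mono signal with a left/right HRIR pair to
    produce a two-channel binaural signal (illustrative sketch)."""
    left = np.convolve(mono, hrir_left)    # full convolution, len N + M - 1
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right], axis=-1)

# Toy example with placeholder HRIRs (real HRIRs come from measurements).
rng = np.random.default_rng(0)
mono = rng.standard_normal(1024)
hrir_l = rng.standard_normal(128)
hrir_r = rng.standard_normal(128)
stereo = binauralize(mono, hrir_l, hrir_r)
print(stereo.shape)  # (1151, 2): 1024 + 128 - 1 samples, two channels
```

In practice the convolution is usually done block-wise in the frequency domain for real-time use, but the result is the same stereo signal sketched here.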
A well-known shortcoming of HRTF sound spatialization is the presence of a "cone of confusion": due to symmetry, subjects tend to confuse sounds virtually placed in the front hemisphere with sounds placed symmetrically in the back hemisphere. Observations from 20 subjects suggested that HRTFs from subjects with particularly protruding ears show accentuated spectral differences between symmetric locations, and that these HRTFs helped resolve the cone of confusion when used by subjects with average or small pinna protrusions. For example, a subject with protruding ears showed a larger difference between the HRTFs from azimuths 30° and 150° than a subject with a smaller or average protrusion angle. This paper describes how to synthesize attenuation/amplification profiles based on those seen in HRTFs from protruding ears, and how these profiles are applied to modify the HRTFs of several subjects.
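The two operations the abstract mentions can be sketched in a few lines: measuring the per-frequency magnitude difference between HRTFs at front/back-symmetric azimuths (e.g. 30° vs. 150°), and imposing such a difference on another subject's HRTF as a gain profile. This is a hedged illustration, not the paper's actual synthesis procedure; the function names and the impulse-based demo data are assumptions for the sake of a runnable example.

```python
import numpy as np

def spectral_difference_db(hrir_front, hrir_back, n_fft=512):
    """Per-frequency magnitude difference (dB) between the HRTFs of a
    front/back-symmetric azimuth pair, e.g. 30 and 150 degrees."""
    H_f = np.abs(np.fft.rfft(hrir_front, n_fft))
    H_b = np.abs(np.fft.rfft(hrir_back, n_fft))
    eps = 1e-12  # guard against log of zero
    return 20.0 * np.log10((H_f + eps) / (H_b + eps))

def apply_gain_profile(hrir, gain_db, n_fft=512):
    """Impose a per-frequency gain profile (dB) on an HRIR's magnitude
    spectrum while keeping its original phase."""
    H = np.fft.rfft(hrir, n_fft)
    H_mod = H * 10.0 ** (gain_db / 20.0)
    return np.fft.irfft(H_mod, n_fft)

# Toy demo: a unit impulse vs. a half-amplitude impulse differ by a
# flat ~6.02 dB across all frequencies.
front = np.zeros(128); front[0] = 1.0
back = np.zeros(128); back[0] = 0.5
diff = spectral_difference_db(front, back)

# Applying the measured difference to the "back" response recovers
# the "front" response in this trivial case.
equalized = apply_gain_profile(back, diff)
```

A profile measured from a protruding-ear subject would, in the same way, be applied multiplicatively to the magnitude spectra of another subject's HRTFs to accentuate front/back spectral contrast.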