This repository contains the LabVIEW source code for the MethodsX paper, "Automation of binaural headphone audio calibration on an artificial head". (2020-11-03)
This repository contains the SINGA:PURA dataset, a strongly-labelled polyphonic urban sound dataset with spatiotemporal context. The data were collected via a number of recording units deployed across Singapore as part of a wireless acoustic sensor network. These recordings were made as part of a project to identify and mitigate noise sources in Singapore, but also possess a wider applicability to sound event detection, classification, and localization. The taxonomy we used for the labels in this dataset has been designed to be compatible with other existing datasets for urban sound tagging while also being able to capture sound events unique to the Singaporean context. Please refer to our conference paper published in APSIPA 2021 (found in this repository as the file "APSIPA.pdf") or download the readme ("Readme.md") for more details regarding the data collection, annotation, and processing methodologies for the creation of the dataset.
This dataset contains the log-mel spectrograms for the augmented soundscapes described in our ICASSP 2022 submission "Probably Pleasant? A Neural-Probabilistic Approach to Automatic Masker Selection for Urban Soundscape Augmentation", in .npy format. The data can be accessed with the numpy.load function from Python's numpy package.
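A minimal sketch of reading one of the .npy files with numpy.load, as described above. The filename and the 64-band, 100-frame shape below are placeholders (a dummy file is saved first so the snippet is self-contained); the actual file names and spectrogram dimensions depend on the dataset's extraction settings.

```python
import numpy as np

# Create a dummy log-mel spectrogram (64 mel bands x 100 frames) to stand in
# for one of the dataset's .npy files; the values here are placeholders.
dummy = np.random.randn(64, 100).astype(np.float32)
np.save("example_spectrogram.npy", dummy)

# Each file in the dataset can be read back the same way with numpy.load:
spectrogram = np.load("example_spectrogram.npy")
print(spectrogram.shape)  # (n_mels, n_frames)
```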
This dataset contains survey data collected from November 2020 to January 2021 to assess the perceived indoor acoustic environment quality among building occupants in a tertiary-care public hospital in Singapore.
This dataset contains survey data collected from October to November 2021.
This dataset contains the data used for all statistical analysis in our publication "Singapore Soundscape Site Selection Survey (S5): Identification of Characteristic Soundscapes of Singapore via Weighted k-means Clustering", summarised in a single .csv file.
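To illustrate the weighted k-means clustering named in the paper's title, here is a toy plain-numpy implementation in which each centroid is the weight-weighted mean of its assigned points. The synthetic points and weights are placeholders; the actual analysis runs on the survey responses in the dataset's .csv file.

```python
import numpy as np

def weighted_kmeans(X, w, k, iters=50, seed=0):
    """Toy weighted k-means: Lloyd's algorithm where each centroid update
    is the weighted mean of the points assigned to it."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update each centroid as the weight-weighted mean of its points.
        for j in range(k):
            mask = labels == j
            if mask.any():
                centers[j] = np.average(X[mask], axis=0, weights=w[mask])
    return centers, labels

# Synthetic 2-D data in three clumps, with random per-point weights.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc, 0.3, size=(20, 2)) for loc in (-2.0, 0.0, 2.0)])
w = rng.uniform(0.5, 2.0, size=len(X))
centers, labels = weighted_kmeans(X, w, k=3)
print(centers.shape)  # (3, 2)
```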
Participant responses (2020-07-30)
An audio plugin for Unity, based on the Native Audio Plugin SDK and the SOFA API_CPP, that allows users to load HRTF data from public SOFA databases and apply it to Unity audio sources.
This tutorial aims to equip participants with basic and advanced signal processing techniques that can be used in VR/AR applications to create a natural and augmented listening experience using headsets.
More details are available on SigPort. This zip contains documentation and an HRTF database. Range-dependent HRTFs in the horizontal plane were measured in the anechoic chamber at the DSP lab at NTU Singapore. The database consists of HRTFs for 600 positions, covering 75 azimuthal directions at each of 8 distances in the near field. All measurements were carried out on a HATS dummy head.
Environmental noise (also known as noise pollution) is a prevalent feature of any urban soundscape. Of the numerous environmental noise sources (e.g., aircraft, road traffic, railways, industries, and construction), the World Health Organization (WHO) has identified road traffic noise as one of the main contributors to urban noise pollution.
With the strong growth of assistive and personal listening devices, natural sound rendering over headphones is becoming a necessity for prolonged listening in multimedia and virtual reality applications. The aim of natural sound rendering is to recreate sound scenes with spatial and timbral quality as close to natural as possible, so as to achieve a truly immersive listening experience. However, rendering natural sound over headphones encounters many challenges. This tutorial article presents signal processing techniques to tackle these challenges to assist human listening.
We are all used to perceiving sound in a three-dimensional (3-D) world. In order to reproduce real-world sound in an enclosed room or theater, how spatial sound can be created has been an active research topic for decades. Spatial audio is an illusion of creating sound objects that can be spatially positioned in a 3-D space by passing original sound tracks through a sound-rendering system and reproducing them through multiple transducers distributed around the listening space. The reproduced sound field aims to achieve a perception of spaciousness and a sense of directivity of the sound objects. Ideally, such a sound reproduction system should give listeners a sense of an immersive 3-D sound experience. Spatial audio can primarily be divided into three types of sound reproduction techniques, namely, loudspeaker stereophony, binaural technology, and reconstruction using synthesis of the natural wave field [which includes Ambisonics and wave field synthesis (WFS)], as shown in Fig. 1(a).
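As a toy illustration of the binaural technique in the taxonomy above, the sketch below spatializes a mono source by convolving it with a left/right pair of head-related impulse responses (HRIRs). The two single-tap HRIRs are synthetic placeholders modeling only a crude interaural time and level difference, not measured data such as that in the NTU database.

```python
import numpy as np

fs = 16000
t = np.arange(fs) / fs
mono = np.sin(2 * np.pi * 440 * t)  # 1 s, 440 Hz mono source signal

# Placeholder HRIRs: the nearer (left) ear hears the source louder and
# earlier; the farther (right) ear quieter and delayed by a few samples.
itd_samples = 10
hrir_left = np.zeros(32)
hrir_left[0] = 1.0
hrir_right = np.zeros(32)
hrir_right[itd_samples] = 0.6

# Binaural rendering: one convolution per ear, stacked as a 2-channel
# signal intended for headphone playback.
left = np.convolve(mono, hrir_left)
right = np.convolve(mono, hrir_right)
binaural = np.stack([left, right])
print(binaural.shape)  # (2, fs + 31)
```

In practice the HRIRs would be measured filters (e.g., loaded from a SOFA file) selected for the desired source direction.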
In this review paper, we examine some of the recent advances in the parametric acoustic array (PAA) since it was first applied in air in 1983 by Yoneyama. These advances include numerical modelling for nonlinear acoustics, theoretical analysis and experimentation, signal processing techniques, implementation issues, applications of the parametric acoustic array, and some safety concerns in using the PAA in air. We also give a glimpse of some of the new work on the PAA and its new applications. This review paper gives a tutorial overview of some of the foundational work on the PAA, and serves as a prelude to the recent works reported by different research groups in this special issue.
The parametric loudspeaker provides an effective means of projecting sound in a highly directional manner without using large loudspeaker arrays to form sharp directional beams. It can be augmented with conventional loudspeakers to create a more immersive audio soundscape. Parametric loudspeakers can be deployed in many public places, where private messaging can make a difference in attracting attention, conveying messages without the need for headphones, and creating private listening zones to reduce noise pollution. Digital signal processing plays a significant role in enhancing the aural quality of parametric loudspeakers, and array processing can help to shape and steer the beam electronically. In addition, other signal processing techniques can be applied to add more flexibility and improve the performance of parametric loudspeakers. These developments rely heavily on the latest techniques in acoustics and audio signal processing to overcome some of the current limitations in nonlinear acoustics modeling and ultrasonic transducer technology. A useful feature in sound projection is to realize a high-accuracy digital beamsteering capability in air using an array of parametric loudspeakers. An in-depth study of the theoretical model of the wave steering capability of the parametric array in air can provide hints on how best to steer the demodulated signal efficiently. As seen from this article, digital signal processing provides the main engine to achieve directional sound projection, and new digital processing techniques will be devised to provide better quality, controllable audio beaming, and efficient sound focusing devices in the future.
The problem of acoustic noise is becoming increasingly serious with the growing use of industrial and medical equipment, appliances, and consumer electronics. Active noise control (ANC), based on the principle of superposition, was developed in the early 20th century to help reduce noise. However, ANC is still not widely used, owing to limitations in the effectiveness of control algorithms and to the physical and economic constraints of practical applications. In this paper, we briefly introduce some fundamental ANC algorithms and theoretical analyses, and focus on recent advances in signal processing algorithms, implementation techniques, challenges for innovative applications, and open issues for further research and development of ANC systems.
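One of the fundamental ANC algorithms mentioned above is filtered-x LMS (FxLMS). The toy single-channel simulation below shows the superposition principle at work: an adaptive filter generates anti-noise that cancels a tonal disturbance at the error sensor. The two-tap primary and secondary paths, tone frequency, and step size are illustrative placeholders, and the secondary path is assumed perfectly known.

```python
import numpy as np

n = 4000
fs = 8000
x = np.sin(2 * np.pi * 200 * np.arange(n) / fs)  # reference noise (200 Hz tone)

P = np.array([0.9, 0.5])  # primary path: noise source -> error mic (placeholder)
S = np.array([0.8, 0.3])  # secondary path: loudspeaker -> error mic (assumed known)

d = np.convolve(x, P)[:n]   # disturbance arriving at the error microphone
xf = np.convolve(x, S)[:n]  # "filtered-x": reference passed through the secondary path

L = 16                      # adaptive filter length
w = np.zeros(L)
mu = 0.01                   # LMS step size
xbuf = np.zeros(L)          # recent reference samples
fxbuf = np.zeros(L)         # recent filtered-reference samples
ybuf = np.zeros(len(S))     # recent anti-noise samples (for the secondary path)
e = np.zeros(n)             # residual at the error microphone

for i in range(n):
    xbuf = np.roll(xbuf, 1)
    xbuf[0] = x[i]
    y = w @ xbuf                       # anti-noise output of the adaptive filter
    ybuf = np.roll(ybuf, 1)
    ybuf[0] = y
    e[i] = d[i] + ybuf @ S             # superposition of noise and anti-noise
    fxbuf = np.roll(fxbuf, 1)
    fxbuf[0] = xf[i]
    w -= mu * e[i] * fxbuf             # FxLMS weight update

# The residual should shrink as the filter converges.
print(np.mean(np.abs(e[:500])), np.mean(np.abs(e[-500:])))
```

For a pure tone and a known secondary path, the residual decays toward zero; real ANC systems must additionally estimate the secondary path and cope with broadband, nonstationary noise.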