Machine Listening, or AI for Sound, is the field of Artificial Intelligence concerned with the analysis, understanding and synthesis of audio by machines.
Access to ever-increasing super-computing facilities, combined with the availability of huge (though largely unannotated) data repositories, has driven a significant trend towards purely data-driven machine learning. The field has rapidly moved to end-to-end neural approaches that solve the learning problem directly on raw acoustic signals, but these often take only loose account of the nature and structure of the data being processed.
The main consequences are that the resulting models are overly complex, data- and compute-hungry, difficult to interpret, and largely blind to domain knowledge about the signals they process.
The aim of Hi-Audio is to build hybrid deep approaches that combine parameter-efficient, interpretable signal models and musicological and physics-based models with highly tailored deep neural architectures.
The research directions pursued in Hi-Audio will exploit novel deterministic and statistical audio and sound-environment models, together with dedicated neural auto-encoders and generative networks, and will target specific applications including speech and audio scene analysis, music information retrieval, and sound transformation and synthesis.
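To make the hybrid idea concrete, the toy sketch below illustrates one common pattern (it is purely illustrative and not Hi-Audio code): instead of generating raw samples end-to-end, a learnable mapping predicts the parameters of an interpretable signal model, here a sum of harmonics. The `tiny_encoder`, its weights, and the fixed 1/k amplitude roll-off are all hypothetical stand-ins for a trained neural encoder.

```python
import math

def harmonic_synth(f0, amps, sr=16000, dur=0.01):
    """Interpretable signal model: a sum of harmonics of f0 with given amplitudes."""
    n = int(sr * dur)
    return [
        sum(a * math.sin(2 * math.pi * (k + 1) * f0 * t / sr)
            for k, a in enumerate(amps))
        for t in range(n)
    ]

def tiny_encoder(features, weights, bias):
    """Stand-in for a neural encoder: maps input features to signal-model parameters."""
    # A trained network would produce these; here a linear map suffices to show the idea.
    f0 = max(20.0, sum(w * x for w, x in zip(weights, features)) + bias)
    amps = [1.0 / (k + 1) for k in range(4)]  # hypothetical fixed 1/k harmonic roll-off
    return f0, amps

features = [0.5, 1.0]                       # e.g. acoustic descriptors of an input frame
f0, amps = tiny_encoder(features, weights=[100.0, 200.0], bias=30.0)
signal = harmonic_synth(f0, amps)           # 10 ms of audio at 16 kHz
print(round(f0, 1), len(signal))            # → 280.0 160
```

Because the decoder is an explicit signal model, its parameters (fundamental frequency, harmonic amplitudes) remain physically meaningful and the overall model needs far fewer learned weights than a network that must output every sample.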