Machine Listening, or AI for Sound, is the field of Artificial Intelligence concerned with the analysis, understanding and synthesis of audio by a machine.

Access to ever-increasing supercomputing facilities, combined with the availability of huge (though largely unannotated) data repositories, has led to the emergence of a significant trend towards purely data-driven machine learning. The field has rapidly moved towards end-to-end neural approaches that aim to solve the machine learning problem directly from raw acoustic signals, but often take only loose account of the nature and structure of the processed data.
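To make this paradigm concrete, the sketch below shows a typical end-to-end classifier that maps raw waveforms straight to class scores, learning its own front-end rather than relying on a signal-processing prior. It is written in PyTorch; the architecture, layer sizes and 16 kHz sampling rate are illustrative assumptions, not a model from Hi-Audio:

```python
import torch
import torch.nn as nn

class RawWaveformClassifier(nn.Module):
    """End-to-end model: raw audio samples in, class scores out.
    Every layer is learned from data; no spectral front-end or
    other signal-processing prior is built in."""

    def __init__(self, n_classes: int = 10):
        super().__init__()
        self.encoder = nn.Sequential(
            # Strided 1-D convolutions learn their own "filterbank"
            nn.Conv1d(1, 64, kernel_size=400, stride=160), nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv1d(128, 256, kernel_size=8, stride=4), nn.ReLU(),
        )
        self.head = nn.Linear(256, n_classes)

    def forward(self, wav: torch.Tensor) -> torch.Tensor:
        # wav: (batch, n_samples) raw audio, e.g. at 16 kHz
        h = self.encoder(wav.unsqueeze(1))   # (batch, 256, frames)
        h = h.mean(dim=-1)                   # global average pooling
        return self.head(h)                  # (batch, n_classes)

model = RawWaveformClassifier()
scores = model(torch.randn(2, 16000))  # two 1-second dummy clips
print(scores.shape)  # torch.Size([2, 10])
```

Every weight in such a model is an opaque learned parameter, which is precisely what makes these systems data-hungry and hard to interpret.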

The main consequences are that these models:

  • are overly complex, requiring massive amounts of training data and extreme computing power to be effective (in terms of task performance);
  • remain largely unexplainable and non-interpretable.

  • Objectives

    To overcome these major shortcomings, we believe that our prior knowledge about the nature of the processed data, their generation process and their perception by humans should be explicitly exploited in neural-based machine learning frameworks.

    The aim of Hi-Audio is to build such hybrid deep approaches, combining parameter-efficient and interpretable signal models, as well as musicological and physics-based models, with highly tailored deep neural architectures; a sketch of what such a hybrid model could look like is given at the end of this section.

  • Research directions

    The research directions pursued in Hi-Audio will exploit novel deterministic and statistical models of audio signals and sound environments, together with dedicated neural auto-encoders and generative networks, and will target specific applications including speech and audio scene analysis, music information retrieval, and sound transformation and synthesis.
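As an illustration only, here is a minimal sketch of such a hybrid model, assuming a DDSP-style design in which a neural encoder estimates the few interpretable parameters of a differentiable harmonic synthesizer. All module names, sizes and the synthesizer itself are assumptions made for exposition and do not describe the actual Hi-Audio models:

```python
import torch
import torch.nn as nn

class HarmonicSynth(nn.Module):
    """Interpretable, parameter-efficient decoder: a bank of sinusoidal
    harmonics. Its few parameters (f0, per-harmonic amplitudes) have a
    direct physical meaning, unlike generic network weights."""

    def __init__(self, n_harmonics: int = 16, sample_rate: int = 16000):
        super().__init__()
        self.n_harmonics = n_harmonics
        self.sample_rate = sample_rate

    def forward(self, f0, amps, n_samples):
        # f0: (batch, 1) fundamental frequency in Hz
        # amps: (batch, n_harmonics) harmonic amplitudes
        t = torch.arange(n_samples) / self.sample_rate        # (n_samples,)
        k = torch.arange(1, self.n_harmonics + 1)             # harmonic index
        phases = 2 * torch.pi * f0.unsqueeze(-1) * k.view(1, -1, 1) * t
        return (amps.unsqueeze(-1) * torch.sin(phases)).sum(dim=1)

class HybridAutoencoder(nn.Module):
    """Neural encoder + signal-model decoder: the network only has to
    estimate a handful of interpretable synthesis parameters."""

    def __init__(self, n_harmonics: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=400, stride=160), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, 1 + n_harmonics),
        )
        self.synth = HarmonicSynth(n_harmonics)

    def forward(self, wav):
        params = self.encoder(wav.unsqueeze(1))
        f0 = 20.0 + nn.functional.softplus(params[:, :1])  # Hz, above 20
        amps = torch.softmax(params[:, 1:], dim=-1)        # normalized
        return self.synth(f0, amps, wav.shape[-1])

model = HybridAutoencoder()
wav = torch.randn(2, 16000)
recon = model(wav)  # trainable end-to-end with a reconstruction loss
print(recon.shape)  # torch.Size([2, 16000])
```

Because the decoder is a fixed signal model, the latent quantities (fundamental frequency, harmonic amplitudes) are directly interpretable, while the whole model remains trainable end-to-end with a simple reconstruction loss.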