The 25th International Conference on Digital Audio Effects (DAFx) took place from Tuesday 6 to Saturday 10 September 2022 at the University of Music and Performing Arts Vienna (mdw), Austria.

The Digital Audio Effects conferences are meeting points for scientists, engineers, musicians and artists who develop and apply technology to audio in all its forms. DAFx blends oral and poster presentations of peer-reviewed papers with keynote addresses, tutorials and demonstrations. Several social events are held for attendees, including a concert and the traditional banquet.

The conference title originated in a European Union COST-G6 project, and the first public event, the DAFx'98 Workshop, took place in Barcelona. DAFx has since become a long-standing tradition. Besides the DAFx conference proceedings, two editions of the Digital Audio Effects (DAFx) book have been published. Over time, the original title has taken on a much broader meaning, encompassing all kinds of sound and music information processing. An audio effect is just about anything that sounds!

This year, Gaël Richard gave a keynote speech titled “Hybrid Deep Learning for Audio”.

Abstract: Machine Listening, or AI for Sound, is defined as the general field of artificial intelligence (AI) applied to audio analysis, understanding and synthesis by a machine. It can also be viewed as the intertwined domain of machine learning and audio signal processing, and is applicable in a wide range of fields.

For AI in general, access to ever-increasing supercomputing facilities, combined with the availability of huge (though largely unannotated) data repositories, has enabled a significant trend toward purely data-driven deep learning approaches. However, these methods take only loose account of the nature and structure of the data they process.

We believe it is important instead to build hybrid deep learning methods that integrate our prior knowledge about the nature of the processed data, their generation process or, where possible, their perception by humans. We will illustrate the potential of such model-based deep learning approaches, in terms of complexity and controllability, on several audio applications, including audio source separation.
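
To make the hybrid idea concrete, here is a minimal sketch (our illustration, not code from the talk) of one common form of model-based deep learning for audio: instead of generating audio samples directly, a small network predicts the parameters of a differentiable harmonic synthesizer, in the spirit of DDSP-style models. All names, shapes and hyperparameters are illustrative assumptions. The prior knowledge here is the harmonic structure of pitched sounds, which leaves the network only a handful of interpretable parameters to learn, reflecting the controllability the abstract mentions.

```python
import math
import torch
import torch.nn as nn

class HarmonicController(nn.Module):
    """Small network predicting harmonic amplitudes from input features."""
    def __init__(self, n_features: int, n_harmonics: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64),
            nn.ReLU(),
            nn.Linear(64, n_harmonics),
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # Softplus enforces non-negative amplitudes: a physical constraint
        # built into the model rather than learned from data.
        return nn.functional.softplus(self.net(features))

def harmonic_synth(f0: float, amps: torch.Tensor, n_samples: int,
                   sr: int = 16000) -> torch.Tensor:
    """Differentiable additive synthesis: the 'signal model' prior."""
    t = torch.arange(n_samples) / sr                      # time axis, (N,)
    k = torch.arange(1, amps.shape[-1] + 1)               # harmonic numbers
    phases = 2 * math.pi * f0 * t[:, None] * k[None, :]   # (N, H)
    return (amps * torch.sin(phases)).sum(dim=-1)         # (N,)

# Toy usage: fit the controller so the synthesizer matches a target tone.
controller = HarmonicController(n_features=8)
features = torch.randn(8)  # placeholder conditioning features
target = harmonic_synth(220.0, torch.tensor([1.0, 0.5] + [0.0] * 14), 1024)
optimizer = torch.optim.Adam(controller.parameters(), lr=1e-2)
for _ in range(200):
    optimizer.zero_grad()
    pred = harmonic_synth(220.0, controller(features), 1024)
    loss = torch.mean((pred - target) ** 2)
    loss.backward()
    optimizer.step()
```

Because the synthesizer is differentiable, gradients flow through the signal model into the network, so the network never has to rediscover basic acoustics from data; it only steers a structured, interpretable generator.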