
SS DNAP IJCNN 2019: Special Session on Deep Neural Audio Processing at IJCNN 2019


Link: https://www.ijcnn.org/
 
When Jul 14, 2019 - Jul 19, 2019
Where Budapest, Hungary
Submission Deadline Jan 15, 2019
Notification Due Feb 28, 2019
Categories: computational audio analysis, deep learning, signal processing, sound event detection
 

Call For Papers

In many research fields, Deep Neural methods have reached state-of-the-art performance, superseding approaches that had been popular for decades. This paradigm shift has also involved the audio processing field, where Deep Neural methods have had a major impact on several research areas. In speech recognition, the advent of Deep Neural Networks has brought significant reductions in word error rates on many popular tasks. Similar benefits have been registered in speaker recognition and computational paralinguistics (e.g., emotion and speaker state recognition). Deep Neural methods have also found application in fields where data-driven methods were previously scarcely employed, such as single- and multi-channel speech enhancement and dereverberation. These technologies have not remained confined to academic research, but have also reached popular commercial products such as Amazon Echo, Google Home, and Apple HomePod. Deep Neural audio processing has likewise been applied to music research, for example in music information retrieval, automated music generation, and style transfer. Processing of environmental sounds has gained particular attention in recent years, with Deep Neural methods achieving state-of-the-art performance in tasks such as acoustic monitoring, audio tagging, acoustic scene understanding, and sound separation, detection, and identification.
Although Neural methods have achieved state-of-the-art performance in the aforementioned research fields, several challenges remain open. Robustness of recognition systems to environmental noise has a long research history and, despite recent advances, remains a topic worthy of investigation. Adversarial attacks have proven able to fool state-of-the-art recognition models, posing important security issues. Deep Neural methods usually need large amounts of data to reach state-of-the-art performance; in some application scenarios, however, the training data available are scarce and techniques such as few-shot learning and transfer learning must be adopted. In other contexts, the computational and memory resources of the target device are limited and neural models cannot be deployed without modification. Model compression and knowledge distillation techniques have gained significant attention in recent years, since they lower the computational and memory burden with little or no loss in performance.
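As a purely illustrative aside (not part of the original call), the knowledge-distillation idea mentioned above can be sketched in a few lines. The snippet below is a minimal sketch assuming a PyTorch setup, with the teacher and student logits standing in for the outputs of hypothetical audio-classification models; it is not a prescribed method for the session.

```python
# Minimal knowledge-distillation loss sketch (Hinton-style soft targets).
# `teacher_logits` and `student_logits` are assumed to come from
# hypothetical audio classifiers; names are illustrative only.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.5):
    """Blend hard-label cross-entropy with a soft-target KL term."""
    # Soft targets from the (frozen) teacher, softened by the temperature.
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    # KL divergence between student and teacher distributions,
    # rescaled by T^2 as in the standard formulation.
    kd_term = F.kl_div(soft_student, soft_targets,
                       reduction="batchmean") * temperature ** 2
    # Standard supervised cross-entropy on the hard labels.
    ce_term = F.cross_entropy(student_logits, labels)
    return alpha * kd_term + (1.0 - alpha) * ce_term

if __name__ == "__main__":
    # Random tensors stand in for model outputs on a batch of audio clips.
    batch, num_classes = 8, 10
    student_logits = torch.randn(batch, num_classes, requires_grad=True)
    teacher_logits = torch.randn(batch, num_classes)
    labels = torch.randint(0, num_classes, (batch,))
    loss = distillation_loss(student_logits, teacher_logits, labels)
    loss.backward()
    print(float(loss))
```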
In light of this analysis, it is of great interest for the scientific community to understand how and to what extent novel Deep Neural Network-based techniques can be efficiently employed in Digital Audio. The aim of the session is thus to focus on the most recent advancements in this field and on their applicability to Digital Audio problems. Encouraged by the steadily growing success of the previous editions at IJCNN 2014 in Beijing (China), IJCNN 2015 in Killarney (Ireland), IJCNN 2016 in Vancouver (Canada), IJCNN 2017 in Anchorage (Alaska, USA), and IJCNN 2018 in Rio de Janeiro (Brazil), the proposers of this session aim to build on and exceed that experience and to establish, in the long term, a solid reference for the Digital Audio field within the Computational Intelligence community.

Topics include, but are not limited to:
• Computational audio analysis
• Deep learning algorithms in digital audio
• Knowledge distillation in digital audio applications
• Transfer learning, few-shot learning in audio applications
• Music information retrieval
• Music content analysis
• Speech and speaker analysis and classification
• Neural methods for music/speech generation and voice conversion
• Generative Adversarial Networks for Audio Analysis and Synthesis
• Privacy preserving computational speech processing
• Audio source separation using deep models
• Sound event detection
• Acoustic novelty detection
• Acoustic scene analysis
• End-to-end learning for digital audio applications
• Single and multi-channel audio enhancement with neural networks
• Audio processing robust to adversarial attacks
• Unsupervised methods for audio analysis
• Attention-based Topologies
• Explainability in Deep Learning for Audio Processing

Manuscripts intended for the special session should be submitted via the IJCNN 2019 paper submission website as regular submissions. All papers submitted to special sessions will be subject to the same peer-review procedure as regular papers. Accepted papers will be part of the regular conference proceedings.
Paper submission guidelines: https://www.ijcnn.org/paper-submission-guidelines
