SSCS 2010 : ACM Multimedia 2010 Workshop on Searching Spontaneous Conversational Speech

Link: http://www.searchingspeech.org
 
When: Oct 29, 2010 - Oct 29, 2010
Where: Firenze, Italy
Submission Deadline: Jun 10, 2010
Notification Due: Jul 10, 2010
Final Version Due: Jul 20, 2010
Categories: multimedia, information retrieval, speech
 

Call For Papers

----------------------------------------------------------------------
CfP: ACM Multimedia 2010 Workshop on
Searching Spontaneous Conversational Speech (SSCS 2010)
-----------------------------------------------------------------------
Workshop held on 29 October 2010, in Firenze, Italy
in conjunction with ACM Multimedia 2010

Website: http://www.searchingspeech.org/

The SSCS 2010 workshop is devoted to the presentation and discussion of recent research results on advances and innovations in spoken content retrieval and in multimedia search that makes use of automatic speech recognition technology.

Spoken audio is a valuable source of semantic information, and speech analysis techniques, such as speech recognition, hold high potential to improve information retrieval and multimedia search. Nonetheless, speech technology remains underexploited by multimedia systems, in particular by those providing access to multimedia content containing spoken audio. Early success in the area of broadcast news retrieval has yet to be extended to application scenarios in which the spoken audio is unscripted, unplanned and highly variable with respect to speaker and style characteristics. The SSCS 2010 workshop is concerned with a wide variety of challenging spoken audio domains, including: lectures, meetings, interviews, debates, conversational broadcast (e.g., talk shows), podcasts, call center recordings, cultural heritage archives, social video on the Web and spoken natural language queries. As speech steadily moves closer to rivaling text as a medium for the access and storage of information, the need for technologies that can effectively make use of spontaneous conversational speech to support search becomes more pressing.

In order to move the use of speech and spoken content in retrieval applications and multimedia systems beyond the current state of the art, sustained collaboration of researchers in the areas of speech recognition, audio processing, multimedia analysis and information retrieval is necessary. Motivated by the aim of providing a forum where these disciplines can engage in productive interaction and exchange, Searching Spontaneous Conversational Speech (SSCS) workshops were held in conjunction with SIGIR 2007, SIGIR 2008 and ACM Multimedia 2009. The SSCS workshop series continues at ACM Multimedia 2010 with a focus on research that strives to move retrieval systems beyond conventional queries and beyond the indexing techniques used in traditional mono-modal settings or text-based applications.

We welcome contributions on a range of trans-disciplinary research issues related to these challenges, including:

- Information Retrieval techniques in the speech domain (e.g., applied to speech recognition lattices)
- Multimodal search techniques exploiting speech transcripts (audio/speech/video fusion techniques including re-ranking)
- Search effectiveness (e.g., evidence combination, query/document expansion)
- Exploitation of audio analysis (e.g., speaker's emotional state, speaker characteristics, speaking style)
- Integration of higher level semantics, including topic segmentation and cross-modal concept detection
- Spoken natural language queries
- Large-scale speech indexing approaches (e.g., collection size, search speed)
- Multilingual settings (e.g., multilingual collections, cross-language access)
- Advanced interfaces for results display and playback of multimedia with a speech track
- Exploiting user-contributed information, including tags, ratings and user community structure
- Affordable, light-weight solutions for small collections, i.e., for the long tail

Contributions for oral presentations (short papers of 4 pages or long papers of 6 pages) and demonstration papers (4 pages) will be accepted. The submission deadline is 10 June 2010. For further information see the website: http://www.searchingspeech.org/

At this time, we are also pre-announcing a special issue of ACM Transactions on Information Systems on the topic of searching spontaneous conversational speech. The special issue is based on the SSCS workshop series, but will involve a separate call for papers. We especially encourage the authors of the best papers from SSCS 2010 to submit to the special issue call.

SSCS 2010 Organizers
Martha Larson, Delft University of Technology, Netherlands
Roeland Ordelman, Sound & Vision and University of Twente, Netherlands
Florian Metze, Carnegie Mellon University, USA
Franciska de Jong, University of Twente, Netherlands
Wessel Kraaij, TNO and Radboud University, Netherlands
