SLRTP@ECCV 2020: ECCV 2020 Sign Language Recognition, Translation & Production (SLRTP) Workshop
Link: http://slrtp.com/

Call For Papers
ECCV 2020 Sign Language Recognition, Translation & Production (SLRTP) Workshop
Dates:
- Paper submission: July 6, 2020, extended to July 19, 2020
- Notification of acceptance: July 26, 2020
- Workshop date: August 23, 2020 (virtual)
- Camera ready: September 15, 2020

This workshop brings together researchers working on different aspects of vision-based sign language research (including body posture, hands and face) and sign language linguists. The aims are to increase the linguistic understanding of sign languages within the computer vision community, to identify the strengths and limitations of current work, and to pinpoint the problems that need solving. Finally, we hope that the workshop will cultivate future collaborations.

Recent developments in image captioning, visual question answering and visual dialogue have stimulated significant interest in approaches that fuse visual and linguistic modelling. As spatio-temporal linguistic constructs, sign languages represent a unique challenge where vision and language meet. Computer vision researchers have been studying sign languages in isolated recognition scenarios for the last three decades. However, now that large-scale continuous corpora are becoming available, research has moved towards continuous sign language recognition. More recently, the new frontier has become sign language translation and production, where new developments in generative models are enabling translation between spoken/written language and continuous sign language videos, and vice versa. In this workshop, we propose to bring together researchers to discuss the open challenges that lie at the intersection of sign language and computer vision.

Confirmed Speakers:
- Lale Akarun, Bogazici University
- Matt Huenerfauth, Rochester Institute of Technology
- Oscar Koller, Microsoft
- Bencie Woll, Deafness Cognition and Language Research Centre (DCAL), University College London

Call for Papers:
Papers can be submitted to CMT at https://cmt3.research.microsoft.com/SLRTP2020/ by the end of July 19 (Anywhere on Earth), the extended deadline. We are happy to receive submissions of both new work and work that has been accepted to other venues. In line with the Sign Language Linguistics Society (SLLS) Ethics Statement for Sign Language Research, we encourage submissions from Deaf researchers or from teams that include Deaf individuals, particularly as co-authors but also in other roles (advisor, research assistant, etc.).

Suggested topics for contributions include, but are not limited to:
- Continuous Sign Language Recognition and Analysis
- Multi-modal Sign Language Recognition and Translation
- Generative Models for Sign Language Production
- Non-manual Features and Facial Expression Recognition for Sign Language
- Hand Shape Recognition
- Lip-reading/Speechreading
- Sign Language Recognition and Translation Corpora
- Semi-automatic Corpora Annotation Tools
- Human Pose Estimation

Paper Format & Proceedings:
See our webpage slrtp.com for detailed information.

Workshop languages/accessibility:
The languages of this workshop are English, British Sign Language (BSL) and American Sign Language (ASL). Interpretation between BSL/English and ASL/English will be provided, as will English subtitles, for all pre-recorded and live Q&A sessions. If you have questions about this, please contact dcal@ucl.ac.uk.
Organizers:
- Necati Cihan Camgoz, University of Surrey
- Gul Varol, University of Oxford
- Samuel Albanie, University of Oxford
- Richard Bowden, University of Surrey
- Andrew Zisserman, University of Oxford
- Kearsy Cormier, DCAL