IMM 2009 : 1st International Workshop on Internet Multimedia Mining
Link: http://research.microsoft.com/~xshua/imm2009

Call For Papers
--------------------------------------------------------------------
*CALL FOR PAPERS*

1st International Workshop on: *INTERNET MULTIMEDIA MINING* (with dataset support)
http://research.microsoft.com/~xshua/imm2009
In conjunction with the IEEE International Conference on Data Mining 2009
December 6, Miami, Florida, USA
---------------------------------------------------------------------

-------------------------
*Aims and Scope*
-------------------------
With the explosion of video and image data available on the Internet, online multimedia applications are becoming increasingly important. Mining semantics and other useful information from large-scale Internet multimedia data, to facilitate online and local multimedia content analysis, search, and related applications, has likewise gained growing attention from both academia and industry.

On the one hand, the rapid growth of online multimedia data brings new challenges to multimedia content analysis, multimedia retrieval, and related multimedia applications, especially in scalability: both the computational cost and the performance of many existing techniques are far from satisfactory. On the other hand, the Internet also provides new opportunities to attack these challenges, as well as conventional problems in multimedia mining, content analysis, image/video understanding, and computer vision. The massive associated metadata, context, and social information available on the Internet, together with the massive base of grassroots Internet users, are invaluable resources that can be leveraged to address these difficulties. More and more researchers are recognizing both the challenges and the opportunities that the Internet brings to multimedia research. This workshop aims to bring together high-quality, novel research on "Internet Multimedia Mining".

One of the major obstacles to "Internet Multimedia Mining" research is the difficulty of building a good dataset for algorithm development, system prototyping, and performance evaluation.
Together with this workshop, we release a benchmark dataset based on real Internet multimedia data and real Internet multimedia search engines. Submissions to this workshop are encouraged to use this dataset, but papers/demos working on other Internet-based datasets are also welcome.

-------------------------
*MSRA Multimedia Dataset*
-------------------------
The MSRA-MM Version 1 dataset is ready for shipping. Detailed information about the dataset can be found at: http://research.microsoft.com/apps/pubs/default.aspx?id=79942. Please contact the dataset chair (Meng Wang: mengwang@microsoft.com) to request the data.

The MSRA-MM Version 2 dataset (10 times larger, with more metadata) will be ready around June 15. Please submit your request to the dataset chair.

-------------------------
*Topics of Interest*
-------------------------
Topics of interest for this workshop include, but are not limited to:

* Internet video/image/audio annotation, classification, tagging, search ranking, and reranking by combining textual descriptions and video/image content.
* General video/image/audio annotation, classification, tagging, search ranking, and reranking by exploiting Internet data and/or users. Approaches that can handle large-scale data and user bases are preferred.
* Video/image/audio processing and analysis using Internet data as a knowledge base.
* Social media processing, such as online media authoring and sharing, tag recommendation, tag filtering, tag ranking, and search ranking based on image/video/audio social context.
* Knowledge mining from Internet multimedia data, such as mining the semantic distance between keywords or images; mining video/image/audio copy relationships (e.g., given a video/image/audio item, finding all videos/images on the Internet that share the same content, either entirely or partially); mining trends in multimedia consumption/sharing; and mining knowledge (for example, a "photo encyclopedia") from massive amounts of multimedia content on the Internet.
* Web-scale content-based multimedia retrieval (for example, approaches based on large-scale high-dimensional feature indexing).
* Other online multimedia mining applications, such as multimedia advertising, multimedia recommendation, location/GPS/geography-enabled multimedia, and multimedia sensor networks over the Internet.

-------------------------
*Paper Submission*
-------------------------
We accept two forms of submission: regular full papers and demonstrations. Regular submissions must use the same format as regular ICDM long papers (a maximum of 10 pages in the IEEE 2-column format); demonstration submissions require a 1- or 2-page demo description. We especially encourage long-paper authors to also submit a demo. All submissions will be peer-reviewed by at least 3 members of the program committee. Extended versions of selected papers will be invited for submission to a special issue of a top journal in the data mining or multimedia area.

Paper submission site: https://cmt.research.microsoft.com/imm2009/

-------------------------
*Awards*
-------------------------
The workshop will present two awards: a best paper award and a best demonstration award, judged by a separate awards committee.
-------------------------
*Important Dates*
-------------------------
* August 8: Full paper submission
* August 29: Demo paper submission
* September 8: Notification of acceptance
* September 28: Camera-ready paper due
* December 6: Workshop

-------------------------
*Organizing Committee*
-------------------------
Workshop Co-Chairs
* Xian-Sheng Hua, Microsoft Research Asia, China
* Cees G.M. Snoek, University of Amsterdam, The Netherlands
* Zhi-Hua Zhou, Nanjing University, China

Dataset Chair
* Meng Wang, Microsoft Research Asia, China

-----------------------------
Best regards,
Xian-Sheng, Zhi-Hua and Cees.