NTIRE 2021: CVPR 2021 New Trends in Image Restoration and Enhancement workshop and challenges
Link: https://data.vision.ee.ethz.ch/cvl/ntire21/
Call For Papers | |||||||||||||||
NTIRE: 6th New Trends in Image Restoration and Enhancement workshop and challenges 2021
In conjunction with CVPR 2021
https://data.vision.ee.ethz.ch/cvl/ntire21/
Contact: radu.timofte [at] vision.ee.ethz.ch

Scope
Image restoration, enhancement and manipulation are key computer vision tasks, aiming to restore degraded image content, fill in missing information, or transform and/or manipulate an image to achieve a desired target (with respect to perceptual quality, content, or the performance of applications working on such images). Recent years have witnessed an increased interest from the vision and graphics communities in these fundamental topics of research. Not only has there been a constantly growing flow of related papers, but substantial progress has also been achieved. Each step forward eases the use of images by people or computers for further tasks, as image restoration, enhancement and manipulation serve as an important front end. Not surprisingly, there is an ever-growing range of applications in fields such as surveillance, the automotive industry, electronics, remote sensing, and medical image analysis. The emergence and ubiquitous use of mobile and wearable devices offer another fertile ground for additional applications and faster methods. This workshop aims to provide an overview of the new trends and advances in these areas. Moreover, it will offer an opportunity for academic and industrial attendees to interact and explore collaborations.

This workshop builds upon the success of the previous NTIRE editions at CVPR 2017, 2018, 2019 and 2020 and at ACCV 2016. Moreover, it relies on all the people associated with the CLIC 2018, 2019 and 2020, PIRM 2018, AIM 2019 and 2020, and NTIRE events: organizers, PC members, distinguished speakers, authors of published papers, challenge participants and winning teams.

Topics
Papers addressing topics related to image restoration, enhancement and manipulation are invited. The topics include, but are not limited to:
● Image/video inpainting
● Image/video deblurring
● Image/video denoising
● Image/video upsampling and super-resolution
● Image/video filtering
● Image/video de-hazing, de-raining, de-snowing, etc.
● Demosaicing
● Image/video compression
● Removal of artifacts, shadows, glare and reflections, etc.
● Image/video enhancement: brightening, color adjustment, sharpening, etc.
● Style transfer
● Hyperspectral imaging
● Underwater imaging
● Methods robust to changing weather conditions / adverse outdoor conditions
● Image/video restoration, enhancement and manipulation in constrained settings
● Image/video processing on mobile devices
● Visual domain translation
● Multimodal translation
● Perceptual enhancement
● Perceptual manipulation
● Image/video generation and hallucination
● Image/video quality assessment
● Image/video semantic segmentation, depth estimation
● Studies and applications of the above

Submission
A paper submission has to be in English, in PDF format, and at most 8 pages (excluding references) in CVPR style. The paper format must follow the same guidelines as for all CVPR submissions: http://cvpr2021.thecvf.com/node/33
The review process is double blind: authors do not know the names of the chairs/reviewers of their papers, and reviewers do not know the names of the authors. Dual submission is allowed with the CVPR main conference only. If a paper is also submitted to CVPR and accepted there, it cannot be published at both the main conference and the workshop.
For paper submissions, please go to the online submission site: https://cmt3.research.microsoft.com/NTIRE2021
Accepted and presented papers will be published after the conference in the CVPR Workshops Proceedings by IEEE (http://www.ieee.org) and the Computer Vision Foundation (www.cv-foundation.org).
The author kit provides a LaTeX2e template for paper submissions. Please refer to the example for detailed formatting instructions. If you use a different document processing system, please see the CVPR author instructions page.
Author Kit: http://cvpr2021.thecvf.com/sites/default/files/2020-09/cvpr2021AuthorKit_2.zip

Workshop Dates
● Regular Papers Submission Deadline: March 15, 2021 (EXTENDED)
● Challenge Papers Submission Deadline: April 02, 2021
● Decisions: April 08, 2021
● Camera Ready Deadline: April 15, 2021

NTIRE 2021 has the following associated groups of challenges (ONGOING!):

Image challenges:
● Defocus deblurring using dual-pixel images
● Depth guided relighting (one-to-one and any-to-any)
● Aerial images
● Super-resolution
● Perceptual image quality assessment
● Deblurring (low resolution and JPEG artifacts)
● Dehazing (nonhomogeneous haze)

Video challenges:
● Enhancement of compressed videos (fixed QP, fidelity & perceptual; fixed bit-rate)
● Super-resolution (spatial and spatio-temporal)
● Burst super-resolution (synthetic and real data)
● High Dynamic Range (HDR)

Participation
To learn more about the challenges and to participate: https://data.vision.ee.ethz.ch/cvl/ntire21/

Challenges Dates
● Release of train data: January 01, 2021
● Validation server online: January 05, 2021
● Competitions end: March 20, 2021

Organizers
● Radu Timofte, ETH Zurich
● Shuhang Gu, OPPO & University of Sydney
● Lei Zhang, Alibaba & The Hong Kong Polytechnic University
● Ming-Hsuan Yang, University of California at Merced & Google
● Andreas Lugmayr, ETH Zurich
● Martin Danelljan, ETH Zurich
● Cosmin Ancuti, Université catholique de Louvain (UCL)
● Codruta O. Ancuti, University Politehnica Timisoara
● Kyoung Mu Lee, Seoul National University
● Michael S. Brown, York University
● Eli Shechtman, Creative Intelligence Lab at Adobe Research
● Seungjun Nah, Seoul National University, Korea
● Abdullah Abuolaim, York University, Canada
● Eduardo Perez-Pellitero, Huawei Noah's Ark Lab, UK
● Ales Leonardis, Huawei Noah's Ark Lab & University of Birmingham
● Sanghyun Son, Seoul National University
● Suyoung Lee, Seoul National University
● Ren Yang, ETH Zurich
● Ruofan Zhou, EPFL
● Majed El Helou, EPFL
● Sabine Süsstrunk, EPFL
● Chao Dong, SIAT
● Jimmy Ren, SenseTime
● Oliver Nina, AF Research Lab
● Bob Lee, Wright Brothers Institute
● Jinjin Gu, University of Sydney
● Luc Van Gool, KU Leuven and ETH Zurich

Program Committee (to be updated)
Cosmin Ancuti, Universitatea Politehnica Timisoara, Romania
Nick Barnes, Data61, Australia
Michael S. Brown, York University, Canada
Subhasis Chaudhuri, IIT Bombay, India
Sunghyun Cho, Samsung
Christophe De Vleeschouwer, Université catholique de Louvain (UCL), Belgium
Chao Dong, SenseTime
Weisheng Dong, Xidian University, China
Alexey Dosovitskiy, Intel Labs
Touradj Ebrahimi, EPFL, Switzerland
Michael Elad, Technion, Israel
Corneliu Florea, University Politehnica of Bucharest, Romania
Alessandro Foi, Tampere University of Technology, Finland
Peter Gehler, University of Tübingen, MPI Intelligent Systems, Amazon, Germany
Bastian Goldluecke, University of Konstanz, Germany
Luc Van Gool, ETH Zürich and KU Leuven, Belgium
Shuhang Gu, ETH Zürich, Switzerland
Michael Hirsch, Amazon
Hiroto Honda, DeNA Co., Japan
Jia-Bin Huang, Virginia Tech, US
Michal Irani, Weizmann Institute, Israel
Phillip Isola, UC Berkeley, US
Zhe Hu, Light.co
Sing Bing Kang, Microsoft Research, US
Jan Kautz, NVIDIA Research, US
Seon Joo Kim, Yonsei University, Korea
Vivek Kwatra, Google
In So Kweon, KAIST, Korea
Christian Ledig, Twitter Inc.
Kyoung Mu Lee, Seoul National University, South Korea
Seungyong Lee, POSTECH, South Korea
Stephen Lin, Microsoft Research Asia
Chen Change Loy, Chinese University of Hong Kong
Vladimir Lukin, National Aerospace University, Ukraine
Kai-Kuang Ma, Nanyang Technological University, Singapore
Vasile Manta, Technical University of Iasi, Romania
Yasuyuki Matsushita, Osaka University, Japan
Peyman Milanfar, Google and UCSC, US
Rafael Molina Soriano, University of Granada, Spain
Yusuke Monno, Tokyo Institute of Technology, Japan
Hajime Nagahara, Osaka University, Japan
Vinay P. Namboodiri, IIT Kanpur, India
Sebastian Nowozin, Microsoft Research Cambridge, UK
Federico Perazzi, Disney Research
Aleksandra Pizurica, Ghent University, Belgium
Sylvain Paris, Adobe
Fatih Porikli, Australian National University, NICTA, Australia
Hayder Radha, Michigan State University, US
Tobias Ritschel, University College London, UK
Antonio Robles-Kelly, CSIRO, Australia
Stefan Roth, TU Darmstadt, Germany
Aline Roumy, INRIA, France
Jordi Salvador, Amazon, US
Yoichi Sato, University of Tokyo, Japan
Konrad Schindler, ETH Zurich, Switzerland
Samuel Schulter, NEC Labs America
Nicu Sebe, University of Trento, Italy
Eli Shechtman, Adobe Research, US
Boxin Shi, National Institute of Advanced Industrial Science and Technology (AIST), Japan
Wenzhe Shi, Twitter Inc.
Alexander Sorkine-Hornung, Disney Research
Sabine Süsstrunk, EPFL, Switzerland
Yu-Wing Tai, Tencent Youtu
Hugues Talbot, Université Paris Est, France
Robby T. Tan, Yale-NUS College, Singapore
Masayuki Tanaka, Tokyo Institute of Technology, Japan
Jean-Philippe Tarel, IFSTTAR, France
Radu Timofte, ETH Zürich, Switzerland
George Toderici, Google, US
Ashok Veeraraghavan, Rice University, US
Jue Wang, Megvii Research, US
Chih-Yuan Yang, UC Merced, US
Jianchao Yang, Snapchat
Ming-Hsuan Yang, University of California at Merced, US
Qingxiong Yang, Didi Chuxing, China
Jong Chul Ye, KAIST, Korea
Jason Yosinski, Uber AI Labs, US
Wenjun Zeng, Microsoft Research
Lei Zhang, The Hong Kong Polytechnic University
Wangmeng Zuo, Harbin Institute of Technology, China

Speakers (TBU)
Alan Bovik, University of Texas at Austin
Qi Tian, Huawei Cloud AI
Christian Theobalt, Max Planck Institute for Informatics, Saarland University
Wangmeng Zuo, Harbin Institute of Technology
Federico Perazzi & Rakesh Ranjan, Facebook Reality Labs

Sponsors (TBU)
Facebook Reality Labs
Huawei Noah's Ark
Wright Brothers Institute
OPPO
MediaTek
ETH Zurich / CVL

Contact
Email: radu.timofte [at] vision.ee.ethz.ch
Website: https://data.vision.ee.ethz.ch/cvl/ntire21/