NTIRE 2017 : CVPR 2017- New Trends in Image Restoration and Enhancement workshop and challenge on image super-resolution

Link: http://www.vision.ee.ethz.ch/ntire17/
 
When Jul 21, 2017 - Jul 21, 2017
Where Honolulu, Hawaii
Submission Deadline Apr 24, 2017
Notification Due May 8, 2017
Final Version Due May 18, 2017
Categories    image processing   computer science   computer vision   machine learning
 

Call For Papers

NTIRE: New Trends in Image Restoration and Enhancement workshop and challenge on image super-resolution 2017
In conjunction with CVPR 2017

Website: http://www.vision.ee.ethz.ch/ntire17/
Contact: radu.timofte [at] vision.ee.ethz.ch

Scope

Image restoration and image enhancement are key computer vision tasks, aiming at the restoration of degraded image content or the filling in of missing information. Recent years have witnessed increased interest from the vision and graphics communities in these fundamental research topics. Not only has the flow of related papers grown steadily, but substantial progress has also been achieved.

Each step forward makes images easier to use, by people or by computers, in downstream tasks, with image restoration or enhancement serving as an important front end. Not surprisingly, the range of applications keeps growing in fields such as surveillance, the automotive industry, electronics, remote sensing, and medical image analysis. The emergence and ubiquitous use of mobile and wearable devices offer further fertile ground for additional applications and faster methods.

This workshop aims to provide an overview of the new trends and advances in those areas. Moreover, it will offer an opportunity for academic and industrial attendees to interact and explore collaborations.

Topics

Papers addressing topics related to image restoration and enhancement are invited. The topics include, but are not limited to:

● Image inpainting
● Image deblurring
● Image denoising
● Image upsampling and super-resolution
● Image filtering
● Image dehazing
● Demosaicing
● Image enhancement: brightening, color adjustment, sharpening, etc.
● Style transfer
● Image generation and image hallucination
● Image-quality assessment
● Video restoration and enhancement
● Hyperspectral imaging
● Methods robust to changing weather conditions
● Studies and applications of the above.

Submission

Submissions must be in English, in PDF format, and at most 8 pages (excluding references) in CVPR style. The paper format must follow the same guidelines as for all CVPR submissions.
http://cvpr2017.thecvf.com/submission/main_conference/author_guidelines
The review process is double blind. Authors do not know the names of the chair/reviewers of their papers. Reviewers do not know the names of the authors.
Dual submission is allowed with the CVPR main conference only. If a paper is also submitted to CVPR and accepted there, it cannot be published at both CVPR and the workshop.

For the paper submissions, please go to the online submission site.
https://cmt3.research.microsoft.com/NTIRE2017

Accepted and presented papers will be published after the conference in the CVPR Workshops Proceedings by IEEE (http://www.ieee.org) and the Computer Vision Foundation (www.cv-foundation.org).

The author kit provides a LaTeX2e template for paper submissions. Please refer to the example for detailed formatting instructions. If you use a different document processing system, see the CVPR author instructions page.

Author Kit: http://cvpr2017.thecvf.com/files/cvpr2017AuthorKit.zip

Workshop Dates

● Submission Deadline: April 24, 2017 (extended!)
● Decisions: May 08, 2017
● Camera Ready Deadline: May 18, 2017


Challenge on Example-based Single-Image Super-Resolution

To gauge the current state of the art in example-based single-image super-resolution, and to compare and promote different solutions, we are organizing an NTIRE challenge in conjunction with the CVPR 2017 conference. The challenge uses DIV2K, a newly proposed large dataset of DIVerse 2K-resolution images.

The challenge has two tracks:
● Track 1 (bicubic): low-resolution images are obtained by bicubic downscaling (Matlab imresize), one of the most common settings in the recent single-image super-resolution literature (see the sketch after this list).
● Track 2 (unknown): the explicit forms of the degradation operators are not disclosed; only training pairs of low- and high-resolution images are available.
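
For illustration, a minimal Python sketch of how Track 1 low-resolution inputs can be generated follows, using Pillow's bicubic filter as a stand-in for Matlab imresize (the two are close but not bit-exact, so participants should rely on the officially released DIV2K low-resolution images; the file names and the x4 scale factor below are illustrative only).

# Minimal sketch (not the official DIV2K pipeline): bicubic downscaling of a
# high-resolution image with Pillow. Matlab imresize uses a slightly different
# antialiased bicubic kernel, so results are close but not bit-exact.
from PIL import Image

def bicubic_downscale(path_in, path_out, scale=4):
    """Downscale an image by an integer factor using bicubic interpolation."""
    hr = Image.open(path_in).convert("RGB")
    # Crop so that width and height are divisible by the scale factor.
    w, h = hr.size
    hr = hr.crop((0, 0, w - w % scale, h - h % scale))
    lr = hr.resize((hr.width // scale, hr.height // scale), resample=Image.BICUBIC)
    lr.save(path_out)

if __name__ == "__main__":
    # Illustrative file names; actual DIV2K images follow their own naming scheme.
    bicubic_downscale("0001.png", "0001x4.png", scale=4)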

To learn more about the challenge, participate, and access the newly collected DIV2K dataset of DIVerse 2K-resolution images, everybody is invited to register via the links at:
http://www.vision.ee.ethz.ch/ntire17/

The training data is already available to registered participants.

Challenge Dates

● Release of train data: February 14, 2017
● Validation server online: February 25, 2017
● Competition ends: April 16, 2017 (extended!)


Organizers

● Radu Timofte, ETH Zurich, Switzerland (radu.timofte [at] vision.ee.ethz.ch)
● Ming-Hsuan Yang, University of California at Merced, US (mhyang [at] ucmerced.edu)
● Eirikur Agustsson, ETH Zurich, Switzerland (eirikur.agustsson [at] vision.ee.ethz.ch)
● Lei Zhang, The Hong Kong Polytechnic University (cslzhang [at] polyu.edu.hk)
● Luc Van Gool, KU Leuven, Belgium and ETH Zurich, Switzerland (vangool [at] vision.ee.ethz.ch)


Program Committee

Cosmin Ancuti, Université catholique de Louvain (UCL), Belgium
Michael S. Brown, York University, Canada
Subhasis Chaudhuri, IIT Bombay, India
Sunghyun Cho, Samsung
Oliver Cossairt, Northwestern University, US
Chao Dong, SenseTime
Weisheng Dong, Xidian University, China
Alexey Dosovitskiy, Intel Labs
Touradj Ebrahimi, EPFL, Switzerland
Michael Elad, Technion, Israel
Corneliu Florea, University Politehnica of Bucharest, Romania
Alessandro Foi, Tampere University of Technology, Finland
Bastian Goldluecke, University of Konstanz, Germany
Luc Van Gool, ETH Zürich and KU Leuven, Belgium
Peter Gehler, University of Tübingen and MPI Intelligent Systems, Germany
Hiroto Honda, DeNA Co., Japan
Michal Irani, Weizmann Institute, Israel
Phillip Isola, UC Berkeley, US
Zhe Hu, Light.co
Sing Bing Kang, Microsoft Research, US
Vivek Kwatra, Google
Kyoung Mu Lee, Seoul National University, South Korea
Seungyong Lee, POSTECH, South Korea
Stephen Lin, Microsoft Research Asia
Chen Change Loy, Chinese University of Hong Kong
Vladimir Lukin, National Aerospace University, Ukraine
Kai-Kuang Ma, Nanyang Technological University, Singapore
Vasile Manta, Technical University of Iasi, Romania
Yasuyuki Matsushita, Osaka University, Japan
Peyman Milanfar, Google and UCSC, US
Rafael Molina Soriano, University of Granada, Spain
Yusuke Monno, Tokyo Institute of Technology, Japan
Hajime Nagahara, Kyushu University, Japan
Vinay P. Namboodiri, IIT Kanpur, India
Sebastian Nowozin, Microsoft Research Cambridge, UK
Aleksandra Pizurica, Ghent University, Belgium
Fatih Porikli, Australian National University, NICTA, Australia
Hayder Radha, Michigan State University, US
Stefan Roth, TU Darmstadt, Germany
Aline Roumy, INRIA, France
Jordi Salvador, Amazon, US
Yoichi Sato, University of Tokyo, Japan
Samuel Schulter, NEC Labs America
Nicu Sebe, University of Trento, Italy
Boxin Shi, National Institute of Advanced Industrial Science and Technology (AIST), Japan
Wenzhe Shi, Twitter Inc.
Alexander Sorkine-Hornung, Disney Research
Sabine Süsstrunk, EPFL, Switzerland
Yu-Wing Tai, Tencent Youtu
Hugues Talbot, Université Paris Est, France
Robby T. Tan, Yale-NUS College, Singapore
Masayuki Tanaka, Tokyo Institute of Technology, Japan
Jean-Philippe Tarel, IFSTTAR, France
Radu Timofte, ETH Zürich, Switzerland
Ashok Veeraraghavan, Rice University, US
Jue Wang, Megvii Research, US
Chih-Yuan Yang, UC Merced, US
Ming-Hsuan Yang, University of California at Merced, US
Qingxiong Yang, Didi Chuxing, China
Lei Zhang, The Hong Kong Polytechnic University
Wangmeng Zuo, Harbin Institute of Technology, China


Speakers

Alexei Efros, UC Berkeley, US
Jan Kautz, NVIDIA
Liang Lin, SenseTime and Sun Yat-Sen University, China
Peyman Milanfar, Google and UC Santa Cruz, US
Eli Shechtman, Adobe
Wenzhe Shi, Twitter Inc.
Sabine Süsstrunk, EPFL, Switzerland


Sponsors

NVIDIA
SenseTime
Twitter Inc
Google


Contact

Email: radu.timofte [at] vision.ee.ethz.ch
Website: http://www.vision.ee.ethz.ch/ntire17/
