CLIC 2018 : CVPR 2018 Workshop and Challenge on Learned Image Compression


Link: http://www.compression.cc/
 
When Jun 18, 2018 - Jun 18, 2018
Where Salt Lake City, Utah
Submission Deadline Apr 29, 2018
Categories    deep learning   compression   machine learning   signal processing
 

Call For Papers

CLIC: Workshop and Challenge on Learned Image Compression 2018
in conjunction with CVPR 2018

Website: http://www.compression.cc/


Motivation

The domain of image compression has traditionally used approaches discussed in forums such as ICASSP and ICIP, and in more specialized venues such as PCS, DCC, and the ITU/MPEG expert groups. This workshop and challenge will be the first computer-vision event to focus explicitly on these fields. Many techniques discussed at computer-vision meetings are relevant to lossy compression. For example, super-resolution and artifact removal can be viewed as special cases of the lossy compression problem in which the encoder is fixed and only the decoder is trained. Inpainting, colorization, optical flow, generative adversarial networks, and other probabilistic models have likewise been used as parts of lossy compression pipelines. Lossy compression is therefore a topic that stands to benefit greatly from the expertise of a large portion of the CVPR community.

Recent advances in machine learning have led to an increased interest in applying neural networks to the problem of compression. At CVPR 2017, for example, one of the oral presentations discussed compression using recurrent convolutional networks. To foster further growth in this area, this workshop will not only encourage new development but also establish baselines, educate, and propose a common benchmark and evaluation protocol. This is crucial: without a benchmark, a common way to compare methods, it is very difficult to measure progress.

We propose hosting an image-compression challenge which specifically targets methods that have traditionally been overlooked, with a focus on neural networks (though traditional approaches are also welcome). Such methods typically consist of an encoder subsystem that takes images and produces representations which compress more readily than the raw pixels (e.g., a stack of convolutions producing an integer feature map), followed by an arithmetic coder. The arithmetic coder uses a probabilistic model of the integer codes to generate a compressed bit stream, which makes up the file to be stored or transmitted. Decompressing this bit stream takes two additional steps: first, an arithmetic decoder, which shares its probability model with the encoder, losslessly reconstructs the integers produced by the encoder; then a second decoder produces a reconstruction of the original image.
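
To make this structure concrete, here is a minimal toy sketch in Python/NumPy. The 2x2 average-pooling "encoder", uniform quantizer, and zlib stage are stand-ins of our own choosing (a real entry would use a learned transform and an arithmetic coder), but the flow of data mirrors the pipeline described above.

    import zlib
    import numpy as np

    def encode(image):
        # Toy "analysis transform": 2x2 average pooling stands in for a
        # learned stack of convolutions.
        h, w = image.shape
        latents = image.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        codes = np.round(latents).astype(np.int16)  # integer feature map
        # zlib stands in for the arithmetic coder: any lossless entropy
        # coder over the integer codes yields the bit stream.
        return zlib.compress(codes.tobytes())

    def decode(bitstream, shape):
        # Losslessly recover the integer codes, then run a toy "synthesis
        # transform" (nearest-neighbour upsampling) to reconstruct pixels.
        codes = np.frombuffer(zlib.decompress(bitstream), dtype=np.int16)
        latents = codes.reshape(shape[0] // 2, shape[1] // 2).astype(np.float32)
        return latents.repeat(2, axis=0).repeat(2, axis=1)

    image = np.random.randint(0, 256, (64, 64)).astype(np.float32)
    bitstream = encode(image)
    reconstruction = decode(bitstream, image.shape)
    print(len(bitstream) * 8 / image.size, "bits per pixel")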

In the computer-vision community many authors will be familiar with a multitude of configurations that can act as the encoder or the decoder, but probably few are familiar with the implementation of an arithmetic coder/decoder. As part of the challenge, we will therefore release a reference arithmetic coder/decoder, allowing researchers to focus on the parts of the system where their expertise lies.
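
We do not reproduce the reference coder here, but the core mechanism is small enough to illustrate. The following is a deliberately simplified sketch using exact rational arithmetic (production coders use fixed-precision integer ranges with renormalization, and a learned model would predict the probabilities); note how the encoder and decoder share the same probability model and recover the integer symbols losslessly.

    from fractions import Fraction

    # Shared, static probability model over a tiny symbol alphabet; a
    # learned codec would predict these probabilities instead.
    PROBS = {0: Fraction(1, 2), 1: Fraction(1, 4), 2: Fraction(1, 4)}

    def cdf():
        low, table = Fraction(0), {}
        for symbol, p in PROBS.items():
            table[symbol] = (low, low + p)
            low += p
        return table

    def ac_encode(symbols):
        # Narrow the interval [low, high) by each symbol's probability slice.
        low, high, table = Fraction(0), Fraction(1), cdf()
        for s in symbols:
            span = high - low
            c_lo, c_hi = table[s]
            low, high = low + span * c_lo, low + span * c_hi
        return low  # any rational in [low, high) identifies the sequence

    def ac_decode(value, n):
        # Invert the narrowing using the same shared model.
        low, high, table = Fraction(0), Fraction(1), cdf()
        out = []
        for _ in range(n):
            span = high - low
            pos = (value - low) / span
            for s, (c_lo, c_hi) in table.items():
                if c_lo <= pos < c_hi:
                    out.append(s)
                    low, high = low + span * c_lo, low + span * c_hi
                    break
        return out

    message = [0, 2, 1, 0, 0, 2]
    assert ac_decode(ac_encode(message), len(message)) == message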
While having a compression algorithm is an interesting feat by itself, it does not mean much unless its results compare well against other algorithms and established baselines on realistic benchmarks. To ensure realism, we have collected a set of images that reflect the types of images widely available today much more faithfully than the well-established benchmarks, which rely on images from the Kodak PhotoCD (768×512 resolution) or Tecnick (around 1.44 megapixels). We will also provide performance results from current state-of-the-art compression systems, such as WebP and BPG, as baselines.

Challenge Tasks

We provide two datasets: Dataset P (“professional”) and Dataset M (“mobile”). Both contain thousands of images and were collected to be representative of images commonly found in the wild.

The challenge will allow participants to train neural networks or other methods on any amount of data (training on the data we provide should be possible, but we expect participants to also have access to additional data, such as ImageNet).
Participants will need to submit a decoder executable that can run in the provided docker environment and is capable of decompressing the submission files. We will impose reasonable compute and memory limits on the decoder executable.
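
Purely as an illustration (the actual interface is defined by the provided docker environment, and the file layout below is hypothetical), a decoder submission might be structured as a small command-line program mapping each compressed file to a reconstructed image:

    #!/usr/bin/env python3
    # Hypothetical decoder entrypoint; the file layout (*.bin in, *.png
    # out) and CLI are illustrative, not the official submission format.
    import sys
    from pathlib import Path

    def decode_file(data):
        raise NotImplementedError("the participant's decoder goes here")

    def main(in_dir, out_dir):
        out = Path(out_dir)
        out.mkdir(parents=True, exist_ok=True)
        for f in sorted(Path(in_dir).glob("*.bin")):  # one bit stream per image
            image = decode_file(f.read_bytes())       # e.g., a PIL.Image
            image.save(out / (f.stem + ".png"))

    if __name__ == "__main__":
        main(sys.argv[1], sys.argv[2])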

We will rank participants (and baseline image compression methods: WebP, JPEG 2000, and BPG) on multiple criteria: (a) decoding speed; (b) a proxy perceptual metric (e.g., MS-SSIM on the Y channel); and (c) scores provided by human raters. The overall winner will be decided by a panel whose goal is to determine the best compromise between runtime performance and bitrate savings.
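
For orientation, criterion (b) could be computed along the following lines. This is a sketch under our own assumptions (TensorFlow's tf.image.ssim_multiscale and an RGB-to-YUV luma conversion), not the official evaluation script:

    import tensorflow as tf

    def bits_per_pixel(num_compressed_bytes, height, width):
        # Rate: size of the submitted file relative to the image area.
        return num_compressed_bytes * 8.0 / (height * width)

    def ms_ssim_y(original, reconstruction):
        # Distortion proxy: MS-SSIM on the luma (Y) channel. Inputs are
        # uint8 RGB tensors of shape [batch, height, width, 3]; images
        # must be large enough for the default five MS-SSIM scales.
        orig = tf.image.rgb_to_yuv(tf.cast(original, tf.float32) / 255.0)
        rec = tf.image.rgb_to_yuv(tf.cast(reconstruction, tf.float32) / 255.0)
        return tf.image.ssim_multiscale(orig[..., :1], rec[..., :1], max_val=1.0)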



Regular Paper Track

We will have a short (4-page) regular paper track, which allows participants to share research ideas related to image compression. In addition to the papers, we will host a poster session during which authors can discuss their work in more detail.
We encourage exploratory research which shows promising results in:
● Lossy image compression
● Quantization (learning to quantize; dealing with quantization in optimization)
● Entropy minimization
● Image super-resolution for compression
● Deblurring
● Compression artifact removal
● Inpainting (and compression by inpainting)
● Generative adversarial networks
● Perceptual metrics optimization and their applications to compression
In particular, we encourage work showing how these topics can improve image compression.


Challenge Paper Track

Challenge participants are asked to submit materials detailing the algorithms they entered in the challenge; furthermore, they are invited to submit a paper describing their approach.



Important Dates

● December 22nd, 2017 Challenge announcement; the training part of the dataset released
● January 15th, 2018 The validation part of the dataset released; the online validation server made available
● April 15th, 2018 The test set is released
● April 22nd, 2018 The competition closes and participants are expected to have submitted their decoder and compressed images
● April 29th, 2018 Deadline for paper submission
● May 29th, 2018 Release of paper reviews and challenge results


Forum

Please check out the discussion forum of the challenge for announcements and discussions related to the challenge:
https://groups.google.com/forum/#!forum/clic-2018



Speakers:

Ramin Zabih (Google)
Oren Rippel (WaveOne)
Jim Bankoski (Google)
Jens Ohm (RWTH Aachen)
Touradj Ebrahimi (EPFL)


Organizers:

William T. Freeman (MIT/Google)
George Toderici (Google)
Michele Covell (Google)
Wenzhe Shi (Twitter)
Radu Timofte (ETH Zurich)
Lucas Theis (Twitter)
Johannes Ballé (Google)
Eirikur Agustsson (ETH Zurich)
Nick Johnston (Google)


Sponsors:

Google
Twitter
Netflix
Disney
Amazon

Webpage:

http://www.compression.cc/
