PBDL 2021: 3rd ICCV Workshop on Physics Based Vision meets Deep Learning
Link: https://pbdl-ws.github.io/pbdl2021/callforpapers.html

Call For Papers
Following the success of the 2nd ICCV Workshop on Physics Based Vision Meets Deep Learning (PBDL2019), we propose the 3rd workshop, with the same title and topics, at ICCV 2021. The goal is to encourage the interplay between physics based vision and deep learning. Physics based vision aims to invert the image formation process to recover scene properties, such as shape, reflectance, light distribution, and medium properties, from images. In recent years, deep learning has shown promising improvements on various vision tasks. When physics based vision meets deep learning, we expect mutual benefits.
We welcome submissions of new methods for classic physics based vision problems, but preference will be given to novel insights gained by applying deep learning techniques. Relevant topics include, but are not limited to, deep learning applied to:
• Photometric 3D reconstruction
• Radiometric modeling/calibration of cameras
• Color constancy
• Illumination analysis and estimation
• Reflectance modeling, fitting, and analysis
• Forward/inverse rendering
• Material recognition and classification
• Transparency and multi-layer imaging
• Reflection removal
• Intrinsic image decomposition
• Light field imaging
• Multispectral/hyperspectral capture, modeling, and analysis
• Vision in bad weather (dehazing, deraining, etc.)
• Structured light techniques (sensors, BRDF measurement and analysis)
• ToF sensors and their applications

Paper submission is through CMT: https://cmt3.research.microsoft.com/pbdl2021

The paper format is the same as the ICCV 2021 submission format. Papers that violate anonymity, do not use the ICCV submission template, or exceed 8 pages (excluding references) will be rejected without review. Accepted papers will appear in the proceedings of the ICCV 2021 workshops. In submitting a manuscript to this workshop, the authors acknowledge that no paper substantially similar in content has been submitted to another workshop or conference during the review period.