
RL4RealLife 2019 : Reinforcement Learning for Real Life Workshop at ICML 2019


Link: https://sites.google.com/view/RL4RealLife
 
When:                Jun 14, 2019
Where:               Long Beach, California
Submission Deadline: May 1, 2019
Notification Due:    May 15, 2019
Final Version Due:   May 30, 2019
Categories:          reinforcement learning
 

Call For Papers

Reinforcement learning (RL) is a general learning, predicting, and decision making paradigm. RL provides solution methods for sequential decision making problems, as well as for problems that can be transformed into sequential ones. RL connects deeply with optimization, statistics, game theory, causal inference, and sequential experimentation; overlaps largely with approximate dynamic programming and optimal control; and applies broadly in science, engineering, and the arts.
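
To make the "sequential decision making" framing concrete, below is a minimal sketch (not part of the call) of the standard agent-environment interaction loop, using tabular Q-learning with epsilon-greedy exploration. The 5-state chain environment, reward values, and hyper-parameters are illustrative assumptions only.

# Minimal sketch of the RL interaction loop: tabular Q-learning on a
# hypothetical 5-state chain. Environment and hyper-parameters are
# illustrative assumptions, not from the workshop call.
import random

N_STATES = 5          # states 0..4; reaching state 4 ends an episode
ACTIONS = [0, 1]      # 0 = move left, 1 = move right
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1

def step(state, action):
    """Assumed environment dynamics: +1 reward only at the goal state."""
    next_state = min(state + 1, N_STATES - 1) if action == 1 else max(state - 1, 0)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    done = next_state == N_STATES - 1
    return next_state, reward, done

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(200):
    state, done = 0, False
    while not done:
        # epsilon-greedy: explore with small probability, otherwise exploit
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state, reward, done = step(state, action)
        # one-step temporal-difference (Q-learning) update
        target = reward + (0.0 if done else GAMMA * max(Q[(next_state, a)] for a in ACTIONS))
        Q[(state, action)] += ALPHA * (target - Q[(state, action)])
        state = next_state

# Greedy policy learned per state (state 4 is terminal)
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)})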

RL has been making steady progress in academia recently, e.g., Atari games, AlphaGo, and visuomotor policies for robots. RL has also been applied to real world scenarios like recommender systems and neural architecture search. See a recent collection about RL applications. It is desirable to have RL systems that work in the real world with real benefits. However, RL still faces many issues, e.g., generalization, sample efficiency, and the exploration vs. exploitation dilemma. Consequently, RL is far from being widely deployed. Common, critical, and pressing questions for the RL community are then: Will RL have wide deployments? What are the issues? How do we solve them?

The goal of this workshop is to bring together researchers and practitioners from industry and academia interested in addressing practical and/or theoretical issues in applying RL to real life scenarios. The workshop will review the state of the art, clarify impactful research problems, brainstorm open challenges, share first-hand lessons and experiences from real life deployments, summarize what has worked and what has not, collect tips for people from industry looking to apply RL and for RL experts interested in applying their methods to real domains, identify potential opportunities, generate new ideas for future lines of research and development, and promote awareness and collaboration. This is not "yet another RL workshop": it is about how to successfully apply RL to real life applications. This issue has received less attention in the RL/ML/AI community and calls for immediate attention to sustain the prosperity of RL research and development.

The main goals of the workshop are to: (1) have experts share their successful stories of applying RL to real-world problems; and (2) identify research sub-areas critical for real-world applications such as reliable evaluation, benchmarking, and safety/robustness.

We invite paper submissions that successfully apply RL and related algorithms to real life applications by addressing relevant RL issues. Under the central theme of making RL work in real life scenarios, no further constraints are set, in order to facilitate open discussion and to foster creativity and imagination from the community. We will prioritize work that proposes interesting and impactful contributions. Our technical topics of interest are general, including but not limited to the concrete topics below:

RL and relevant algorithms: value-based, policy-based, model-free, model-based, online, offline, on-policy, off-policy, hierarchical, multi-agent, relational, multi-armed bandit, (linear, nonlinear, deep/neural, symbolic) representation learning, unsupervised learning, self-supervised learning, transfer learning, sim-to-real, multi-task learning, meta-learning, imitation learning, continual learning, causal inference, and reasoning;

Issues: generalization, deadly triad, sample/time/space efficiency, exploration vs. exploitation, reward specification, stability, convergence, scalability, model-based learning (model validation and model error estimation), prior knowledge, safety, interpretability, reproducibility, hyper-parameter tuning, and boilerplate code;

Applications: recommender systems, advertisements, conversational AI, business, finance, healthcare, education, robotics, autonomous driving, transportation, energy, chemical synthesis, drug design, industry control, drawing, music, and other problems in science, engineering and arts.
We warmly welcome position papers.

Speakers / Panelists

- Pieter Abbeel (Berkeley, covariant.ai)
- Craig Boutilier (Google AI)
- Emma Brunskill (Stanford)
- John Langford (Microsoft Research)
- David Silver (DeepMind)
- David Sontag (MIT)

Organizers

- Alborz Geramifard (Facebook AI)
- Lihong Li (Google AI)
- Yuxi Li (Attain.ai)
- Csaba Szepesvari (DeepMind & University of Alberta)
- Tao Wang (Apple)
