
RL4RealLife 2019 : Reinforcement Learning for Real Life Workshop at ICML 2019


Link: https://sites.google.com/view/RL4RealLife
 
When: Jun 14, 2019
Where: Long Beach, California
Submission Deadline: May 1, 2019
Notification Due: May 15, 2019
Final Version Due: May 30, 2019
Categories: reinforcement learning
 

Call For Papers

Reinforcement learning (RL) is a general learning, predicting, and decision making paradigm. RL provides solution methods for sequential decision making problems, as well as for problems that can be transformed into sequential ones. RL connects deeply with optimization, statistics, game theory, causal inference, and sequential experimentation; overlaps largely with approximate dynamic programming and optimal control; and applies broadly in science, engineering, and the arts.
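As a concrete illustration of the sequential decision making setting described above (a minimal sketch for readers new to RL, not part of the call itself), the following Python snippet runs tabular Q-learning with epsilon-greedy exploration on a hypothetical five-state chain environment; all names and parameter values are illustrative assumptions.

import random

# Hypothetical 5-state chain: move left/right, reward 1 only at the far right.
N_STATES, ACTIONS = 5, [0, 1]          # 0 = left, 1 = right

def step(state, action):
    next_state = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    done = next_state == N_STATES - 1
    return next_state, reward, done

# Tabular Q-learning with epsilon-greedy exploration.
Q = [[0.0 for _ in ACTIONS] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.1, 0.95, 0.1

for episode in range(500):
    state, done = 0, False
    while not done:
        # Explore with probability epsilon, otherwise act greedily.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[state][a])
        next_state, reward, done = step(state, action)
        # One-step temporal-difference update toward reward + discounted next value.
        target = reward + (0.0 if done else gamma * max(Q[next_state]))
        Q[state][action] += alpha * (target - Q[state][action])
        state = next_state

print(Q)  # the learned action values should favor moving right in every state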

RL has been making steady progress in academia recently, e.g., Atari games, AlphaGo, and visuomotor policies for robots. RL has also been applied to real-world scenarios like recommender systems and neural architecture search. See a recent collection about RL applications. It is desirable to have RL systems that work in the real world with real benefits. However, RL still faces many issues, e.g., generalization, sample efficiency, and the exploration vs. exploitation dilemma. Consequently, RL is far from being widely deployed. Common, critical, and pressing questions for the RL community are then: Will RL have wide deployments? What are the issues? How can we solve them?

The goal of this workshop is to bring together researchers and practitioners from industry and academia interested in addressing practical and/or theoretical issues in applying RL to real life scenarios, review the state of the art, clarify impactful research problems, brainstorm open challenges, share first-hand lessons and experiences from real life deployments, summarize what has worked and what has not, collect tips for people from industry looking to apply RL and for RL experts interested in applying their methods to real domains, identify potential opportunities, generate new ideas for future lines of research and development, and promote awareness and collaboration. This is not "yet another RL workshop": it is about how to successfully apply RL to real life applications. This is a less addressed issue in the RL/ML/AI community, and it calls for immediate attention to ensure the sustainable prosperity of RL research and development.

The main goals of the workshop are to: (1) have experts share their successful stories of applying RL to real-world problems; and (2) identify research sub-areas critical for real-world applications such as reliable evaluation, benchmarking, and safety/robustness.
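To make "reliable evaluation" concrete: one commonly studied tool for judging a new policy offline, from data logged by a deployed policy, is inverse propensity scoring (IPS). The sketch below is illustrative only, uses a hypothetical logging setup with known logging probabilities, and is not a method prescribed by the workshop.

import random

def ips_estimate(logged, target_prob):
    """Inverse propensity scoring (IPS) estimate of a target policy's value
    from logged (context, action, reward, logging_prob) tuples."""
    return sum(target_prob(x, a) / p * r for x, a, r, p in logged) / len(logged)

# Hypothetical setup: contexts in {0, 1}, actions in {0, 1},
# reward is 1 when the action matches the context.
random.seed(0)
logged = []
for _ in range(10000):
    x = random.randint(0, 1)
    a = random.randint(0, 1)            # uniform logging policy, probability 0.5
    r = 1.0 if a == x else 0.0
    logged.append((x, a, r, 0.5))

# Target policy: always pick the action matching the context (deterministic).
target = lambda x, a: 1.0 if a == x else 0.0

print(ips_estimate(logged, target))     # close to 1.0, the true value of the target policy

With enough uniformly logged samples the estimate concentrates near the true value; controlling its variance (e.g., via clipping or doubly robust estimators) remains an active research question, which is exactly the kind of real-life evaluation issue this workshop targets.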

We invite paper submissions that successfully apply RL and related algorithms to real life applications while addressing the relevant RL issues. Under the central theme of making RL work in real life scenarios, no further constraints are set, to facilitate open discussion and to foster creativity and imagination from the community. We will prioritize work that proposes interesting and impactful contributions. Our technical topics of interest are broad, including but not limited to the concrete topics below:

RL and relevant algorithms: value-based, policy-based, model-free, model-based, online, offline, on-policy, off-policy, hierarchical, multi-agent, relational, multi-armed bandit, (linear, nonlinear, deep/neural, symbolic) representation learning, unsupervised learning, self-supervised learning, transfer learning, sim-to-real, multi-task learning, meta-learning, imitation learning, continual learning, causal inference, and reasoning;

Issues: generalization, deadly triad, sample/time/space efficiency, exploration vs. exploitation, reward specification, stability, convergence, scalability, model-based learning (model validation and model error estimation), prior knowledge, safety, interpretability, reproducibility, hyper-parameter tuning, and boilerplate code;

Applications: recommender systems, advertisements, conversational AI, business, finance, healthcare, education, robotics, autonomous driving, transportation, energy, chemical synthesis, drug design, industry control, drawing, music, and other problems in science, engineering and arts.
We warmly welcome position papers.

Speakers / Panelists

- Pieter Abbeel (Berkeley, covariant.ai)
- Craig Boutilier (Google AI)
- Emma Brunskill (Stanford)
- John Langford (Microsoft Research)
- David Silver (DeepMind)
- David Sontag (MIT)

Organizers

- Alborz Geramifard (Facebook AI)
- Lihong Li (Google AI)
- Yuxi Li (Attain.ai)
- Csaba Szepesvari (DeepMind & U. of Alberta)
- Tao Wang (Apple)
