GoalsRL 2018: 1st Workshop on Goal Specifications for Reinforcement Learning
Link: https://sites.google.com/view/goalsrl

Call For Papers
We invite submissions to the 1st Workshop on Goal Specifications for Reinforcement Learning at the Federated AI Meeting 2018. The submission deadline is May 1st, 2018. The full call for papers follows. We look forward to your submissions.
======================================================================
ICML/IJCAI/AAMAS 2018 Workshop: Goal Specifications for Reinforcement Learning
Stockholm, Sweden
https://sites.google.com/view/goalsrl
======================================================================

================== IMPORTANT DATES ==================

Paper submission opens: April 1st, 2018
Submission deadline: May 1st, 2018
Author notification: June 1st, 2018
Camera-ready deadline: June 21st, 2018
Workshop: July 13, 14, or 15, 2018

================== ABSTRACT ==================

Reinforcement Learning (RL) agents traditionally rely on hand-designed scalar rewards to learn how to act. As environments and tasks become more complex and diverse, it becomes more difficult to engineer rewards that elicit the desired behavior. Designing rewards in multi-agent settings with adversaries or cooperative allies can be more complicated still. Experiment designers often have a goal in mind and must then reverse engineer a reward function that is likely to lead to it. This process can be difficult, especially for non-experts, and is susceptible to reward hacking: unexpected and undesired behavior that achieves high reward but does not capture the essence of what the engineer was trying to achieve (a toy sketch of this failure mode appears at the end of this message). Moreover, hand-designed reward functions may be brittle, as slight changes in the environment can yield large, and potentially unsafe, changes in agent behavior.

The community has addressed these problems through many disparate approaches, including reward shaping, intrinsic rewards, hierarchical reinforcement learning, curriculum learning, and transfer learning. Another approach is to avoid designing scalar rewards altogether and instead focus on designing goals, for example through inverse reinforcement learning, imitation learning, target images, or multimodal channels such as speech and text.

This workshop will consider all topics related to designing goals for reinforcement learning and the problems that can arise from ill-defined goals. Submissions can include novel research, open problems in the field, and surveys. We are particularly interested in the topics of reward engineering, reward hacking, interpretability, learning from humans, and goal design using multimodal input.

================== AREAS OF INTEREST ==================

Problems with reward design
- Robust reward functions
- Reward hacking
- Adversarial attacks on RL agents
- Generalizability of reward functions
- Communicating learned goals to humans

Methods of reward design
- Reward engineering
- Reward shaping
- Intrinsic rewards

Methods of learning rewards
- Inverse reinforcement learning
- Interactive learning
- Supervised learning
- Evolutionary approaches

Methods of goal design using:
- Target images
- Imitation learning
- Transfer learning
- Curriculum learning
- Hierarchical RL
- Multimodal input (speech, text, sketches, etc.)
- Multi-agent cooperative/competitive learning
- Application-related issues and solutions

================== SUBMISSION ==================

Reviewing will be double-blind. Submissions are limited to 4 pages for short papers and 8 pages for full papers, not including references and appendices, and should be formatted in ICML style. Concurrent submissions are allowed, but works that have already been accepted at archival venues are discouraged.
Submission link: https://easychair.org/conferences/?conf=goalsrl2018

================== ORGANIZERS ==================

Ashley Edwards, Georgia Institute of Technology
Himanshu Sahni, Georgia Institute of Technology
Kaushik Subramanian, Cogitai
Charles Isbell, Georgia Institute of Technology
Michael Littman, Brown University

================== CONTACT ==================

Please address questions to: goalsrl2018@easychair.org
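
================== APPENDIX: A TOY SKETCH OF REWARD HACKING ==================

As a concrete illustration of the reward hacking problem described in the abstract, consider the following minimal sketch. Everything here is a hypothetical illustrative assumption, not part of the workshop materials: a 1-D corridor whose designer wants the agent to reach the rightmost goal cell, plus a naive shaping bonus of +1 whenever the agent moves closer to the goal. Because moving away costs nothing, the return-maximizing policy oscillates next to the goal and farms the bonus forever instead of finishing the task.

# reward_hacking_sketch.py -- hypothetical toy example
N = 5             # corridor cells 0..4; cell 4 is the (terminal) goal
GOAL_REWARD = 10  # one-time reward the designer actually cares about
GAMMA = 0.99      # discount factor

def step(state, action):
    """action: -1 = left, +1 = right. Returns (next_state, reward, done)."""
    nxt = min(max(state + action, 0), N - 1)
    if nxt == N - 1:
        return nxt, GOAL_REWARD, True
    # Naive (non-potential-based) shaping: reward progress, don't punish regress.
    bonus = 1.0 if (N - 1 - nxt) < (N - 1 - state) else 0.0
    return nxt, bonus, False

V = [0.0] * N     # state values; the terminal cell stays at 0

def q(s, a):
    nxt, r, done = step(s, a)
    return r + (0.0 if done else GAMMA * V[nxt])

# Value iteration on the tiny MDP.
for _ in range(2000):
    for s in range(N - 1):
        V[s] = max(q(s, -1), q(s, +1))

policy = [max((-1, +1), key=lambda a: q(s, a)) for s in range(N - 1)]
print("greedy action per cell 0..3:", policy)   # -> [1, 1, 1, -1]
# The optimal agent walks toward the goal but turns back one step short of it:
# oscillating between cells 2 and 3 earns +1 every other step, and with
# GAMMA = 0.99 that infinite stream is worth roughly 50 > GOAL_REWARD = 10.

The shaped reward was meant to speed up learning, yet the policy it induces never achieves the designer's actual goal; potential-based shaping is one standard remedy for exactly this failure.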