
GoalsRL 2018 : 1st Workshop on Goal Specifications for Reinforcement Learning


Link: https://sites.google.com/view/goalsrl
 
When Jul 13, 2018 - Jul 15, 2018
Where Stockholm, Sweden
Submission Deadline May 1, 2018
Notification Due Jun 1, 2018
Final Version Due Jun 21, 2018
Categories    reinforcement learning   reward engineering   imitation learning   reward shaping
 

Call For Papers

We invite submissions to the 1st Workshop on Goal Specifications for Reinforcement Learning at the Federated AI Meeting 2018. The submission deadline is May 1st, 2018. The call for papers is attached below. We look forward to your submissions.

======================================================================
ICML/IJCAI/AAMAS 2018 Workshop: Goal Specifications for Reinforcement Learning
Stockholm, Sweden
https://sites.google.com/view/goalsrl
======================================================================

==================
IMPORTANT DATES
==================
Paper submission opens: April 1st, 2018
Submission deadline: May 1st, 2018
Author notification: June 1st, 2018
Camera-ready deadline: June 21st, 2018
Workshop: July 13-15, 2018

==================
ABSTRACT
==================
Reinforcement Learning (RL) agents traditionally rely on hand-designed scalar rewards to learn how to act. The more complex and diverse environments and tasks become, the more difficult it may be to engineer rewards that elicit the desired behavior. Designing rewards in multi-agent settings with adversaries or cooperative allies can be even more complicated. Experiment designers often have a goal in mind and must then reverse-engineer a reward function that is likely to lead to it. This process can be difficult, especially for non-experts, and is susceptible to reward hacking: unexpected and undesired behavior that achieves high reward but does not capture the essence of what the engineer was trying to achieve. Moreover, hand-designed reward functions may be brittle, as slight changes in the environment may yield large, and potentially unsafe, alterations in agent behavior.
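To make the reward-hacking failure mode concrete, the following is a minimal hypothetical sketch (not part of the call): a hand-designed progress reward on a 1-D corridor that an agent can exploit, earning more return by oscillating near the start than by ever reaching the goal.

    # Hypothetical toy illustration of reward hacking on a 1-D corridor.
    # The designer intends "reach position N" and hand-designs a reward:
    # +1.0 for a step toward the goal, -0.5 for a step away, +10 on first reaching N.
    # Over a long episode, oscillating near the start accumulates more return
    # than walking straight to the goal.

    N = 10           # goal position
    HORIZON = 200    # fixed episode length (no early termination in this toy setup)

    def run(policy):
        """Roll out a policy (position -> step of +1 or -1) and return total reward."""
        pos, total = 0, 0.0
        for _ in range(HORIZON):
            new_pos = max(0, min(N, pos + policy(pos)))
            total += 1.0 if new_pos > pos else (-0.5 if new_pos < pos else 0.0)
            if new_pos == N and pos < N:   # goal bonus on first arrival only
                total += 10.0
            pos = new_pos
        return total

    go_to_goal = lambda pos: +1                      # intended behavior: total 20.0
    oscillate  = lambda pos: +1 if pos == 0 else -1  # exploit: +0.5 per cycle, total 50.0

    print("go to goal:", run(go_to_goal))
    print("oscillate :", run(oscillate))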

The community has addressed these problems through many disparate approaches, including reward shaping, intrinsic rewards, hierarchical reinforcement learning, curriculum learning, and transfer learning. Another approach is to avoid designing scalar rewards altogether and instead focus on designing goals, for example through inverse reinforcement learning, imitation learning, target images, or multimodal channels such as speech and text.
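As one concrete instance of the reward-shaping line of work mentioned above, the sketch below shows potential-based shaping in the sense of Ng, Harada, and Russell (1999), F(s, a, s') = gamma * phi(s') - phi(s); the corridor and the potential function phi are hypothetical choices for illustration. With gamma = 1, the shaping reward along any trajectory telescopes to phi(s_T) - phi(s_0), so detours such as the oscillation exploit above earn no extra shaping reward.

    # Minimal sketch of potential-based reward shaping on the same hypothetical corridor.
    # F(s, a, s') = gamma * phi(s') - phi(s); the potential phi is a design choice.

    N = 10  # goal position

    def phi(pos):
        """Hypothetical potential: higher when closer to the goal."""
        return -(N - pos)

    def total_shaping(trajectory, gamma=1.0):
        """Sum of the shaping reward F over consecutive states of a trajectory."""
        return sum(gamma * phi(s2) - phi(s1) for s1, s2 in zip(trajectory, trajectory[1:]))

    straight = list(range(0, N + 1))                    # 0, 1, ..., 10
    detour   = [0, 1, 0, 1, 0] + list(range(0, N + 1))  # oscillate first, then walk to the goal

    print(total_shaping(straight))  # 10.0 == phi(goal) - phi(start)
    print(total_shaping(detour))    # 10.0: the oscillation adds no net shaping reward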

This workshop will consider all topics related to designing goals for reinforcement learning and to the problems that can arise from ill-defined goals. Submissions may include novel research, open problems in the field, and surveys. We are particularly interested in the topics of reward engineering, reward hacking, interpretability, learning from humans, and goal design using multimodal input.


==================
AREAS OF INTEREST
==================
Problems with reward design
- Robust reward functions
- Reward hacking
- Adversarial attacks on RL agents
- Generalizability of reward functions
- Communicating learned goals to humans

Methods of reward design
- Reward engineering
- Reward shaping
- Intrinsic rewards

Methods of learning rewards
- Inverse Reinforcement Learning
- Interactive learning
- Supervised learning
- Evolutionary approaches

Methods of goal design using:
- Target images
- Imitation learning
- Transfer learning
- Curriculum learning
- Hierarchical RL
- Multimodal input (speech, text, sketches, etc.)
- Multi-agent cooperative/competitive learning
- Application-related issues and solutions

==================
SUBMISSION
==================
Submissions will be double-blind and are limited to 4 pages for short papers and 8 pages for full papers, not including references and appendices. Formatting should be in ICML style. Concurrent submissions are allowed, but works that have been accepted at archival venues are discouraged.

Submission link: https://easychair.org/conferences/?conf=goalsrl2018

==================
ORGANIZERS
==================
Ashley Edwards, Georgia Institute of Technology
Himanshu Sahni, Georgia Institute of Technology
Kaushik Subramanian, Cogitai
Charles Isbell, Georgia Institute of Technology
Michael Littman, Brown University

==================
CONTACT
==================
Please address questions to: goalsrl2018@easychair.org
