MHF-ICML 2024: ICML Workshop on Models of Human Feedback for AI Alignment
Link: https://sites.google.com/view/mhf-icml2024/home

Call For Papers
Hello everyone,
We are pleased to announce the Models of Human Feedback for AI Alignment Workshop at ICML 2024, taking place July 26 in Vienna, Austria. The workshop will discuss crucial questions for AI alignment and learning from human feedback, including how to model human feedback, how to learn from diverse human feedback, and how to ensure alignment despite misspecified human models.

Call for Papers: https://sites.google.com/view/mhf-icml2024/call-for-papers
Submission Portal: https://openreview.net/group?id=ICML.cc/2024/Workshop/MFHAIA

Key dates:
Submission deadline: May 31st AOE
Acceptance notification: June 17th
Workshop: July 26th

We invite submissions related to the theme of the workshop. Topics include but are not limited to:
Learning from Demonstrations (Inverse Reinforcement Learning, Imitation Learning, ...)
Reinforcement Learning with Human Feedback (Fine-tuning LLMs, ...)
Human-AI Alignment, AI Safety, Cooperative AI
Robotics (Human-AI Collaboration, ...)
Preference Learning, Learning to Rank (Recommendation Systems, ...)
Computational Social Choice (Preference Aggregation, ...)
Operations Research (Assortment Selection, ...)
Behavioral Economics (Bounded Rationality, ...)