CoLLAs 2026: 5th Conference on Lifelong Learning Agents
Link: https://lifelong-ml.cc/Conferences/2026
Call For Papers
The Conference on Lifelong Learning Agents (CoLLAs) is an annual gathering of researchers to exchange ideas on all facets of adaptation of ML models, especially, but not limited to, the capacity for lifelong learning. This adaptation could happen during continuous training of a model or after it has already been trained, with the purpose of effectively and efficiently incorporating new data, “tasks”, or “capabilities”, and/or removing (the effect of) outdated, harmful, or unwanted data, “tasks”, or “capabilities”. The need for adaptation may naturally be sequential rather than a one-off event, though this need not be the case.
Some example research problems within the scope are:
- The conception, design, and implementation of memory within machine learning.
- Algorithms that enable effective continuous updates without forgetting old knowledge, or that enable fast recovery of important forgotten knowledge when needed.
- Improving or understanding learning and optimization of machine learning models, particularly in, but not limited to, non-stationary settings.
- Post-processing or altering (using gradients or other inference-time procedures) a model for efficient and effective adaptation.
- Post-processing a model to remove or edit the effect of specific data points or knowledge (for example, to protect user privacy, or to make models safer or even simply more accurate).
- Architectures and/or pre-training procedures that facilitate later adaptation or increase learning efficiency as models see more data, e.g. modular or compositional architectures.
- Adapting through feedback or through interaction with a (possibly multi-turn) human/agent in the loop.

In terms of terminology, the above investigations may fall under the areas of continual and lifelong learning, unlearning, meta-learning, few-shot learning, reinforcement learning, model editing, safety alignment, open-ended and open-world learning, in-context learning, test-time adaptation, and active self-improvement, among many others. Note that this list is non-exhaustive. We welcome both theoretical and empirical contributions, across a wide range of problem settings, from simple models to LLMs.

Although not restricted to the below, typical CoLLAs papers include a combination of:
- Empirical contributions
- Theory and theoretical analyses
- Applications that relate to and/or require any of the above areas
- Datasets and benchmarks that facilitate studies in the above areas
- Studies of failure modes and critical analyses of methods designed to tackle any of the above issues
- Approaches that draw inspiration from neuroscience, psychology, or biological systems

Submitted papers will be evaluated based on their novelty, technical quality, and potential impact. Experimental methods and results are expected to be reproducible, and authors are strongly encouraged to make code and data available. We also encourage submissions of proof-of-concept research that puts forward novel ideas and demonstrates potential, as well as in-depth analyses of existing methods and concepts.

Non-archival track. In addition to the main track, CoLLAs also has a non-archival track for work in progress (workshop track) or work that was recently published in another archival venue (sister-venue track). The non-archival track will follow a lightweight review process and will have a separate call for papers with its own timeline, different from that of main-track submissions. Given the lightweight review process, non-archival track submissions will be reviewed on a rolling basis.

CoLLAs 2026 will be held at the National University of Science and Technology POLITEHNICA Bucharest, Romania.