
BabyLM Challenge 2023: CfP for the BabyLM shared task, hosted at CoNLL/CMCL 2023


Link: https://babylm.github.io/
 
When Jan 1, 2023 - Sep 1, 2023
Where CoNLL/CMCL 2023
Submission Deadline Aug 1, 2023
Categories    machine learning   natural language processing   computational linguistics   pretraining
 

Call For Papers

Announcing the BabyLM Challenge, the shared task at CoNLL/CMCL 2023!


The goal of this shared task is to encourage researchers with an interest in pretraining and/or cognitive modeling to focus their efforts on optimizing pretraining given data limitations inspired by human development. Additionally, we hope to democratize research on pretraining—which is typically thought to be practical only for large industry groups—by formulating an exciting open problem and establishing a community around it.


In the last several years, enormous effort has gone into optimizing LM pretraining at massive scales. While increasingly larger models often get the most attention, datasets have also grown by orders of magnitude. For example, Chinchilla is exposed to 1.4 trillion words during training—well over 10,000 words for every one word a 13-year-old human has encountered in their entire life.
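The arithmetic behind that comparison can be sketched as follows. The figure for a child's cumulative linguistic input by age 13 is an assumption chosen here for illustration (roughly 100 million words, consistent with common estimates), not a number from the announcement:

```python
# Back-of-the-envelope check of the Chinchilla comparison.
chinchilla_training_words = 1.4e12  # 1.4 trillion, from the announcement

# Assumption (not from the announcement): a child encounters on the
# order of 100 million words of linguistic input by age 13.
child_words_by_13 = 1e8

ratio = chinchilla_training_words / child_words_by_13
print(f"Chinchilla sees roughly {ratio:,.0f}x more words")
```

Under that assumed exposure estimate, the ratio comes out around 14,000, i.e. "well over 10,000" as the announcement states.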


Focusing on scaled-down pretraining has several potential benefits. First, small-scale pretraining can be a sandbox for developing novel techniques that improve data efficiency. These techniques could then be scaled up to the larger datasets common in applied NLP, or used to enhance current approaches to modeling low-resource languages. Second, improving our ability to train LMs on the same kinds and quantities of data that humans learn from will hopefully give us greater access to plausible cognitive models of humans and help us understand what allows humans to acquire language so efficiently.


The task has three tracks. Two of them restrict the training data to pre-released datasets of 10M and 100M words and are dedicated to exploring approaches such as architectural variations, self-supervised objectives, and/or curriculum learning. The third track restricts only the amount of text used, allowing innovation in the choice of the data, its domain, and even its modality (e.g., data from sources other than text is welcome). We will release a shared evaluation pipeline that evaluates on a variety of benchmarks and tasks, including targeted syntactic evaluations and natural language understanding.


Important dates:

January 2023: Training data released (see website for download)

March 2023: Evaluation pipeline released

July 15, 2023: Results due

August 1, 2023: Paper submissions due

Date TBA: Presentation at CoNLL


For more information, visit the BabyLM website https://babylm.github.io/ or consult our extended call for papers.
