
ICML 2021 Workshop: Beyond First-Order Methods in Machine Learning


Link: https://sites.google.com/view/optml-icml2021
 
When: Jul 24, 2021
Where: Virtual
Submission Deadline: Jul 2, 2021
Notification Due: Jul 9, 2021
Final Version Due: Jul 19, 2021
Categories: optimization, machine learning, deep learning, higher-order methods
 

Call For Papers

Optimization lies at the heart of many exciting developments in machine learning, statistics and signal processing. As models become more complex and datasets get larger, finding efficient, reliable and provable methods is one of the primary goals in these fields.

In the last few decades, much effort has been devoted to the development of first-order methods. These methods enjoy a low per-iteration cost, have optimal complexity, are easy to implement, and have proven effective for most machine learning applications. First-order methods, however, have significant limitations: (1) they require fine hyper-parameter tuning, (2) they do not incorporate curvature information and are therefore sensitive to ill-conditioning, and (3) they are often unable to fully exploit the power of distributed computing architectures.
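As a purely illustrative sketch (not part of the workshop material), the NumPy snippet below runs plain gradient descent on an ill-conditioned quadratic; the matrix, step size, and iteration count are assumptions chosen only to make points (1) and (2) above concrete:

```python
import numpy as np

# Illustrative only: gradient descent on f(x) = 0.5 * x^T A x,
# where A has condition number 100 (an assumed toy problem).
A = np.diag([1.0, 100.0])       # Hessian; largest eigenvalue L = 100
x = np.array([1.0, 1.0])
step = 1.0 / 100.0              # must be tuned to ~1/L; larger steps diverge
for _ in range(100):
    x = x - step * (A @ x)      # gradient of f is A x
print(x)                        # ~[0.37, 0.0]: still far from the minimizer
                                # along the poorly conditioned (flat) direction
```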

Higher-order methods, such as Newton, quasi-Newton and adaptive gradient descent methods, are extensively used in many scientific and engineering domains. At least in theory, these methods possess several attractive features: they exploit local curvature information to mitigate the effects of ill-conditioning, they avoid or diminish the need for hyper-parameter tuning, and they have enough concurrency to take advantage of distributed computing environments. Researchers have even developed stochastic versions of higher-order methods that achieve speed and scalability by incorporating curvature information in an economical and judicious manner. However, higher-order methods are often “undervalued.”
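For contrast, a minimal sketch of a Newton step on the same assumed toy quadratic (values chosen only for exposition) shows how rescaling the gradient by the inverse Hessian removes both the conditioning problem and the need for a hand-tuned step size:

```python
import numpy as np

# Illustrative only: one Newton step on f(x) = 0.5 * x^T A x.
# For a quadratic, the Newton step x - H^{-1} grad lands exactly
# at the minimizer, regardless of the conditioning of A.
A = np.diag([1.0, 100.0])            # Hessian of the toy objective
x = np.array([1.0, 1.0])
grad = A @ x
x = x - np.linalg.solve(A, grad)     # Newton step: solve H d = grad
print(x)                             # [0.0, 0.0], the exact minimizer
```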

This workshop will attempt to shed light on this statement. Topics of interest include, but are not limited to, second-order methods, adaptive gradient descent methods, regularization techniques, and techniques based on higher-order derivatives. The workshop aims to bring machine learning and optimization researchers closer together, in order to facilitate a discussion regarding underlying questions such as the following:
- Why are higher-order methods important in machine learning, and what advantages can they offer?
- What are their limitations and disadvantages?
- How should (or could) they be implemented in practice?
- Why are they not omnipresent?

Plenary Speakers:
- Stefania Bellavia (University of Florence)
- Frank E. Curtis (Lehigh University)
- Jelena Diakonikolas (UW-Madison)
- Quanquan Gu (UCLA)
- Clément Royer (Université Paris-Dauphine)
- Courtney Paquette (McGill University)
- Ashia C. Wilson (MIT)


****CALL FOR PAPERS****
We welcome submissions to the workshop under the general theme of “Beyond First-Order Optimization Methods in Machine Learning”. Topics of interest include, but are not limited to,
- Second-order methods
- Quasi-Newton methods
- Derivative-free methods
- Distributed methods beyond first-order
- Online methods beyond first-order
- Applications of methods beyond first-order to diverse problems (e.g., training deep neural networks, natural language processing, dictionary learning)

We encourage submissions that are theoretical, empirical or both.

Submissions:
Submissions should be up to 4 pages excluding references, acknowledgements, and supplementary material, and should follow the ICML format. The CMT-based review process will be double-blind to avoid potential conflicts of interest. Please see https://sites.google.com/view/optml-icml2021/cfp?authuser=0 for more details. CMT submission link: https://cmt3.research.microsoft.com/OPTICML2021.

Important Dates:
Submission deadline: July 02, 2021 (23:59 ET)
Acceptance notification: July 09, 2021
Final version due: July 19, 2021

Selection Criteria:
All submissions will be peer reviewed by the workshop’s program committee. Submissions will be evaluated on technical merit, empirical rigor, and relevance to the workshop’s focus.


Organizers:
- Albert S. Berahas (University of Michigan)
- Raghu Bollapragada (UT Austin)
- Rixon Crane (University of Queensland)
- Amir Gholami (UC Berkeley)
- J. Lyle Kim (Rice University)
- Anastasios Kyrillidis (Rice University)
- Michael W. Mahoney (UC Berkeley)
- Fred Roosta (University of Queensland)
- Rachael Tappenden (University of Canterbury)
