
NeurIPS WS: Optimization for ML 2019 : NeurIPS 2019 Workshop: Beyond First Order Methods in Machine Learning


Link: https://sites.google.com/site/optneurips19/
 
When Dec 13, 2019 - Dec 14, 2019
Where Vancouver, Canada
Submission Deadline Sep 20, 2019
Notification Due Sep 30, 2019
Final Version Due Oct 31, 2019
Categories    machine learning   optimization   higher-order methods
 

Call For Papers

Optimization lies at the heart of many exciting developments in machine learning, statistics and signal processing. As models become more complex and datasets get larger, finding efficient, reliable and provable methods is one of the primary goals in these fields. 

In the last few decades, much effort has been devoted to the development of first-order methods. These methods have a low per-iteration cost and optimal complexity, are easy to implement, and have proven effective for most machine learning applications. First-order methods, however, have significant limitations: (1) they require fine hyper-parameter tuning, (2) they do not incorporate curvature information and are thus sensitive to ill-conditioning, and (3) they are often unable to fully exploit the power of distributed computing architectures. 

Higher-order methods, such as Newton, quasi-Newton and adaptive gradient descent methods, are extensively used in many scientific and engineering domains. At least in theory, these methods possess several nice features: they exploit local curvature information to mitigate the effects of ill-conditioning, they avoid or diminish the need for hyper-parameter tuning, and they have enough concurrency to take advantage of distributed computing environments. Researchers have even developed stochastic versions of higher-order methods that achieve speed and scalability by incorporating curvature information in an economical and judicious manner. Nevertheless, higher-order methods are often “undervalued.”
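As a small illustration of the curvature point above (a sketch added here, not part of the workshop materials): on an ill-conditioned quadratic, a plain gradient step must use a step size dictated by the steepest direction and so crawls along the flat one, while a Newton step rescales each direction by its curvature and jumps straight to the minimizer. All variable names below are illustrative.

```python
import numpy as np

# Ill-conditioned quadratic f(x) = 0.5 * x^T A x, minimized at x = 0.
A = np.diag([1.0, 100.0])   # condition number 100
x = np.array([1.0, 1.0])
grad = A @ x                # gradient of f at x

# First-order step: the step size 0.01 = 1/L is forced by the steep
# (curvature-100) direction, so the flat direction barely moves.
gd_step = x - 0.01 * grad            # -> [0.99, 0.0]

# Second-order (Newton) step: solving A d = grad rescales each direction
# by its curvature; for a quadratic this lands exactly on the minimizer.
newton_step = x - np.linalg.solve(A, grad)   # -> [0.0, 0.0]

print(gd_step, newton_step)
```

One Newton step does the work here that many tuned gradient steps would; the trade-off, discussed in the questions below, is the cost of forming and solving with curvature information.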

This workshop will attempt to shed light on that claim. Topics of interest include --but are not limited to-- second-order methods, adaptive gradient descent methods, regularization techniques, and techniques based on higher-order derivatives. The workshop aims to bring machine learning and optimization researchers closer together to discuss underlying questions such as the following:
- Why are higher-order methods important in machine learning, and what advantages can they offer?
- Why are they not omnipresent?
- What are their limitations and disadvantages?
- How should (or could) they be implemented in practice?

Speakers:
- Coralia Cartis (Oxford University)
- Don Goldfarb (Columbia University)
- Elad Hazan (Princeton University)
- James Martens (DeepMind)
- Katya Scheinberg (Cornell University)
- Stephen Wright (UW - Madison)

Organizers:
- Albert S. Berahas (Lehigh University)
- Anastasios Kyrillidis (Rice University)
- Michael W. Mahoney (UC Berkeley)
- Fred Roosta (University of Queensland)

CALL FOR PAPERS
We welcome submissions to the workshop under the general theme of “Beyond First-Order Optimization Methods in Machine Learning”. Topics of interest include, but are not limited to,
- Second-order methods
- Quasi-Newton methods
- Derivative-free methods
- Distributed methods beyond first-order
- Online methods beyond first-order
- Applications of methods beyond first-order to diverse domains (e.g., training deep neural networks, natural language processing, dictionary learning, etc.)

We encourage submissions that are theoretical, empirical or both.

Submissions:
Submissions should be up to 4 pages excluding references, acknowledgements, and supplementary material, and should follow NeurIPS format. The CMT-based review process will be double-blind to avoid potential conflicts of interest; submit at https://cmt3.research.microsoft.com/OPTNeurIPS2019/.

Accepted submissions will be presented as posters.

Important Dates:
Submission deadline: September 20, 2019 (23:59 ET)
Acceptance notification: September 30, 2019
Final version due: October 31, 2019

Selection Criteria:
All submissions will be peer reviewed by the workshop’s program committee. Submissions will be evaluated on technical merit, empirical evaluation, and compatibility with the workshop focus.
