FLLM 2024 : The 2nd International Conference on Foundation and Large Language Models
Link: https://fllm-conference.org/2024/

Call For Papers
With the emergence of foundation models (FMs) and large language models (LLMs), which are trained on massive amounts of data and adaptable to a wide range of downstream applications, artificial intelligence is undergoing a paradigm shift. BERT, T5, ChatGPT, GPT-4, Falcon 180B, Codex, DALL-E, Whisper, and CLIP now serve as the foundation for new applications ranging from computer vision to protein sequence analysis, and from speech recognition to coding. Earlier models, by contrast, were typically trained from scratch for each new task. The capacity to experiment with, examine, and understand the capabilities and potential of next-generation FMs is critical to undertaking this research and guiding its path. Nevertheless, these models remain largely inaccessible: the resources required to train them are highly concentrated in industry, and even the assets (data, code) needed to replicate their training are frequently withheld because of their commercial value. At present, mostly large technology companies such as OpenAI, Google, Facebook, and Baidu can afford to build FMs and LLMs. Despite the widely publicized use of FMs and LLMs, we still lack a comprehensive understanding of how they operate, why they underperform, and what they are even capable of, owing to their emergent qualities. To address these problems, we believe that much of the critical research on FMs and LLMs will require extensive multidisciplinary collaboration, given their fundamentally sociotechnical nature.
The International Conference on Foundation and Large Language Models (FLLM) addresses the architectures, applications, challenges, approaches, and future directions of these models. We invite the submission of original papers on all topics related to FLLMs, with special interest in but not limited to:

- Architectures and Systems
  + Transformers and Attention
  + Bidirectional Encoding
  + Autoregressive Models
  + Massive GPU Systems
  + Prompt Engineering
  + Fine-tuning
- Challenges
  + Hallucination
  + Cost of Creation and Training
  + Energy and Sustainability Issues
  + Integration
  + Safety and Trustworthiness
  + Interpretability
  + Fairness
  + Social Impact
- Future Directions
  + Generative AI
  + Explainability
  + Federated Learning for FLLM
  + Data Augmentation
- Natural Language Processing Applications
  + Generation
  + Summarization
  + Rewrite
  + Search
  + Question Answering
  + Language Comprehension and Complex Reasoning
  + Clustering and Classification
- Applications
  + Natural Language Processing
  + Communication Systems
  + Security and Privacy
  + Image Processing and Computer Vision
  + Life Sciences
  + Financial Systems

IMPORTANT DATES
Notification of Acceptance: 1 September 2024
Camera-ready Submission: 10 October 2024

ORGANIZATION COMMITTEE

General Chairs
Christian Guetl, Graz University of Technology, Graz, Austria
Jim Jansen, Qatar Computing Research Institute, HBKU, Qatar

Technical Program Chairs
Anastasija Nikiforova, Faculty of Computing, University of Latvia, Latvia
Yaser Jararweh, JUST, Jordan
Wenbo Zhu, University of Chicago, USA
Moayad Aloqaily, MBZUAI, UAE