HEART 2025 : International Symposium on Highly Efficient Accelerators and Reconfigurable Technologies
Link: http://www.isheart.org/
Call For Papers
International Symposium on Highly Efficient Accelerators
and Reconfigurable Technologies (HEART)

May 26-28, 2025, Kumamoto City, Japan
Submission due: Feb 17, 2025
https://sites.google.com/view/heart2025

The International Symposium on Highly Efficient Accelerators and Reconfigurable Technologies (HEART) is a forum to present and discuss new research on computing systems that utilize acceleration technology. The main theme of HEART is achieving high efficiency with accelerators. Highly efficient performance acceleration is in strong demand in computing domains such as high-performance computing and data centers. HEART 2025 focuses on power, energy, and algorithmic efficiency for AI technologies such as LLMs (large language models) and beyond, as well as technologies related to quantum computing.

[ Topics of Interest ]

- Architectures for Efficient Acceleration
  - Novel systems/platforms based on FPGAs, GPUs, and other devices
  - Heterogeneous processor architectures and systems for high performance and/or low power
  - Domain-specific architectures
  - Accelerator and system-level architectures for LLMs
  - Edge technologies for LLMs
  - Neuromorphic computing and architectures
  - Fault-Tolerant Quantum Computer (FTQC) architectures, systems, and simulations
  - Novel systems for quantum computing: qubit simulation, operation, and error correction
- Design Methods and Tools for Efficient Acceleration
  - Programming paradigms, languages, and frameworks
  - Compilers and high-level synthesis tools
  - Runtime methodologies for heterogeneous systems
  - Performance evaluation and analysis
  - LLM-based design methods
  - AI-supported code generation and debugging
- Applications and Systems
  - Application examples that benefit greatly from efficient acceleration
  - Complete systems showing increased energy efficiency and/or performance
  - Comparisons between accelerator technologies
  - AI for science with higher productivity
  - Efficient models and algorithms for LLMs
  - Applications with efficient LLM/neuromorphic accelerators
  - Benchmarks for ML/LLMs

[ Important Dates ]

Submission due: Feb. 17, 2025
Notification: Mar. 10, 2025
Conference dates: May 26-28, 2025

[ Organizing Committee ]

General chair: Yasunori Osana (Kumamoto U.)
Vice co-chairs: Kentaro Sano (RIKEN), Yoshiki Yamaguchi (U. Tsukuba)
Program co-chairs: Tomohiro Ueno (RIKEN), Ameer Abdelhadi (McMaster U.), Dirk Koch (Heidelberg U.)
Publication chair: Yukinori Sato (Toyohashi U. of Tech)
Workshop co-chairs: Akram Ben Ahmed (AIST JP), Takefumi Miyoshi (QuEL, Inc. & e-trees Japan, Inc.)
Financial chair: Takeshi Ohkawa (Kumamoto U.)