posted by user: alkhababs

NGEN-AI 2026 : The 2026 International Conference on Next Generation AI Systems


Link: https://ngen-ai.org/
 
When Sep 1, 2026 - Sep 4, 2026
Where Trento, Italy
Submission Deadline May 25, 2026
Notification Due Jul 15, 2026
Final Version Due Aug 10, 2026
Categories: artificial intelligence, generative AI, language models, machine learning
 

Call For Papers

CFP: The 2026 International Conference on Next-Generation AI Systems

https://ngen-ai.org/

Venue: Trento, Italy (1-4 September 2026)

Scope
We invite high-quality, original contributions that advance the theory, engineering, and real-world impact of Next Generation AI Systems—spanning federated and distributed intelligence; small, large, and generative models; agentic and interactive AI; deep learning and representation learning; explainability and transparency; trustworthy, responsible, and sustainable AI; MLOps and lifecycle management; AI systems and infrastructures; and application-driven research with societal impact.

NGEN-AI 2026 brings together researchers, practitioners, and industry leaders working on the next wave of artificial intelligence. The conference provides a platform for interdisciplinary collaboration, bridging theoretical foundations and practical implementations in intelligent, trustworthy, and sustainable AI systems deployed across diverse domains and real-world environments.

General Chairs
- Marco Roveri, University of Trento, Italy
- Sadi Alawadi, Blekinge Institute of Technology, Sweden


Important Dates
Paper Submission Deadline: May 25, 2026
Notification of Acceptance: July 15, 2026
Camera-ready Submission: August 10, 2026
All deadlines are in Anywhere on Earth (AoE) time.


Topics of Interest
The NGEN-AI conference welcomes research, experience, and vision papers that explore
foundational methods, systems, and applications of next generation AI. Topics of interest for each track include, but are not limited to, the following.
1- Federated Learning
The Federated Learning track focuses on learning paradigms that enable models to be trained collaboratively across devices and organizations without centralizing raw data. We are particularly interested in approaches that push computation closer to the edge while preserving privacy, robustness, and scalability in realistic deployments.
Contributions may address algorithms for heterogeneous and non-IID data, personalization
strategies, communication-efficient protocols, secure aggregation and differential privacy, resource-aware federated learning on constrained hardware, as well as demonstrators and case studies in domains such as healthcare, finance, smart cities, and industrial IoT.
• Architectures for cross-device and cross-silo federated learning.
• Federated optimization under non-IID, sparse, or unbalanced data distributions.
• Personalized and on-device adaptation strategies in federated settings.
• Communication-efficient FL (compression, sparsification, update scheduling).
• Privacy-preserving FL: secure aggregation, differential privacy, homomorphic encryption.
• Robustness to poisoning, backdoor, and Byzantine attacks in federated scenarios.
• Energy- and resource-aware FL on mobile, edge, and IoT devices.
• Federated learning in vertical, horizontal, and hybrid data partitioning settings.
• Federated analytics and federated evaluation techniques.
• MLOps for FL: lifecycle management, monitoring, and deployment at scale.
• Benchmarking, simulators, datasets, and reproducibility studies for FL.
• Real-world applications in healthcare, finance, smart industry, and smart cities.
• Regulatory, ethical, and governance aspects of federated and collaborative learning.


2- Small & Large Language Models and Generative AI
This track targets advances in small and large language models (SLMs and LLMs) and other
generative AI models that synthesize and reason over text, code, images, audio, and multimodal data. We welcome work on model architectures, training strategies, and deployment techniques that improve performance, controllability, safety, and efficiency of foundation and generative models in real-world settings.
Relevant contributions span from core modelling and optimization to evaluation, alignment, and application-driven studies, including systems that tightly integrate language and generative models with external tools, data sources, and complex software architectures.
• Architectures and training recipes for SLMs, LLMs, and foundation models.
• Pre-training, instruction-tuning, alignment (e.g., RLHF, DPO, preference optimization).
• Domain-specific and compact SLMs for on-device and resource-constrained settings.
• Prompt engineering, in-context learning, function calling, and tool-augmented pipelines.
• Retrieval-augmented generation and knowledge-grounded generative models.
• Generative models for text, code, images, audio, video, and multimodal content.
• Model compression, distillation, quantization, and sparsity for efficient deployment.
• Edge and on-device deployment of SLMs/LLMs and generative models.
• Safety, robustness, and red-teaming of generative systems (toxicity, hallucinations, bias).
• Evaluation methodologies, benchmarks, and human-in-the-loop assessment.
• Generative AI for scientific discovery, simulation, and data augmentation.
• Software engineering with LLMs: code generation, refactoring, testing, and verification.
• Governance, transparency, IP, and regulatory aspects of foundation and generative models.

3- Deep Learning Architectures & Representation Learning
This track focuses on advances in deep learning architectures and representation learning methods that underpin next generation AI systems. We invite contributions that improve expressiveness, robustness, interpretability, and efficiency across supervised, unsupervised, and self-supervised learning paradigms. We particularly encourage work that bridges novel architectures with deployment constraints, addresses data scarcity and bias, or opens up new application domains.
• Novel neural architectures (transformers, graph neural networks, diffusion models, etc.).
• Self-supervised, contrastive, and representation learning at scale.
• Multimodal learning and fusion of heterogeneous data sources.
• Curriculum learning, meta-learning, and continual / lifelong learning.
• Robust and certified deep learning under distribution shift and adversarial attacks.
• Interpretable and explainable deep learning methods.
• Data-centric AI: dataset curation, quality, and augmentation strategies.
• Efficient training and inference: pruning, low-rank adaptation, and sparse models.
• Neural architecture search and automated model design.
• Applications of deep learning in vision, language, time series, recommender systems, and beyond.

4- Agentic AI
This track concentrates on agentic AI systems that perceive, reason, plan, and act over extended time horizons, often in dynamic environments and in collaboration with humans or other agents. We are interested in both theoretical foundations and practical deployments of autonomous and semi-autonomous agents in digital and physical settings.
We particularly encourage submissions that connect planning and decision making with learning, perception, and interaction, and that critically examine the reliability, safety, and societal impact of agentic AI.
• Architectures for autonomous, semi-autonomous, and mixed-initiative agents.
• Planning, reasoning, and long-horizon decision making for agentic systems.
• Reinforcement learning, hierarchical RL, and model-based control for agents.
• LLM-driven agents, tool-using agents, and workflow / task orchestration.
• Multi-agent systems: coordination, negotiation, communication, and cooperation.
• Human-agent interaction, explainability, and trust in agentic AI systems.
• Safety, verification, alignment, and oversight for autonomous agents.
• Simulation environments, digital twins, and benchmarks for agentic AI.
• Agents in robotics, autonomous vehicles, logistics, smart grids, and IoT environments.
• Social, economic, and ethical implications of pervasive agentic AI.
• Engineering methodologies, software frameworks, and tooling for large-scale agent systems.
• Hybrid symbolic-subsymbolic approaches for reasoning and acting.
5- MLOps, AI Engineering & Lifecycle Management
This track addresses the engineering and operational aspects of building, deploying, and maintaining AI systems in production. It covers the full lifecycle from data and model pipelines to monitoring, governance, and socio-technical considerations in organizations.
We invite contributions that connect software engineering, DevOps, and platform engineering practices with the unique requirements of machine learning and foundation models.
• MLOps platforms and infrastructure for scalable training and deployment.
• CI/CD for ML, continuous training, and continuous evaluation.
• Data and feature management: data versioning, feature stores, and lineage tracking.
• Monitoring, observability, and incident response for AI systems.
• Model governance, risk management, and compliance (e.g., AI Act, sectoral regulation).
• Testing, debugging, and quality assurance for ML components and pipelines.
• Infrastructure for serving LLMs and generative models at scale.
• Cost- and energy-aware deployment and scheduling of AI workloads.
• Organizational processes and roles for AI/ML teams.
• Case studies and lessons learned from real-world AI production deployments.

6- Explainable AI (XAI) & Transparency
This track focuses on methods and practices that make next generation AI systems understandable, transparent, and auditable for humans. We welcome contributions that improve how AI systems explain their decisions and behaviors, enabling trust, accountability, and effective human oversight
in real-world deployments. We encourage work spanning intrinsic interpretability and post-hoc explanations, explanation quality evaluation, and human-centered design of explanations, including applications to foundation
models, agentic systems, and distributed AI settings.
• Post-hoc explanations (e.g., feature attribution, saliency, local surrogate models).
• Intrinsic interpretability and transparent model design.
• Counterfactual and contrastive explanations.
• Uncertainty estimation, calibration, and communicating confidence to users.
• Explainability for LLMs and generative AI (faithfulness, grounding, rationale analysis).
• Explainability in federated, privacy-preserving, and edge AI settings.
• Explainable decision making for agentic and multi-agent systems.
• Human-centered explanation design, usability, and user studies.
• Evaluation and benchmarking of explanations (faithfulness, robustness, usefulness).
• Auditing, debugging, and root-cause analysis for AI systems.
• Transparency documentation (e.g., model cards, datasheets) and reporting standards.
• Regulatory, ethical, and governance aspects related to transparency and explainability.

7- Trustworthy, Responsible & Sustainable AI
This track focuses on the qualities that enable next generation AI systems to be adopted and relied upon in society: trustworthiness, responsibility, and sustainability. We welcome contributions that align AI systems with human values and public expectations, ensuring they are safe, fair, transparent, and robust across real-world contexts.
We particularly encourage work that connects technical advances with governance and socio-technical practices, including evaluation methodologies and lifecycle approaches that reduce risk and environmental impact while improving accountability.
• Trustworthiness by design: safety, reliability, and robustness under distribution shift.
• Fairness, bias mitigation, and inclusive AI across populations and contexts.
• Accountability, transparency, and auditability in AI systems.
• Human values and alignment: human-centered objectives, oversight, and control.
• Responsible AI governance: policies, risk management, and compliance practices.
• Privacy, security, and protection against adversarial and data poisoning attacks.
• Evaluation frameworks, metrics, and benchmarks for trustworthy and responsible AI.
• Monitoring and lifecycle management for responsible AI in production.
• Sustainable AI: energy-efficient training/inference, green AI, and carbon-aware operation.
• Responsible data practices: provenance, consent, documentation, and data stewardship.
• Socio-technical studies of AI adoption, impact, and organizational readiness.
• Case studies and lessons learned from responsible and sustainable AI deployments.

8- AI Systems, Hardware & Edge/Cloud Infrastructures
This track focuses on systems, architectures, and hardware platforms that enable efficient and sustainable execution of next generation AI workloads. Submissions may address full-stack co-design from algorithms and compilers down to accelerators and distributed infrastructures. We particularly welcome work that bridges AI models with constraints of real-world platforms, including edge devices, heterogeneous clusters, and specialized hardware.
• Distributed and parallel systems for large-scale training and inference.
• Scheduling and placement of AI workloads across edge, fog, and cloud.
• Hardware accelerators (GPUs, TPUs, NPUs, FPGAs) and co-design for AI.
• Systems support for LLMs and foundation models (sharding, offloading, caching).
• Energy-efficient and green AI computing, including carbon-aware orchestration.
• Runtime systems, compilers, and libraries for AI workloads.
• Edge AI and embedded AI for IoT, CPS, and real-time applications.
• Resilience, fault tolerance, and reliability of AI systems and infrastructures.
• Benchmarks, performance analysis, and optimization of AI systems.

9- Applications & Societal Impact of Next Generation AI
This track brings together application-driven research and critical perspectives on the impact of next generation AI systems on individuals, organizations, and society. We welcome studies that combine technical innovation with domain insights, as well as empirical analyses of adoption, impact, and governance.
Interdisciplinary work at the intersection of AI, human-computer interaction, social sciences, and policy is particularly encouraged.
• Next generation AI applications in healthcare, finance, education, mobility, and industry.
• AI for sustainability, climate, energy, and environmental monitoring.
• Human-AI collaboration, co-creation, and augmented decision making.
• Fairness, accountability, transparency, and ethics in AI systems.
• Regulation, standards, and governance frameworks for AI.
• Socio-technical analyses of AI deployment and organizational transformation.
• User studies, field deployments, and longitudinal evaluations.
• Public sector and civic applications of AI (e-government, public services, smart cities).
• Education, upskilling, and capacity building for AI-literate societies.


Submission Types
● Long Papers (16 pages): original research with clear methodology, results, and
contributions.
● Short Papers (8 pages): concise research contributions, focused studies, and demo or artifact papers.
● Poster Papers (6 pages): brief presentations of work in progress and undergraduate
research.


We look forward to receiving your contributions and to welcoming you to NGEN-AI 2026 in Trento,
Italy!
