
AITA 2023: AI Trustworthiness Assessment


Link: https://aita.sciencesconf.org/
 
When Mar 27, 2023 - Mar 29, 2023
Where Palo Alto, CA - USA
Submission Deadline Jan 22, 2023
Notification Due Jan 31, 2023
Final Version Due Feb 10, 2023
Categories: artificial intelligence, evaluation, trustworthiness
 

Call For Papers

The accelerated development of Artificial Intelligence (AI) points to the need to treat "Trust" as a design principle rather than an option. Moreover, the design of AI-based critical systems, such as those in avionics, mobility, defense, healthcare, finance, and critical infrastructures, requires proving their trustworthiness. AI-based critical systems must therefore be assessed across many dimensions by different parties (regulators, developers, customers, reinsurance companies, end-users) for different reasons. Whether we call it AI validation, monitoring, assessment, or auditing, the fundamental goal in all cases is to make sure the AI performs well within its operational design domain. Such assessment begins in the early stages of development, including the definition of the system's specification requirements, analysis, and design. Trust and trustworthiness assessment must be considered at every phase of the system lifecycle, including sale and deployment, updates, and maintenance. Full trustworthiness in AI systems can be expected only if the technical measures that establish trustworthiness are flanked by specifications for the governance and processes of the organizations that use and develop AI. A key issue is the application of Social Sciences and Humanities (SSH) methods and principles to handle human-AI interaction and to aid in the operationalisation of (ethical) values in design and assessment, together with information on their actual impact on trust and trustworthiness.

Thus, AI researchers and engineers are confronted with different levels of safety and security, different horizontal and vertical regulations, different (ethical) standards (including fairness and privacy), different homologation/certification processes, and different degrees of liability, which force them to examine a multitude of trade-offs and alternative solutions. In addition, they struggle with values that need to be translated into concrete standards usable in assessment. Collaboration with SSH researchers to specify these standards is a central challenge in making sure that assessments also cover the normative/ethical aspects of trustworthiness.

Judging AI-based systems merely by their accuracy percentage is highly misleading. In addition, conventional methods for testing and validating software fall short, and it is difficult even to measure test coverage in principle. Given the multi-dimensional nature of trust and trustworthiness, one of the main challenges is to establish objective attributes, such as accountability, accuracy, controllability, correctness, data quality, reliability, resilience, robustness, safety, security, transparency, explainability, fairness, and privacy, to map them onto AI processes and the system lifecycle, and to provide methods and tools to assess them. This shines a light on quality requirements ("-ilities", or non-functional requirements), which are particularly challenging in an AI system, although many of them apply to any critical system. Beyond quality requirements, assessment can also encompass risk and process considerations. The expected attributes, and the expected values for those attributes, depend on contextual elements such as the criticality of the application, the application domain of the AI-based system, the expected use, and the nature of the stakeholders involved. This means that in some contexts certain attributes will prevail, and other attributes may be added to the list. Clear specification of the non-functional requirements will help clarify these conflicts and can also spur innovation that resolves some of them, allowing more to be fulfilled at the same time.
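To make the point concrete, here is a minimal Python sketch, not part of the original call, that scores one toy classifier on three attributes at once: plain accuracy, robustness under random input perturbation, and a demographic-parity fairness gap. All names, data, and parameters are hypothetical placeholders, and the robustness and fairness measures are crude proxies chosen for illustration only.

    # Multi-dimensional scoring sketch: accuracy alone vs. a fuller report.
    # Everything here is synthetic and illustrative.
    import numpy as np

    def accuracy(y_true, y_pred):
        return float(np.mean(y_true == y_pred))

    def robustness(model, X, y, noise=0.1, trials=10):
        # Worst-case accuracy under random input perturbation: a crude
        # stand-in for real robustness testing (e.g. adversarial search).
        rng = np.random.default_rng(0)
        return min(accuracy(y, model(X + rng.normal(0, noise, X.shape)))
                   for _ in range(trials))

    def fairness_gap(y_pred, group):
        # Demographic-parity difference: spread of positive-prediction
        # rates across groups (0 means parity).
        rates = [np.mean(y_pred[group == g]) for g in np.unique(group)]
        return float(max(rates) - min(rates))

    # Toy threshold "model" and synthetic data.
    model = lambda X: (X[:, 0] > 0).astype(int)
    rng = np.random.default_rng(42)
    X = rng.normal(size=(500, 3))
    y = (X[:, 0] + 0.1 * rng.normal(size=500) > 0).astype(int)
    group = rng.integers(0, 2, size=500)

    print({
        "accuracy": accuracy(y, model(X)),
        "robustness": robustness(model, X, y),
        "fairness_gap": fairness_gap(model(X), group),
    })

A system can look excellent on the first number and poor on the other two, which is precisely why the call argues for multi-dimensional assessment.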

The goal of this symposium is to establish and grow a community of researchers and practitioners for AI trustworthiness assessment, drawing on AI sciences, system and software engineering, metrology, and SSH. The symposium aims to explore innovative approaches, metrics, and/or methods proposed by academia or industry to assess the trust and trustworthiness of AI-based critical systems, with a particular focus on (but not limited to) the following questions:

- How can we qualify datasets according to the expected trustworthiness requirements of the resulting AI-based critical system?
- How can we define appropriate quantitative performance indicators and generate test examples to feed into the AI (e.g. corner cases, synthetic data)? (A minimal sketch of corner-case generation follows this list.)
- How can we characterize or evaluate AI systems according to their potential risks and vulnerabilities?
- How can non-functional requirements such as accountability and controllability be evaluated (quantitatively)?
- How could interpretability and explainability algorithms be evaluated from both technical and end-user perspectives?
- How do metrics of capability and generality, and the trade-offs with performance affect trust and/or trustworthiness?
- How can we define suitable processes and governance mechanisms in organizations that develop and deploy AI systems?
- How can we leverage pilot assessments to develop systematic evaluation techniques for AI trustworthiness?
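As a small illustration of the second question above, the following sketch (again hypothetical, not from the call) enumerates corner-case test inputs by combining the boundary values of a declared operational design domain; each boundary point would then be fed to the system under test alongside synthetic in-distribution data.

    # Corner-case generation sketch: all combinations of feature extremes.
    # The operational design domain (ODD) below is a made-up example.
    import itertools

    ODD = {
        "speed_kmh": (0.0, 130.0),
        "rain_mm_h": (0.0, 50.0),
        "light_lux": (1.0, 100000.0),
    }

    def corner_cases(odd):
        # Yields 2**n boundary points for n features.
        names = list(odd)
        for combo in itertools.product(*(odd[n] for n in names)):
            yield dict(zip(names, combo))

    for case in corner_cases(ODD):
        print(case)  # one boundary test input per line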



Related Resources

ATRACC 2024   AAAI Fall Symposium: AI Trustworthiness and Risk Assessment for Challenged Contexts
IEEE-Ei/Scopus-ITCC 2025   2025 5th International Conference on Information Technology and Cloud Computing (ITCC 2025)-EI Compendex
AITA 2024   2nd International Conference on Artificial Intelligence: Theory and Applications
SPIE-Ei/Scopus-DMNLP 2025   2025 2nd International Conference on Data Mining and Natural Language Processing (DMNLP 2025)-EI Compendex&Scopus
FPC 2025   Foresight Practitioner Conference 2025
IEEE-Ei/Scopus-CNIOT 2025   2025 IEEE 6th International Conference on Computing, Networks and Internet of Things (CNIOT 2025) -EI Compendex
AI in Evidence Synthesis 2025   AI in Evidence Synthesis (Cochrane Evidence Synthesis and Methods)
AMLDS 2025   IEEE--2025 International Conference on Advanced Machine Learning and Data Science
Topical collection Springer 2025   CFP: Sense-Making and Collective Virtues among AI Innovators. Aligning Shared Concepts and Common Goals
IJCNN 2025   International Joint Conference on Neural Networks