
SAIA 2024 : Symposium on Scaling AI Assessments - Tools, Ecosystems and Business Models


Link: https://www.zertifizierte-ki.de/symposium-on-scaling-ai-assessments/
 
When Sep 30, 2024 - Oct 1, 2024
Where Cologne
Submission Deadline Jul 22, 2024
Notification Due Aug 19, 2024
Final Version Due Sep 9, 2024
Categories    artificial intelligence   AI   computer science   trustworthy ai
 

Call For Papers

This symposium aims to advance marketable AI assessments and audits for trustworthy AI. Papers and presentations are encouraged both from an operationalization perspective (including governance and business perspectives) and from an ecosystem and tools perspective (covering approaches from computer science). Topics include, but are not limited to:

Perspective: Operationalization of market-ready AI assessment
- Standardizing AI Assessments
- Risk and Vulnerability Evaluation
- Implementing Regulatory Requirements
- Business Models Based on AI Assessments

Perspective: Testing tools and implementation methods for trustworthy AI products
- Infrastructure and Automation
- Safeguarding and Assessment Methods
- Systematic Testing

Organization: Fraunhofer IAIS
Organization Committee contact: zki-symposium@iais.fraunhofer.de

For further information please visit the symposium website:
https://www.zertifizierte-ki.de/symposium-on-scaling-ai-assessments/

*Motivation*

Trustworthy AI is considered a key prerequisite for Artificial Intelligence (AI) applications. Especially against the background of European AI regulation, AI conformity assessment procedures are of particular importance, both for specific use cases and for general-purpose models. In non-regulated domains, too, the quality of AI systems is a decisive factor, as unintended behavior can lead to serious financial and reputational damage. As a result, there is a great need for AI audits and assessments, and a corresponding market is indeed forming. At the same time, there are still technical and legal challenges in conducting the required assessments, and extensive practical experience in evaluating different AI systems is lacking. Overall, the first marketable/commercial AI assessment offerings are only just emerging, and no definitive, distinct procedure for AI quality assurance has yet been established.


Against this background, two needs stand out:

1. AI assessments require further operationalization, both at the level of governance and related processes and at the system/product level. Empirical research that tests and evaluates governance frameworks, assessment criteria, AI quality KPIs, and methodologies in practice for different AI use cases is still pending.

2. Conducting AI assessments in practice requires a testing ecosystem and tool support, as many quality KPIs cannot be calculated without it. At the same time, automating such assessments is a prerequisite for scaling the corresponding business models; a small illustrative sketch follows below.
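
As a purely illustrative example of the kind of tool support meant here, the following minimal Python sketch computes one possible AI quality KPI, the demographic parity difference between two groups, and checks it against a threshold. The function, the toy data, and the threshold are assumptions made for this sketch and are not prescribed by the symposium.

    # Minimal sketch: automated check of one illustrative AI quality KPI.
    # All names, data, and the threshold below are assumptions for this example.

    def demographic_parity_difference(predictions, groups):
        """Absolute difference in positive-prediction rates between groups 'A' and 'B'."""
        rates = {}
        for g in ("A", "B"):
            preds_for_group = [p for p, grp in zip(predictions, groups) if grp == g]
            rates[g] = sum(preds_for_group) / len(preds_for_group)
        return abs(rates["A"] - rates["B"])

    if __name__ == "__main__":
        # Toy binary predictions and the group membership of each instance.
        preds = [1, 0, 1, 1, 0, 1, 0, 0]
        groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
        dpd = demographic_parity_difference(preds, groups)
        threshold = 0.2  # assumed acceptance threshold for this illustration
        print(f"demographic parity difference: {dpd:.2f} (threshold {threshold})")
        print("PASS" if dpd <= threshold else "FAIL")

In a scaled assessment offering, checks of this kind would typically run automatically as part of a larger testing pipeline rather than being invoked by hand.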
