
MEandE-LP 2023 : 3rd International Workshop on Machine Ethics and Explainability - The Role of Logic Programming


Link: https://sites.google.com/view/meande-lp2023/
 
When Jul 9, 2023 - Jul 10, 2023
Where Imperial College London, UK
Submission Deadline May 25, 2023
Categories    artificial intelligence   machine ethics   explainability   logic programming
 

Call For Papers

=======================================================================

CALL FOR PAPERS

MEandE-LP 2023

3rd Workshop on Machine Ethics and Explainability - The Role of Logic Programming

https://sites.google.com/view/meande-lp2023

Affiliated with the 39th International Conference on Logic Programming (ICLP),

Imperial College London, UK

July 9 - 15, 2023

=======================================================================



AIMS AND SCOPE



Machine Ethics and Explainability are two topics that have attracted significant attention and concern in recent years, a concern that has manifested itself in numerous initiatives at various levels. An intrinsic relationship exists between the two. It is insufficient for an autonomous agent to behave ethically; it must also be able to explain its behavior, which requires both an ethical component and an explanation component. Conversely, explainable behavior is clearly unacceptable if it is not ethical (i.e., if it does not adhere to societal ethical norms).

In many application domains, particularly those that involve human lives and require ethical decisions, users must understand a system's recommendations well enough to explain the reasoning behind their decisions to others. One of the ultimate goals of explainable AI systems is an effective mapping between explainability and causality. Explainability is the system's ability to justify its actions in natural language to the average user; in other words, its capacity to articulate the reasons underlying its decisions.

However, when dealing with high-risk decision-making systems (those making ethical decisions), is it sufficient merely to explain the system's decisions to human users? Or should we look beyond the boundaries of the predictive model and observe cause and effect within the system?

A vast body of research on explainability attempts to clarify the output of black-box models, using a variety of approaches; some endeavor to generate logical rules as explanations. It is worth noting, however, that most methods for generating post-hoc explanations are themselves based on statistical tools, which are subject to uncertainty and error. Many post-hoc explainability techniques approximate a deep-learning black-box model with a simpler, interpretable model that can be inspected to explain the original. However, such approximate models are not provably faithful to the original model, as there is always a trade-off between explainability and fidelity.
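To make that trade-off concrete, the following minimal Python sketch trains a "global surrogate": an interpretable decision tree fitted to a black-box model's own predictions, with fidelity measured as agreement between the two. It assumes scikit-learn is available; the random-forest "black box", the synthetic data, and the tree depth are illustrative choices, not a reference to any particular system discussed at the workshop.

    # Minimal global-surrogate sketch; all models and data are illustrative.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier   # stand-in "black box"
    from sklearn.tree import DecisionTreeClassifier, export_text
    from sklearn.metrics import accuracy_score

    # Synthetic data and an opaque model to be explained.
    X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
    black_box = RandomForestClassifier(random_state=0).fit(X, y)
    y_bb = black_box.predict(X)            # labels produced by the black box

    # Train an interpretable surrogate on the black box's own predictions.
    surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y_bb)

    # Fidelity: how often the surrogate agrees with the black box.
    fidelity = accuracy_score(y_bb, surrogate.predict(X))
    print("fidelity to black box: %.2f" % fidelity)
    print(export_text(surrogate))          # human-readable, rule-like explanation

Shrinking max_depth yields shorter, more readable rules but typically lowers fidelity, which is precisely the trade-off noted above.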

Conversely, a substantial number of researchers have employed inherently interpretable approaches to develop and implement their ethical autonomous agents. Many of these approaches are based on logic programming, ranging from deontic logics to non-monotonic logics and other formalisms.

Logic Programming (LP) holds significant potential in these two burgeoning research areas, as logic rules are easily understood by humans and promote causality, which is vital for ethical decision-making.
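As a toy illustration of this point, the Python sketch below implements a hand-rolled forward chainer over Horn-style rules; the rules, facts, and predicate names are invented for this example and are not part of the call. It derives an ethical verdict together with a human-readable trace of the rules that justify it.

    # Toy rule-based ethical reasoner with a built-in explanation trace.
    # Rules are (conclusion, premises): the conclusion holds if all premises hold.
    RULES = [
        ("risk_to_human", ["affects_patient", "lacks_consent"]),
        ("forbidden",     ["risk_to_human"]),
    ]
    FACTS = {"affects_patient", "lacks_consent"}   # case-specific input facts

    def derive(facts, rules):
        """Forward chaining; records the rule that fired for each derived fact."""
        why, changed = {}, True
        while changed:
            changed = False
            for head, body in rules:
                if head not in facts and all(b in facts for b in body):
                    facts.add(head)
                    why[head] = body
                    changed = True
        return facts, why

    def explain(goal, why):
        """Unwind the derivation of `goal` into a human-readable trace."""
        if goal not in why:
            return [goal + " (given)"]
        trace = [goal + " because " + " and ".join(why[goal])]
        for premise in why[goal]:
            trace += explain(premise, why)
        return trace

    facts, why = derive(set(FACTS), RULES)
    if "forbidden" in facts:
        print("\n".join(explain("forbidden", why)))

Running it prints "forbidden because risk_to_human" followed by the premises each conclusion rests on: the explanation falls directly out of the rules, rather than being reconstructed post hoc.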

Despite the considerable interest machine ethics has received over the past decade, primarily from ethicists and AI experts, the question "Are artificial moral agents possible?" remains unanswered. Several attempts have been made to equip intelligent autonomous agents with ethical decision-making, using various approaches. However, no fully descriptive and universally accepted model of moral judgment and decision-making exists to date, and none of the solutions developed so far appears entirely convincing as a provider of trustworthy moral behavior. The same applies to explainability: although there is widespread concern about the explainability of autonomous agents, current approaches do not seem satisfactory. Many questions remain open in these two fascinating, rapidly expanding fields.

This workshop aims to convene researchers working on all aspects of machine ethics and explainability, including theoretical work, system implementations, and applications. By co-locating this workshop with ICLP, we also intend to encourage collaboration among researchers from different LP areas. This workshop offers a forum for facilitating discussions on these topics and fostering a productive exchange of ideas.

Topics of interest include, but are not limited to:

- New LP-based approaches to programming machine ethics;

- New LP-based approaches to explainability of black-box models;

- Evaluation and comparison of existing LP-based approaches;

- Approaches to verification of ethical behavior;

- LP applications in machine ethics;

- Integrating LP with methods for machine ethics;

- Integrating LP with methods for explainability;

- Neuro-symbolic AI for ethics/explainability.

SUBMISSIONS

The workshop invites two types of submissions:

- original papers, describing unpublished research;

- non-original papers, already published in formal proceedings or journals.

Original papers:

- regular papers must not exceed 14 pages (including references);

- short papers must not exceed 7 pages (including references). Short papers are particularly suitable for presenting work in progress, extended abstracts, doctoral theses, or general overviews of research projects.

Authors are requested to state clearly, in a footnote on the first page, whether their submission is original or not.

Authors are invited to submit their manuscripts in PDF via the EasyChair system at the link:

https://easychair.org/conferences/?conf=meandelp2023

Manuscripts must be formatted using the one-column CEUR-ART style (an Overleaf template is available). For more information, please see the CEUR website: http://ceur-ws.org/HOWTOSUBMIT.html.

IMPORTANT DATES

Paper submission deadline: May 25, 2023

Author Notification: June 10, 2023

Camera-ready articles due: June 15, 2023

Workshop: 9-10 July 2023

PROCEEDINGS

Authors of all accepted original contributions may opt to publish their work in formal proceedings.

Accepted non-original contributions will be given visibility on the workshop website, including a link to the original publication.

Accepted original papers will be published in the CEUR Workshop Proceedings series.

LOCATION

Imperial College London, UK

WORKSHOP CHAIRS

Abeer Dyoub (DISIM, University of L'Aquila, Italy)

Fabio Aurelio D'Asaro (Department of Human Sciences, University of Verona, Italy)

Francesca A. Lisi (DiB, University of Bari "Aldo Moro", Italy)

PROGRAM COMMITTEE

TBA
