SAFE 2023: Workshop on Explainable and Safety Bounded, Fidelitous, Machine Learning for Networking @ CoNEXT 2023
Link: https://safeworkshop.github.io/

Call For Papers
This workshop will be held as part of the CoNEXT 2023 conference in Paris, 5-8 December 2023.
Machine learning techniques are becoming increasingly popular in the field of networking, offering promising solutions for network optimization, security, and management. However, the lack of transparency and interpretability in machine learning models poses challenges for understanding and trusting their decisions in critical networking scenarios. Moreover, safety and reliability are of utmost importance when deploying machine learning in real-world network environments. Control and decision-making algorithms are critical to the operation of networks, so we believe such solutions should be safety bounded and interpretable. Understanding the decisions and behaviors of machine learning models is crucial for optimizing network performance, enhancing security, and ensuring reliable network operations. Addressing this topic is crucial because network operators, managers, and administrators are reluctant to use ML in production networks given their critical and sensitive nature: outages and performance degradations can be very costly.

We invite original research contributions as well as position papers addressing, but not limited to, the following topics:
- Explainable machine learning models for network performance optimization
- Interpretable anomaly detection and intrusion detection in networking systems
- Safety considerations and techniques for robust and reliable machine learning in networking
- Fairness, accountability, and transparency in machine learning for networking
- Hybrid models combining formal methods and AI for explainability
- Explainable reinforcement learning for networking
- Explainable deep reinforcement learning for networking
- Safety bounded reinforcement learning for networking
- Explainable graph neural networks for networking
- Explainable sequential decision-making
- Constraint-based explanations for networking
- Visualizations and tools for understanding and interpreting machine learning models in networking
- Case studies and real-world applications of explainable and safety bounded machine learning in networking
- Evaluation methods for explainable machine learning
- Fidelity of explainable machine learning methods

Submission procedure:
Papers should be submitted via https://conext23-safe.hotcrp.com. For more details, please see https://safeworkshop.github.io/posts/submission/

Organising committee:
- Kamal Singh, University St-Etienne, France
- Abbas Bradai, University of Poitiers, France
- Pham Tran Anh Quang, Huawei Technologies, France
- Antonio Pescapè, University of Napoli Federico II, Italy
- Claudio Fiandrino, IMDEA Networks Institute, Madrid, Spain