
TDIS 2025: 3rd International Workshop on Testing Distributed Internet of Things Systems


Link: https://tdis.gitlab.io/tdis25/
 
When Mar 31, 2025 - Mar 31, 2025
Where Rotterdam, Netherlands
Submission Deadline Jan 24, 2025
Notification Due Feb 21, 2025
Final Version Due Mar 3, 2025
Categories: cloud computing, distributed systems, internet of things, machine learning
 

Call For Papers

The Internet of Things (IoT), cloud computing, and machine learning promise more adaptive and responsive services for our cities, homes, and infrastructures. Yet this vision of intelligent cyber-physical systems cannot be realized with centralized cloud resources alone. Such resources are simply too far away from sensor-equipped IoT devices, leading to high latencies, bandwidth bottlenecks, and unnecessary energy consumption. In addition, privacy and safety requirements often mandate distributed architectures. New distributed computing architectures have therefore emerged with edge and fog computing, bringing computation and storage closer to devices to realize in situ IoT services and, in some cases, to extend AI deployment right to the edge of the network.

The resulting heterogeneous, distributed, and dynamic environments make it difficult to implement dependable, performant, and efficient IoT systems. Furthermore, it is far less clear how IoT systems can be sufficiently tested before they are deployed to control our cities, homes, and infrastructures. Yet, continuous testing in realistic test environments is essential for such systems. For instance, IoT systems might be deployed to continuously optimize the operation of critical urban infrastructures, including public transport systems, energy grids, water networks, and medical services. New versions of such IoT systems must be evaluated thoroughly before they can be deployed and relied on. Furthermore, the behavior of such IoT systems must be tested under the expected computing environment conditions, including variations of such conditions, given the inherently unsteady nature of IoT environments.

TDIS 2025 will provide a forum for presentations and discussions of ongoing work on topics such as test frameworks, hybrid testbeds, (co-)simulation, testing automation and methodologies, monitoring, benchmarking, metrics, resilience testing, energy and emissions testing, data integrity and bias testing, test tool usability, as well as IoT applications and empirical evaluations. We welcome submissions describing initial ideas and visions just as much as reports on preliminary results, practical tools, and completed projects.

Topics of Interest
Topics of interest include but are not limited to:

Physical and hybrid IoT testbeds
- Heterogeneous and distributed IoT device testbeds
- Containerization and virtualization in IoT testing
- Scalability and flexibility in testbed design for diverse IoT scenarios

Emulation and (co-)simulation test frameworks
- Emulation frameworks that mimic real-world conditions
- Simulating realistic network conditions
- Co-simulation of IoT environment conditions and application domains

Testing practices and methodologies
- Integrating IoT testing in continuous deployment pipelines
- Automated and AI-based testing across IoT-edge-cloud environments
- Representativeness, reproducibility, and repeatability of experiments, benchmarks, and tests in edge/fog/cloud computing environments

Monitoring, benchmarking, and metrics
- (Low overhead) monitoring, tracing, and profiling for distributed IoT systems
- Benchmark suites for evaluating IoT performance
- Metrics for assessing performance, dependability, or usability of IoT applications

Testing energy efficiency, carbon awareness, and resource usage
- Techniques for measuring or modeling energy consumption in IoT systems
- Test frameworks, testbeds, and datasets for evaluating carbon-aware computer systems
- Testing of resource usage and resource management strategies in edge-to-cloud computing environments

Load and resilience testing
- Load/stress testing to ensure system robustness
- Security testing and resilience assessment in connected devices
- Techniques for automated failure injection in distributed IoT systems (e.g. implementing chaos engineering practices)

Testing data quality assurance and privacy preservation techniques
- Assessing methods to ensure data integrity and authenticity in IoT communications
- Testing the detection and mitigation of biases in data-driven IoT applications
- Evaluating privacy-preserving techniques in IoT data collection and analysis

Usability of testbeds, testing frameworks, benchmarks, and experimentation tools
- Evaluating interfaces of IoT systems
- User-centric evaluation methodologies for IoT applications
- Incorporating user feedback into IoT system development and testing

Empirical evaluations of IoT applications
- Empirical analysis of IoT system performance and user experiences
- Lessons learned from implementing and/or deploying large-scale IoT systems in real-world settings
- Cross-disciplinary studies that integrate IoT technologies into specific domains

Author Instructions and Submission Guidelines
TDIS welcomes the submission of original and novel work focusing on the workshop’s topics of interest. Papers are to be written in English and may describe initial ideas and visions, preliminary results, practical tools, and/or completed projects. Authors are encouraged to include links to datasets, code repositories, and other artifacts of their submitted work. Authors may submit either full or short papers, adhering to the following guidelines.

Full papers should be no longer than 6 pages and short papers no longer than 4 pages. The page limit includes all text, figures, tables, and references. Submissions must be prepared for A4 paper in a double-column layout with 10-pt font (Times Roman or Linux Libertine typeface). When preparing your paper submission in PDF format, it is highly recommended to use the ACM SIGPLAN templates (either LaTeX or MS Word), so that these instructions are applied automatically (for LaTeX: \documentclass[sigplan,twocolumn,review]{acmart}).
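A submission following the recommended template settings can be sketched as a minimal LaTeX skeleton (the title, author, and section content below are placeholders, not part of this call):

```latex
% Minimal sketch of a submission skeleton, assuming the ACM SIGPLAN template.
% All titles, names, and text here are placeholders.
\documentclass[sigplan,twocolumn,review]{acmart}

\begin{document}

\title{Paper Title}
\author{Author Name}
\affiliation{\institution{Institution}\country{Country}}

\begin{abstract}
Abstract text.
\end{abstract}

\maketitle

\section{Introduction}
Body text.

% References count toward the page limit.
\bibliographystyle{ACM-Reference-Format}
\bibliography{references}

\end{document}
```

The sigplan and review options produce the double-column, 10-pt A4 layout and line numbers for reviewing, so no manual formatting is needed.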

We will soon add a link to the TDIS 2025 submission site at HotCRP.

All submissions will be reviewed by our program committee following a single-blind reviewing process. Accepted papers will be published in the ACM Digital Library. At least one of the authors of every accepted paper will need to register and present the paper at the workshop.

Before submitting your work, please review the updated ACM Policy on Authorship, which contains information on the use of generative AI in preparing submissions.

Important Dates
- workshop paper submission: January 24, 2025
- notification of acceptance: February 21, 2025
- camera-ready submission: March 3, 2025
- workshop day: March 31, 2025
