DSML 2026
Dependable and Secure Machine Learning

June 23, 2026
Co-located with the 56th IEEE/IFIP International Conference on Dependable Systems and Networks (DSN 2026)
Charlotte, USA

Machine Learning (ML) is increasingly deployed in high-stakes scenarios across society, such as healthcare, autonomous systems, cloud infrastructures, and cyber-physical environments. The emergence of foundation models has further expanded the role of ML, enabling systems that not only make predictions but also generate content and code and make autonomous decisions, all while interacting seamlessly with humans and heterogeneous software ecosystems.

While prior research has primarily focused on improving model performance under controlled assumptions, real-world ML systems are often exposed to unexpected conditions such as distribution shifts, accidental faults, and adversarial manipulation. These challenges are amplified in Generative AI systems, whose failure modes, such as hallucinations and prompt injection, can propagate across system boundaries and undermine overall dependability.

Moreover, ML systems commonly run on high-performance computing hardware, which is itself subject to faults and attacks. In regulated domains, the lack of systematic methods for monitoring and guaranteeing the dependability and security of ML-enabled systems remains a critical barrier to their trustworthy adoption.

These challenges call for rigorously characterizing and mitigating the unique dependability and security risks of emerging ML systems, thereby ensuring their reliable operation across diverse environments.

Workshop Goals

The DSML workshop aims to provide an open forum for researchers, practitioners, and policymakers to discuss methods, tools, and experiences for building dependable and secure ML systems, with a particular emphasis on the synergy between traditional machine learning and large-scale foundation models, including their secure integration into system-level architectures. The main goals of the workshop are to:

  • advance the understanding of dependability, security, safety, and robustness in ML-enabled systems;
  • foster research on system-level evaluation, monitoring, and failure analysis of ML and Generative AI ecosystems;
  • promote interdisciplinary discussions bridging ML, dependable computing, security, and regulation;
  • strengthen the DSML community as a reference venue within DSN for research on trustworthy ML systems.