DSML 2026
June 23, 2026
Machine Learning (ML) is increasingly deployed in high-stakes scenarios across society, such as healthcare, autonomous systems, cloud infrastructures, and cyber-physical environments. The emergence of foundation models has further expanded the role of ML, enabling systems that not only make predictions but also generate content, code, and autonomous decisions while interacting seamlessly with humans and heterogeneous software ecosystems. While prior research has primarily focused on improving model performance under controlled assumptions, real-world ML systems are often exposed to unexpected environmental conditions, such as distribution shifts, accidental faults, and adversarial manipulations. These challenges are amplified in Generative AI systems, which are subject to failures such as hallucinations and prompt injection that can propagate across system boundaries and undermine overall dependability. Moreover, ML systems commonly run on high-performance computing hardware, which is itself subject to faults and attacks. In regulated domains, the lack of systematic methods to monitor and guarantee the dependability and security of ML-enabled systems remains a critical barrier to their trustworthy adoption. These challenges call for rigorous characterization and mitigation of the unique dependability and security issues in emerging ML systems, so as to ensure their reliable operation across diverse environments.

Workshop Goals

The DSML workshop aims to provide an open forum for researchers, practitioners, and policymakers to discuss methods, tools, and experiences for building dependable and secure ML systems, with a particular emphasis on the synergy between traditional machine learning and large-scale foundation models, including their secure integration into system-level architectures. The main goals of the workshop are to: