DSML 2018
Dependable and Secure Machine Learning

Co-located with the 48th IEEE/IFIP International Conference on Dependable Systems and Networks (DSN 2018)
Luxembourg City, 25–28 June 2018

Machine learning (ML) is increasingly used in critical domains such as health and wellness, criminal sentencing recommendations, commerce, transportation, human capital management, entertainment, and communication. The design of ML systems has mainly focused on developing models, algorithms, and the datasets on which they are trained, with the goal of demonstrating high accuracy on specific tasks such as object recognition and classification. ML algorithms typically construct a model by training on a labeled dataset, and their performance is assessed by their accuracy in predicting labels for unseen (but often similar) test data. This methodology assumes that the training dataset is representative of the inputs the system will face in deployment. In practice, however, a wide variety of accidental and adversarially-crafted perturbations of the inputs can violate this assumption. Further, ML algorithms are often executed on special-purpose hardware accelerators, which may themselves be subject to faults. There is thus growing concern about the reliability, safety, security, and accountability of machine learning systems.
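To make the train/test assumption concrete, the following minimal sketch (plain NumPy; the hand-set logistic-regression weights, the input, and the eps budget are illustrative assumptions, not drawn from any workshop material) shows how a small gradient-sign-style perturbation can flip a confidently classified input:

    import numpy as np

    # Hand-set logistic-regression model, standing in for a classifier
    # trained on a labeled dataset (all values here are illustrative).
    w = np.array([2.0, -3.0, 1.0])
    b = -2.0

    def predict_prob(x):
        # Probability of the positive class under the linear model.
        return 1.0 / (1.0 + np.exp(-(w @ x + b)))

    # A "clean" test input that the model classifies confidently.
    x = np.array([1.0, -1.0, 0.5])
    print("clean score:", predict_prob(x))          # ~0.97 -> class 1

    # Gradient-sign-style perturbation: for this linear model the
    # gradient of the logit with respect to the input is just w, so
    # stepping against sign(w) with budget eps lowers the logit by
    # eps * sum(|w_i|) = 0.9 * 6 = 5.4, flipping the prediction.
    eps = 0.9
    x_adv = x - eps * np.sign(w)
    print("perturbed score:", predict_prob(x_adv))  # ~0.13 -> class 0

The perturbed input remains close to the original, yet the prediction changes; this gradient-sign idea underlies many of the adversarial attacks and defenses within the workshop's scope.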

The DSML workshop is intended to provide an open forum for researchers, practitioners, and regulatory experts to present and discuss innovative ideas and practical techniques and tools for producing dependable and secure ML systems. A major goal of the workshop is to draw the attention of the research community to the problem of establishing guarantees of reliability, security, safety, and robustness for systems that incorporate increasingly complex machine learning models, and to the challenge of determining whether such systems can comply with the requirements set by regulations and standards for safety-critical systems.

Topics of interest include:
  • Testing, certification, and verification of ML models and algorithms
  • Metrics for benchmarking the robustness of ML systems
  • Adversarial machine learning (attacks and defenses)
  • Resilient and repairable ML models and algorithms
  • Reliability and security of ML architectures, computing platforms, and distributed systems
  • Faults in the implementation of ML algorithms and their consequences
  • Dependability of ML accelerators and hardware platforms
  • Safety and societal impact of machine learning

Organizers

Homa Alemzadeh (University of Virginia)
Karthik Pattabiraman (University of British Columbia)
David Evans (University of Virginia)