DSML 2021
Dependable and Secure Machine Learning

Monday, 21 June 2021
Co-located with the 51st IEEE/IFIP International Conference on Dependable Systems and Networks (DSN 2021)
Taipei, Taiwan

Important Announcement

Due to COVID-19, DSML-2021 will be held virtually.

Machine learning (ML) is increasingly used in critical domains such as health and wellness, criminal sentencing recommendations, commerce, transportation, human capital management, entertainment, and communication. The design of ML systems has mainly focused on developing models, algorithms, and the datasets on which they are trained, in order to demonstrate high accuracy for specific tasks such as object recognition and classification. Machine learning algorithms typically construct a model by training on a labeled training dataset, and their performance is assessed by how accurately they predict labels for unseen (but often similar) test data. This assumes that the training dataset is representative of the inputs the system will face in deployment. In practice, however, a wide variety of unexpected perturbations on the ML inputs, both accidental and adversarially crafted, can violate this assumption. ML algorithms are also often over-confident in their predictions when processing such unexpected inputs. This makes them difficult to deploy in safety-critical settings, where one must be able to rely on the ML predictions to make decisions or revert to a failsafe mode. Further, ML algorithms are often executed on special-purpose hardware accelerators, which may themselves be subject to faults. Thus, there is growing concern regarding the reliability, safety, security, and accountability of machine learning systems.

The DSN Workshop on Dependable and Secure Machine Learning (DSML) is an open forum for researchers, practitioners, and regulatory experts to present and discuss innovative ideas and practical techniques and tools for producing dependable and secure ML systems. A major goal of the workshop is to draw the research community's attention to the problem of establishing guarantees of reliability, security, safety, and robustness for systems that incorporate increasingly complex ML models, and to the challenge of determining whether such systems can comply with the requirements for safety-critical systems. A further goal is to build a research community at the intersection of machine learning and dependable and secure computing.