DSML 2019
Dependable and Secure Machine Learning


Workshop Program - Monday, 24 June 2019

08:00-09:00 Registration
09:00-09:15 Welcome to DSN-DSML 2019
Homa Alemzadeh, University of Virginia
Session 1: Keynote Talk
Session Chair: Rakesh Bobba, Oregon State University
09:15-10:15 Machine Learning Security & Privacy - An Industry Perspective
Jason Martin, Intel Security Solutions Lab
10:15-10:30 Q&A
10:30-11:00 Coffee Break
Session 2: Adversarial Attacks and Defenses
Session Chair: Weilin Xu, Intel Security Solutions Lab
11:00-11:20 Adversarial Video Captioning
Suman Kalyan Adari, Washington Garcia, Kevin R.B. Butler

11:20-11:40 Universal Adversarial Perturbations for Speech Recognition Systems [PDF]
Paarth Neekhara, Shehzeen Samarah Hussain, Prakhar Pandey, Shlomo Dubnov, Julian McAuley, Farinaz Koushanfar

11:40-12:00 Malware Evasion Attack and Defense [PDF]
Yonghong Huang, Utkarsh Verma, Celeste Fralick, Gabriel Infante-Lopez, Brajesh Kumar, Carl Woodward

12:00-12:20 Mixed Strategy Game Model Against Data Poisoning Attacks [PDF]
Yifan Ou, Reza Samavi

12:20-12:35 Short Talk - Towards the Realistic Evaluation of Evasion Attacks using CARLA
Cory Cornelius, Shang-tse Chen, Jason Martin, Polo Chau
12:35-14:00 Lunch Break
Session 3: Fault Tolerant and Attack Resilient Models
Session Chair: Guanpeng (Justin) Li, University of Iowa
14:00-14:20 NV-DNN: Towards Fault-Tolerant DNN Systems with N-Version Programming
Hui Xu, Zhuangbin Chen, Weibin Wu, Zhi Jin, Sy-Yen Kuo, Michael R. Lyu

14:20-14:40 N-version Machine Learning Models for Safety Critical Systems
Fumio Machida

14:40-15:00 Novelty Detection via Network Saliency in Visual-based Deep Learning [PDF]
Valerie Chen, Man-Ki Yoon, Zhong Shao

15:00-15:20 Adversarial Profiles: Detecting Out-Distribution & Adversarial Samples in Pre-trained CNNs
Arezoo Rajabi, Rakesh Bobba

15:20-15:40 Using Intuition from Empirical Properties to Simplify Adversarial Training Defense
Guanxiong Liu, Issa Khalil, Abdallah Khreishah
15:40-16:00 Coffee Break
Session 4: Keynote Talk
Session Chair: Karthik Pattabiraman, University of British Columbia
16:00-17:00 Towards Verified Artificial Intelligence
Sanjit A. Seshia, University of California, Berkeley
17:00-17:15 Q&A
17:15-17:30 Discussion and Closing
17:30-19:30 Conference Reception

Keynotes


Machine Learning Security & Privacy - An Industry Perspective
Jason Martin, Intel Security Solutions Lab

Abstract: Threats follow value. This phenomenon has driven security needs for thousands of years, and information security is no different. Machine learning technologies promise to unlock the value of data, including high-risk (ADAS), high-reward (finance), and personal (biometric) computing, requiring us to protect that emerging value from new threats. In this talk, I will describe our approach to threat modeling machine learning systems and how those threat models have led to our machine learning security and privacy research projects.

Speaker Bio: Jason Martin is a Senior Staff Research Scientist in the Security Solutions Lab and manager of the Secure Intelligence Team at Intel Labs. He leads a team of diverse researchers investigating machine learning security in a way that incorporates the latest research findings and Intel products. Jason’s interests include machine learning, authentication and identity, trusted execution technology, wearable computing, mobile security, and privacy. Prior to Intel Labs, he spent several years as a security researcher performing security evaluations and penetration tests on Intel’s products. Jason is a co-inventor on 19 patents and received his BS in Computer Science from the University of Illinois at Urbana-Champaign.


Towards Verified Artificial Intelligence
Sanjit A. Seshia, University of California, Berkeley

Abstract: The deployment of artificial intelligence (AI), particularly of systems that learn from data and experience, is rapidly expanding in our society. Verified AI is the goal of designing AI-based systems that have strong, verified assurances of correctness with respect to mathematically specified requirements. In this talk, I will consider Verified AI from a formal methods perspective. I will describe five challenges for achieving Verified AI, and five corresponding principles for addressing these challenges. I will illustrate these challenges and principles with examples and sample results from the domain of intelligent cyber-physical systems, with a particular focus on autonomous vehicles.

Speaker Bio: Sanjit A. Seshia is a Professor in the Department of Electrical Engineering and Computer Sciences at the University of California, Berkeley. He received an M.S. and Ph.D. in Computer Science from Carnegie Mellon University, and a B.Tech. in Computer Science and Engineering from the Indian Institute of Technology, Bombay. His research interests are in formal methods for dependable and secure computing, with a current focus on the areas of cyber-physical systems, computer security, and robotics. He has made pioneering contributions to the areas of satisfiability modulo theories (SMT), SMT-based verification, and inductive program synthesis. He is co-author of a widely-used textbook on embedded, cyber-physical systems and has led the development of technologies for cyber-physical systems education based on formal methods. His awards and honors include a Presidential Early Career Award for Scientists and Engineers (PECASE), an Alfred P. Sloan Research Fellowship, and the Frederick Emmons Terman Award for contributions to electrical engineering and computer science education. He is a Fellow of the IEEE.