DSML 2019
Dependable and Secure Machine Learning


Workshop Program - Monday, 24 June 2019

08:00-09:00 Registration
09:00-09:15 Welcome to DSN-DSML 2019
Session 1: Keynote Talk
09:15-10:15 Machine Learning Security & Privacy - An Industry Perspective
            Jason Martin, Intel Security Solutions Lab
10:15-10:30 Q&A
10:30-11:00 Coffee Break
Session 2: Adversarial Attacks and Defenses
11:00-11:20 Adversarial Video Captioning
            Suman Kalyan Adari, Washington Garcia, Kevin R.B. Butler
11:20-11:40 Universal Adversarial Perturbations for Speech Recognition Systems
            Paarth Neekhara, Shehzeen Samarah Hussain, Prakhar Pandey, Shlomo Dubnov, Julian McAuley, Farinaz Koushanfar
11:40-12:00 Malware Evasion Attack and Defense
            Yonghong Huang, Utkarsh Verma, Celeste Fralick, Gabriel Infante-Lopez, Brajesh Kumar, Carl Woodward
12:00-12:20 Mixed Strategy Game Model Against Data Poisoning Attacks
            Reza Samavi, Yifan Ou
12:20-12:35 Short Talk - Towards the Realistic Evaluation of Evasion Attacks using CARLA
            Cory Cornelius, Shang-Tse Chen, Jason Martin, Polo Chau
12:35-14:00 Lunch Break
Session 3: Fault Tolerant and Attack Resilient Models
14:00-14:20 NV-DNN: Towards Fault-Tolerant DNN Systems with N-Version Programming
            Hui Xu, Zhuangbin Chen, Weibin Wu, Zhi Jin, Sy-Yen Kuo, Michael R. Lyu
14:20-14:40 N-Version Machine Learning Models for Safety Critical Systems
            Fumio Machida
14:40-15:00 Novelty Detection via Network Saliency in Visual-based Deep Learning
            Valerie Chen, Man-Ki Yoon, Zhong Shao
15:00-15:20 Adversarial Profiles: Detecting Out-Distribution & Adversarial Samples in Pre-trained CNNs
            Arezoo Rajabi, Rakesh Bobba
15:20-15:40 Using Intuition from Empirical Properties to Simplify Adversarial Training Defense
            Guanxiong Liu, Issa Khalil, Abdallah Khreishah
15:40-16:00 Coffee Break
Session 4: Keynote Talk
16:00-17:00 Towards Verified Artificial Intelligence
            Sanjit A. Seshia, University of California, Berkeley
17:00-17:15 Q&A
17:15-17:30 Discussion and Closing
17:30-19:30 Conference Reception

Keynotes


Machine Learning Security & Privacy - An Industry Perspective
Jason Martin, Intel Security Solutions Lab

Speaker Bio:
Jason Martin is a Senior Staff Research Scientist in the Security Solutions Lab and manager of the Secure Intelligence Team at Intel Labs. He leads a diverse team of researchers investigating machine learning security in a way that incorporates both the latest research findings and Intel products. Jason's interests include machine learning, authentication and identity, trusted execution technology, wearable computing, mobile security, and privacy. Prior to Intel Labs, he spent several years as a security researcher performing security evaluations and penetration tests on Intel's products. Jason is a co-inventor on 19 patents and received his BS in Computer Science from the University of Illinois at Urbana-Champaign.


Towards Verified Artificial Intelligence
Sanjit A. Seshia, University of California, Berkeley

Speaker Bio:
Sanjit A. Seshia is a Professor in the Department of Electrical Engineering and Computer Sciences at the University of California, Berkeley. He received an M.S. and Ph.D. in Computer Science from Carnegie Mellon University, and a B.Tech. in Computer Science and Engineering from the Indian Institute of Technology, Bombay. His research interests are in formal methods for dependable and secure computing, with a current focus on the areas of cyber-physical systems, computer security, and robotics. He has made pioneering contributions to the areas of satisfiability modulo theories (SMT), SMT-based verification, and inductive program synthesis. He is co-author of a widely-used textbook on embedded, cyber-physical systems and has led the development of technologies for cyber-physical systems education based on formal methods. His awards and honors include a Presidential Early Career Award for Scientists and Engineers (PECASE), an Alfred P. Sloan Research Fellowship, and the Frederick Emmons Terman Award for contributions to electrical engineering and computer science education. He is a Fellow of the IEEE.