14 - 15 DECEMBER 2017 NUS, Singapore

Deep Learning Security Workshop

Exploring the intersection of deep learning and security

About the workshop

Exploring the bleeding-edge intersection of Deep Learning and Security



Deep learning has made huge advances and had a major impact in many areas of computer science, such as vision, speech, NLP, and robotics. Many exciting research questions lie at the intersection of security and deep learning.

First, how will these deep learning systems behave in the presence of adversaries? Research has shown that many of the state-of-the-art deep learning systems can be easily fooled by adversarial examples. We will explore fundamental questions in this area including what types of attacks are possible on deep learning systems, why they exist, and how we can defend against them.

Second, how can deep learning techniques help security applications? We will explore this area and study example security applications using deep learning techniques including program binary analysis, password security analysis, malware detection and fraud detection.

This year we will also have a Research Forum on Dec 14. The submission deadline is Nov 5, 2017.



Workshop Co-chairs


Dawn Song

Professor

University of California, Berkeley

Prateek Saxena

Assistant Professor

National University of Singapore



Take a look at the details of the Research Forum!



Research Forum Info

Speakers

Meet our invited speakers



Ian Fischer

Researcher

Google Research

John Whaley

Founder, CEO

UnifyID

Le Song

Associate Professor / Principal Engineer

Georgia Institute of Technology / Ant Financial

Liang Shi

Staff Expert, Manager of Security Data Science team

Alibaba Cloud Security

Min Ye

Senior Security Expert

Alibaba Cloud Security

Reza Shokri

Assistant Professor

National University of Singapore

Tianlong Liu

Senior Algorithm Engineer

Alibaba Cloud Security

Schedule

Our programme for the event




Networks and graphs are prevalent in many real-world applications such as online social networks, transactions on payment platforms, user-item interactions in recommendation systems, and relational information in knowledge bases. The availability of large amounts of graph data poses great new challenges. How do we represent such data to capture similarities or differences between the entities involved? How do we learn predictive models and perform reasoning over large amounts of such data? Previous deep learning models, such as CNNs and RNNs, are designed for images and sequences and are not directly applicable to graph data.

In this talk, I will present an embedding framework, called Structure2Vec, for learning representations of graph data in an end-to-end fashion. Structure2Vec provides a unified framework for integrating information from node characteristics, edge features, heterogeneous network structures, and network dynamics, and for linking them to downstream supervised, unsupervised, and reinforcement learning. I will also discuss several applications in security analytics where Structure2Vec leads to significant improvement over the previous state of the art.

Le Song
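The core idea behind Structure2Vec-style graph embedding can be sketched in a few lines of Python: each node's vector is iteratively updated from its own features and its neighbors' current embeddings, then pooled into a graph-level representation. This sketch is illustrative only, with fixed unit weights and a single scalar feature per node; the actual framework learns its parameters end-to-end.

```python
def structure2vec_embed(adj, node_feats, dim=4, iters=3):
    """Toy Structure2Vec-style embedding. adj[v] lists the neighbors
    of node v; node_feats[v] is a scalar feature. Each iteration,
    a node's embedding becomes relu(own feature + sum of neighbor
    embeddings). Real models use learned weight matrices here."""
    n = len(adj)
    mu = [[0.0] * dim for _ in range(n)]  # initial embeddings
    for _ in range(iters):
        new_mu = []
        for v in range(n):
            # aggregate current neighbor embeddings, dimension-wise
            agg = [sum(mu[u][d] for u in adj[v]) for d in range(dim)]
            new_mu.append([max(0.0, node_feats[v] + agg[d]) for d in range(dim)])
        mu = new_mu
    return mu

def graph_embed(adj, node_feats):
    """Graph-level embedding: sum-pool the node embeddings."""
    mu = structure2vec_embed(adj, node_feats)
    return [sum(m[d] for m in mu) for d in range(len(mu[0]))]
```

The pooled vector can then feed a downstream classifier, which is how graph-level prediction tasks (e.g. flagging a malicious transaction subgraph) would be wired up.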
Session I: Deep Learning for Security

Most enterprises provide Web services open to the public and are thus prone to Web attacks. Among these, SQL injection (SQLi) is a particularly serious threat that can lead to catastrophic data leakage and loss. Considerable research effort has been devoted to SQLi detection, and heuristic rule-based Web Application Firewalls (WAFs) have been developed and deployed for protection. However, subtle and carefully crafted SQLi attacks can still easily evade detection. Deep learning methods, introduced in recent years, have shown great strength in learning and describing complex semantic content. In this talk, we present a CNN-based SQLi detection implementation that can also propose the locations of suspect attack payloads within a URL request. Experimental results show that our implementation outperforms a leading method, Libinjection, as well as the current version of Alibaba Cloud WAF.

Liang Shi, Tianlong Liu, Min Ye
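As a minimal illustration of the mechanism (not the system described in the talk), a single character-level convolution filter plus max-pooling can both score a URL and localize the suspect window: the argmax of the convolution output points at the offending substring. The filter weights below are hand-set and hypothetical; a real CNN would learn many such filters from labeled traffic.

```python
# Hypothetical width-3 filter that responds to "' o"-style windows,
# a pattern typical of "' or 1=1" injection payloads.
SQLI_FILTER = [{"'": 1.0}, {" ": 0.5, "o": 0.5}, {"o": 1.0, "r": 1.0}]

def conv_score(url, filt=SQLI_FILTER):
    """Character-level 1D convolution + global max-pooling.
    Returns (pooled score, offset of the best-matching window);
    the offset is how max-pool indices can propose the location
    of a suspect payload inside the request."""
    url = url.lower()
    k = len(filt)
    scores = [sum(filt[j].get(url[i + j], 0.0) for j in range(k))
              for i in range(len(url) - k + 1)]
    best = max(range(len(scores)), key=scores.__getitem__)
    return scores[best], best
```

For example, `conv_score("/item?id=1' or 1=1")` peaks exactly at the quote character that starts the injected clause, while a benign request scores zero everywhere.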

The proliferation of interconnected sensors all around us opens up new opportunities and challenges in machine learning, security, and authentication. This talk covers our real-world experience in developing an implicit authentication platform using deep learning on sensor data and user behavior. Humans are creatures of habit, and the ways people walk, type, and move are both unique and consistent. We have developed a solution that achieves a high level of accuracy (>99.999%) based purely on passive factors, without requiring any conscious user action.

Developing an implicit authentication solution that works in the real world requires creative application of machine learning as well as new innovations and techniques. We will cover some of the challenges we have encountered in applying deep learning techniques to user authentication in the real world.

John Whaley
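A heavily simplified sketch of the idea, with hypothetical hand-picked features and an illustrative threshold (the actual platform feeds raw sensor windows to deep models): reduce an accelerometer trace to a few gait statistics and compare them to a profile captured at enrollment.

```python
import math

def gait_features(accel):
    """Hypothetical feature extractor: summarize a window of
    accelerometer magnitudes as (mean, variance, zero-crossing rate).
    The zero-crossing rate of the mean-centred signal is a crude
    proxy for step cadence."""
    n = len(accel)
    mean = sum(accel) / n
    var = sum((a - mean) ** 2 for a in accel) / n
    zc = sum(1 for a, b in zip(accel, accel[1:])
             if (a - mean) * (b - mean) < 0) / (n - 1)
    return (mean, var, zc)

def same_user(profile, window, tol=0.5):
    """Accept if the new window's features lie close to the enrolled
    profile (Euclidean distance; the tolerance is illustrative)."""
    return math.dist(profile, gait_features(window)) < tol
```

A production system would replace the hand-picked statistics with learned representations and calibrate the decision threshold against target false-accept rates.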

Lightning Talks

Poster Presentations

Session II: Security for Deep Learning

I will talk about what machine learning privacy is, and discuss how and why machine learning models leak information about the individual data records on which they were trained. My quantitative analysis will be based on the fundamental membership inference attack: given a data record and (black-box) access to a model, determine whether the record was in the model's training set. I will demonstrate how to build such inference attacks against different classification models, e.g., those trained by commercial "machine learning as a service" providers such as Google and Amazon.

Reza Shokri
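The simplest baseline behind such attacks exploits the fact that models tend to be more confident on records they were trained on. The sketch below fits a confidence cutoff on outputs from a shadow model (whose membership labels the attacker knows), standing in for the learned attack classifier discussed in the talk; all numbers are illustrative.

```python
def fit_threshold(member_conf, nonmember_conf):
    """Pick the top-class-confidence cutoff that best separates a
    shadow model's training records (member_conf) from held-out
    records (nonmember_conf). A stand-in for training a full
    attack model on shadow-model outputs."""
    candidates = sorted(set(member_conf + nonmember_conf))
    def accuracy(t):
        tp = sum(c > t for c in member_conf)       # members above cutoff
        tn = sum(c <= t for c in nonmember_conf)   # non-members below
        return (tp + tn) / (len(member_conf) + len(nonmember_conf))
    return max(candidates, key=accuracy)

def is_member(target_conf, threshold):
    """Predict 'was in the training set' for a target record, given
    the target model's confidence on it."""
    return target_conf > threshold
```

The same cutoff is then applied to the target model's confidence on the record under attack; the black-box access the talk assumes is exactly this ability to query confidences.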

Deep Learning models are vulnerable to adversarial attacks, which can reliably cause the models to misbehave, for example convincing an image classifier that an image of a cat is actually a dog. We will discuss some of the recent attacks and attempts at defending against such attacks, and look at how adversarial attacks may be harnessed to improve the robustness of Deep Learning models.

Ian Fischer
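As a toy illustration of the attack family (not the specific methods covered in the talk), here is the Fast Gradient Sign Method applied to a two-feature linear classifier; the epsilon and the weights are hypothetical, and real attacks target deep networks the same way via backpropagated gradients.

```python
def predict(x, w, b):
    """Toy linear classifier: sign of w.x + b, labels in {-1, +1}."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

def fgsm(x, w, y, eps=0.25):
    """Fast Gradient Sign Method against the linear model above.
    The gradient of a hinge-style loss w.r.t. the input is -y*w, so
    each feature is nudged by eps in the sign of that gradient --
    the direction that most increases the loss."""
    sign = lambda v: (v > 0) - (v < 0)
    return [xi + eps * sign(-y * wi) for xi, wi in zip(x, w)]
```

A correctly classified input can flip label after an imperceptibly small perturbation, which is exactly the cat-to-dog failure mode described above; adversarial training reuses such examples to harden the model.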

Lightning Talks

Register for the Deep Learning Security Workshop!



Register

Co-organizer



Sponsors


Attila Cybertech
BHA 2018
Cloak

Huawei
Insider Security
Parasoft
PayPal
SecureAge Technology

Location

  • NUS School of Computing, COM1, 13 Computing Drive, Singapore 117417
  • vivy@comp.nus.edu.sg