
Robust, Interpretable Deep Learning Systems

November 20, 2018 | 2:30pm-6:30pm
Building 46 Atrium and Singleton Auditorium


To advance further, deep learning systems need to become more transparent: they must prove they are reliable, can withstand malicious attacks, and can explain the reasoning behind their decisions, especially in safety-critical applications such as self-driving cars.

We invite MIT undergraduates, graduate students, and postdoctoral scholars to submit current research abstracts to the Quest Symposium on Robust, Interpretable Deep Learning Systems, to be held on November 20, 2018. We welcome submissions on attack and defense methods for deep neural networks, visualizations, interpretable modeling, and other methods for revealing deep network behavior, structure, sensitivities, and biases.

The event will feature faculty talks, poster sessions and refreshments.

Schedule

2:30 PM - 2:55 PM

Aleksander Madry, "Robustness and Interpretability." Singleton Auditorium.

2:55 PM - 3:25 PM

Tommi Jaakkola, "Interpretability and functional transparency." Singleton Auditorium.

3:25 PM - 4:30 PM

Stefanie Jegelka, "Robustness in GANs and Black-Box Optimization." Singleton Auditorium.

4:30 PM - 4:55 PM

David Sontag, "Challenges and Dangers of Machine Learning in Health Care." Singleton Auditorium.

4:55 PM - 5:15 PM

Luca Daniel, "Evaluating the Robustness of Neural Networks." Singleton Auditorium.

5:15 PM - 5:40 PM

Antonio Torralba, "Dissecting neural networks." Singleton Auditorium.

5:40 PM - 6:30 PM

Poster Session B. Building 46 Atrium.

Workshop Organizers

David Bau, Jun-Yan Zhu, Shibani Santurkar, Aleksander Madry, Antonio Torralba, Fern Keniston

Questions to: ridl@csail.mit.edu
Information at: ridl.csail.mit.edu