Crunch, Data Conference, October 16-18, 2019 Budapest

Irina Vidal Migallón

Technical Lead - AI & Computer Vision at Siemens Mobility GmbH

Bio:

Irina is an Electrical Engineer & Biomedical Engineer who specialised in Machine Learning & Vision. Seasoned in different industries, from optical biopsy systems in France to surgical planning tools and Augmented Reality apps in the Berlin start-up scene, she now works in Siemens Mobility's Computer Vision & AI team. Even more than waking up Skynet, she's interested in the limits of Natural Intelligence and its decisions over our data.

Talk:

Using adversarial samples to robustify your Neural Network Models

Topics:
artificial intelligence
computer vision
deep learning
machine learning
Level:
Intermediate

Industrial Computer Vision systems rely on Deep Neural Networks, including in production. If we are expected to poke our code until it breaks, why should deep learning models get a free pass? We'll look at different ways to poke our models and improve them, from the point of view of a practitioner who has access to the model.
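
For illustration only (not material from the talk itself): below is a minimal sketch of one common way to "poke" a model, the Fast Gradient Sign Method, assuming a PyTorch image classifier. The names model, images, labels and epsilon are placeholders for your own network, data batch and perturbation budget.

    import torch.nn.functional as F

    def fgsm_attack(model, images, labels, epsilon=0.03):
        """Return FGSM-perturbed copies of `images` that try to raise the loss."""
        images = images.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(images), labels)
        loss.backward()
        # Step in the direction that increases the loss,
        # then clip back to the valid pixel range [0, 1].
        adversarial = images + epsilon * images.grad.sign()
        return adversarial.clamp(0.0, 1.0).detach()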

Attacks keep getting more sophisticated, but that doesn't mean practitioners cannot improve their models with the resources they have, from very basic techniques applicable from day one to sophisticated adversarial training. Much can be done to poke holes, expose biases and weaknesses, understand them, and use all of that to improve your models before letting them loose.
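
Again purely as a hedged illustration of the idea, not the speaker's implementation: one possible adversarial-training step that mixes clean and perturbed batches, reusing the fgsm_attack helper sketched above. Here model, optimizer, images and labels are placeholders for your own training setup.

    import torch.nn.functional as F

    def adversarial_training_step(model, optimizer, images, labels, epsilon=0.03):
        """One optimisation step on a 50/50 mix of clean and FGSM batches."""
        model.train()
        # Generate adversarial counterparts of the current batch.
        adv_images = fgsm_attack(model, images, labels, epsilon)
        optimizer.zero_grad()  # discard gradients accumulated by the attack
        loss = 0.5 * (F.cross_entropy(model(images), labels)
                      + F.cross_entropy(model(adv_images), labels))
        loss.backward()
        optimizer.step()
        return loss.item()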