Telephone: +49 89 3603522 569
fortiss · Landesforschungsinstitut des Freistaats Bayern
My main area of research is Adversarial Examples for Neural Networks. Adversarial Examples are a fundamental problem observed in Deep Learning, and potentially a huge safety and security issue. Plus, they're a lot of fun to work with!
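A classic illustration of the idea is the Fast Gradient Sign Method (FGSM): nudge the input a small step in the direction of the sign of the loss gradient, which is often enough to flip a network's prediction. The sketch below is a minimal, hypothetical toy (a hand-written linear "classifier", not any specific model from my work) just to show the shape of the attack.

```python
import numpy as np

def fgsm_perturb(x, grad, eps):
    """FGSM step: move x by eps in the direction of the gradient's sign."""
    return x + eps * np.sign(grad)

# Toy linear "classifier": score = w @ x, so the gradient of the score
# with respect to the input x is simply w (illustrative assumption).
w = np.array([0.5, -1.0, 2.0])
x = np.array([1.0, 1.0, 1.0])

x_adv = fgsm_perturb(x, grad=w, eps=0.1)
print(x_adv)  # each coordinate shifts by +/-0.1 depending on the gradient sign
```

In a real attack the gradient comes from backpropagating the loss through the network, and eps bounds the perturbation so it stays imperceptible to a human.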
Recently, my colleagues and I won a prize in the prestigious NeurIPS 2018 Adversarial Vision Challenge. Now I have loads of new ideas, but not enough time. So if you know about Deep Learning and are looking for a thesis topic, don't hesitate to get in touch!
Currently I don't have any open topics. However, if you are motivated and interested, just send me an email and we can work something out! Please include your CV and a link to your GitHub profile.
- Guessing Smart: Biased Sampling for Efficient Black-Box Adversarial Attacks. The IEEE International Conference on Computer Vision (ICCV), 2019
- Leveraging Semantic Embeddings for Safety-Critical Applications. The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2019
- Towards Graph Pooling by Edge Contraction. ICML 2019 Workshop on Learning and Reasoning with Graph-Structured Data, 2019
- Graph Neural Networks for Modelling Traffic Participant Interaction. IEEE Intelligent Vehicles Symposium (IV), 2019
- Bridging the Gap between Open Source Software and Vehicle Hardware for Autonomous Driving. IEEE Intelligent Vehicles Symposium (IV), 2019
- Uncertainty Estimation for Deep Neural Object Detectors in Safety-Critical Applications. International Conference on Intelligent Transportation Systems (ITSC), 2018