BEGIN:VCALENDAR
VERSION:2.0
PRODID:icalendar-ruby
CALSCALE:GREGORIAN
METHOD:PUBLISH
BEGIN:VTIMEZONE
TZID:Europe/Vienna
BEGIN:DAYLIGHT
DTSTART:20250330T030000
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=3
TZNAME:CEST
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:20251026T020000
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=10
TZNAME:CET
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20260424T143351Z
UID:1743422400@ist.ac.at
DTSTART;TZID=Europe/Vienna:20250331T140000
DTEND;TZID=Europe/Vienna:20250331T150000
DESCRIPTION:Speaker: Bernd Prach\nhosted by Michael Sammler\nAbstract: Desp
 ite generating remarkable results in various computer vision tasks\, deep
  learning comes with some surprising shortcomings. For example\, tiny per
 turbations\, often imperceptible to the human eye\, can completely change
  the predictions of image classifiers. Despite a decade of research\, the
  field has made limited progress in developing image classifiers that are
  both accurate and robust. This thesis aims to address this gap. As our
  first contribution\, we simplify the process of training certifiably rob
 ust image classifiers. We do this by designing a convolutional layer that
  does not require executing an iterative procedure in every forward pass
 \, but relies on an explicit bound instead. We also propose a loss funct
 ion that allows optimizing for a particular margin more precisely. Next
 \, we provide an overview and comparison of various methods that create r
 obust image classifiers by constraining the Lipschitz constant. This is
  important since generally longer training times and more parameters imp
 rove the performance of robust classifiers\, making it challenging to de
 termine the most practical and effective methods from the existing liter
 ature. In 1-Lipschitz classification\, the performance of current method
 s is still much worse than what we expect on the simple tasks we conside
 r. Therefore\, we next investigate potential causes of this shortcoming.
  We first consider the role of the activation function. We prove a theor
 etical shortcoming of the commonly used activation function\, and provid
 e an alternative without it. However\, this theoretical improvement bare
 ly translates to the empirical performance of robust classifiers\, sugge
 sting a different bottleneck. Therefore\, in the final part\, we study h
 ow the performance depends on the amount of training data. We prove that
  in the worst case\, we might require far more data to train a robust cl
 assifier than a normal one. We furthermore find that the amount of train
 ing data is a key determinant of the performance current methods achieve
  on popular datasets. Additionally\, we show that linear subspaces exist
  with tiny data variance\, and yet we can still train very accurate clas
 sifiers after projecting into those subspaces. This shows that on the da
 tasets considered\, enforcing robustness in classification makes the tas
 k strictly more challenging.
LOCATION:Office Bldg West / Ground floor / Heinzel Seminar Room (I21.EG.101
 ) and Zoom\, ISTA
SUMMARY:Bernd Prach: Thesis Defense: Robust Image Classification with 1-Lip
 schitz Networks
URL:https://talks-calendar.ista.ac.at/events/5675
END:VEVENT
END:VCALENDAR
