BEGIN:VCALENDAR
VERSION:2.0
PRODID:icalendar-ruby
CALSCALE:GREGORIAN
METHOD:PUBLISH
BEGIN:VTIMEZONE
TZID:Europe/Vienna
BEGIN:DAYLIGHT
DTSTART:20240331T020000
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=3
TZNAME:CEST
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:20231029T030000
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=10
TZNAME:CET
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20260424T062956Z
UID:6537c4aef39cf397215420@ist.ac.at
DTSTART;TZID=Europe/Vienna:20231031T140000
DTEND;TZID=Europe/Vienna:20231031T150000
DESCRIPTION:Speaker: Ohad Shamir\nhosted by Christoph Lampert\nAbstract: Mo
 st practical algorithms for supervised machine learning boil down to optim
 izing the average performance over a training dataset. However\, it is inc
 reasingly recognized that although the optimization objective is the same\
 , the manner in which it is optimized plays a decisive role in the propert
 ies of the resulting predictor. For example\, when training large neural n
 etworks\, there are generally many weight combinations that will perfectly
  fit the training data. However\, gradient-based training methods somehow 
 tend to reach those which\, for example\, do not overfit\; are brittle to 
 adversarially crafted examples\; or have other interesting properties. In 
 this talk\, I'll describe several recent theoretical and empirical results
  related to this question.
LOCATION:Central Bldg / O1 / Mondi 2a (I01.O1.008)\, ISTA
ORGANIZER:mailto:kharppre@ist.ac.at
SUMMARY:Ohad Shamir: ELLIS Talk: Implicit bias in machine learning
URL:https://talks-calendar.ista.ac.at/events/4531
END:VEVENT
END:VCALENDAR
