BEGIN:VCALENDAR
VERSION:2.0
PRODID:icalendar-ruby
CALSCALE:GREGORIAN
METHOD:PUBLISH
BEGIN:VTIMEZONE
TZID:Europe/Vienna
BEGIN:DAYLIGHT
DTSTART:20180325T030000
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=3
TZNAME:CEST
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:20181028T020000
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=10
TZNAME:CET
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20260403T220441Z
UID:5b4ca7ff7905b326608901@ist.ac.at
DTSTART;TZID=Europe/Vienna:20180727T153000
DTEND;TZID=Europe/Vienna:20180727T163000
DESCRIPTION:Speaker: Susanne Still\nhosted by Nick Barton\nAbstract: Learni
 ng reveals itself to the world through actions\, which might reflect an ag
 ent's underlying goals. We know that when a specific goal is given in a we
 ll defined context\, then action strategies can be found as solutions to a
 n optimization problem. But goals are largely context-dependent\, and obje
 ctive functions often have to be constructed by hand\, and typically do no
 t generalize well. Clearly this approach is deeply problematic if we want 
 to understand learning and behavior: we should not have to make goals incr
 easingly complicated by hand\, to engineer complex behavior. But are there
  any simple\, over-arching principles governing agents in the physical wor
 ld? Perhaps the most basic\, and invariant\, agent "goal" is self-continuat
 ion\, and a foundation for achieving that goal is maintaining a positive f
 ree energy balance. But can agents that are driven by nothing other than pu
 shing thermodynamic limits come up with information processing strategies 
 comparable to those we use for scientific reasoning? Surprisingly\, the ans
 wer is yes. Recent developments in far-from-equilibrium thermodynamics hav
 e allowed us to understand how optimizing for the efficient use of energy 
 leads to predictive inference. To allow for minimal dissipation\, an agent
  must retain predictive information [Still et al. PRL 109\, 120604 (2012)]
 . We can find a more general version of the same principle by modeling an 
 observer as part of an information engine. We ask: how should the observer
  best represent available observations to minimize the engine's overall th
 ermodynamic bill? We find that dissipation is proportional to the irreleva
 nt information kept by the observer. Thus\, pushing to minimize dissipatio
 n leads us to two "rules" for information processing: (i) retain all releva
 nt\, predictive information\, and (ii) retain as little as possible beyond
  that. This insight allows us to derive\, from a very simple physical argu
 ment\, an algorithm that is widely used for lossy compression and machine 
 learning\, coined "Information Bottleneck method". Curiously\, Tishby et al
 . recently argued that this same encoding strategy might be reflected in d
 eep neural networks. In summary\, the path may now be paved for deriving c
 omplex learning strategies and behavior straight from simple fundamental p
 hysical limits. Because these limits apply also to quantum systems\, this 
 physically driven approach may offer an entirely new window into quantum m
 achine learning.
LOCATION:Meeting room 1st floor / Central Bldg. (I01.1OG - Zentralgebäude)
 \, ISTA
ORGANIZER:mailto:abonvent@ist.ac.at
SUMMARY:Susanne Still: Learning strategies from fundamental physical limits
URL:https://talks-calendar.ista.ac.at/events/1324
END:VEVENT
END:VCALENDAR
