BEGIN:VCALENDAR
VERSION:2.0
PRODID:icalendar-ruby
CALSCALE:GREGORIAN
METHOD:PUBLISH
BEGIN:VTIMEZONE
TZID:Europe/Vienna
BEGIN:DAYLIGHT
DTSTART:20220327T030000
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=3
TZNAME:CEST
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:20211031T020000
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=10
TZNAME:CET
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20260404T224642Z
UID:1637766000@ist.ac.at
DTSTART;TZID=Europe/Vienna:20211124T160000
DTEND;TZID=Europe/Vienna:20211124T180000
DESCRIPTION:Speaker: Qi Lei\nhosted by Marco Mondelli\nAbstract: Modern mac
 hine learning models are transforming applications in various domains at t
 he expense of a large amount of hand-labeled data. In contrast\, humans an
 d animals first establish their concepts or impressions from data observat
 ions. The learned concepts then help them to learn specific tasks with min
 imal external instructions. Accordingly\, we argue that deep representatio
 n learning seeks a similar procedure: 1) to learn a data representation th
 at filters out irrelevant information from the data\; 2) to transfer the d
 ata representation to downstream tasks with few labeled samples and simple
  models. In this talk\, we study two forms of representation learning: sup
 ervised pre-training from multiple tasks and self-supervised learning. Supe
 rvised pre-training uses a large labeled source dataset to learn a represe
 ntation\, then trains a simple (linear) classifier on top of the represent
 ation. We prove that supervised pre-training can pool the data from all so
 urce tasks to learn a good representation that transfers to downstream tas
 ks (possibly with covariate shift) with few labeled examples. We extensive
 ly study different settings where the representation reduces the model cap
 acity in various ways. Self-supervised learning creates auxiliary pretext 
 tasks that do not require labeled data to learn representations. These pre
 text tasks are created solely using input features\, such as predicting a 
 missing image patch\, recovering the color channels of an image\, or predi
 cting missing words. Surprisingly\, predicting this known information help
 s in learning a representation useful for downstream tasks. We prove that 
 under an approximate conditional independence assumption\, self-supervised
  learning provably learns representations that linearly separate downstrea
 m targets. For both frameworks\, representation learning provably and dras
 tically reduces sample complexity for downstream tasks.
LOCATION:Zoom Link: https://istaustria.zoom.us/j/98066215937?pwd=YmZoWDAzME
 13dC9LU0Jwc1kzWVphQT09  Meeting ID: 980 6621 5937 Passcode: 177564\, ISTA
SUMMARY:Qi Lei: Provable Representation Learning: The Importance of Task Di
 versity and Pretext Tasks
URL:https://talks-calendar.ista.ac.at/events/3329
END:VEVENT
END:VCALENDAR
