BEGIN:VCALENDAR
VERSION:2.0
PRODID:icalendar-ruby
CALSCALE:GREGORIAN
METHOD:PUBLISH
BEGIN:VTIMEZONE
TZID:Europe/Vienna
BEGIN:DAYLIGHT
DTSTART:20260329T030000
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=3
TZNAME:CEST
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:20261025T020000
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=10
TZNAME:CET
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20260424T081435Z
UID:1776672000@ist.ac.at
DTSTART;TZID=Europe/Vienna:20260420T100000
DTEND;TZID=Europe/Vienna:20260420T110000
DESCRIPTION:Speaker: Eugenia Iofinova\nhosted by Samara Ren\nAbstract: As n
 eural-network-based models grow both in size and popularity\, interest has
  grown in making the models smaller and more efficient to train. To that e
 nd\, many methods have been proposed to prune models by reducing their num
 ber of nonzero parameters. Additionally\, parameter-efficient fine-tuning\
 , in which a much smaller number of parameters than the total contained in
  the model is updated during training\, has become very popular\, especial
 ly in the space of Large Language Models. At the same time\, the increasin
 gly routine deployment of machine learning in real-world applications has 
 spurred a drive to make them more trustworthy - in the sense of\, among ot
 her things\, being unbiased\, interpretable\, and editable. In this thesis
 \, we examine the interplay between efficiency and trustworthiness. First\,
  we analyze the effects of model pruning on bias in computer vision models
 \, demonstrating that increased sparsity leads to greater bias\, largely a
 s a function of increased model uncertainty in marginal cases. Based on th
 is observation\, we propose several bias mitigation techniques. Then\, we 
 demonstrate that example-specific model pruning can improve model interpre
 tation methods while improving pruning efficiency to make example-specific
 model pruning feasible in real time. Next\, we investigate the effectiven
 ess of parameter-efficient and data-efficient model personalization via fi
 ne-tuning\, demonstrating that it is highly feasible with very small compu
 tational and data resources. Finally\, we consider efficiency in editing m
 odel knowledge using a custom synthetic data framework\, demonstrating tha
 t parameter-efficient\, low-rank fine-tuning frequently outperforms full-r
 ank fine-tuning\, and\, additionally\, restricting fine-tuning to specific
  model blocks frequently improves results. Together\, the results in this 
 thesis provide new insights and techniques for combining trustworthiness a
 nd efficiency during neural network inference and training.
LOCATION:Office Bldg West / Ground floor / Heinzel Seminar Room (I21.EG.101
 ) and Zoom\, ISTA
SUMMARY:Eugenia Iofinova: Thesis Defense: On the Utility and Effects of Eff
 iciency in Artificial Neural Networks
URL:https://talks-calendar.ista.ac.at/events/6391
END:VEVENT
END:VCALENDAR
