BEGIN:VCALENDAR
VERSION:2.0
PRODID:icalendar-ruby
CALSCALE:GREGORIAN
METHOD:PUBLISH
BEGIN:VTIMEZONE
TZID:Europe/Vienna
BEGIN:DAYLIGHT
DTSTART:20230326T020000
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=3
TZNAME:CEST
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:20231029T030000
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=10
TZNAME:CET
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20260425T051621Z
UID:1687167000@ist.ac.at
DTSTART;TZID=Europe/Vienna:20230619T113000
DTEND;TZID=Europe/Vienna:20230619T123000
DESCRIPTION:Speaker: Steven Skiena\nhosted by Herbert Edelsbrunner\n\nAb
 stract: Distributed word embeddings (e.g. word2vec) provide a powerful w
 ay to reduce large text corpora to concise features (vectors) readily ap
 plicable to a variety of problems in NLP and data science. I will intro
 duce word embeddings\, and apply them in a variety of new and interesti
 ng directions\, including:\n\n(1) Detecting Historical Shifts in Word M
 eaning -- Words like "gay" and "mouse" have substantially shifted their
  meanings over time in response to societal and technological changes.
  We use word embeddings trained over texts drawn from different time pe
 riods to detect changes in word meanings. This is part of our efforts i
 n historical trends analysis.\n\n(2) Feature Extraction from Graphs --
  We present DeepWalk\, our approach for learning latent representations
  of vertices in a network\, which has become extremely popular. DeepWal
 k uses local information from truncated random walks to learn embedding
 s\, by treating walks as the equivalent of sentences in a language. It
  is suitable for a broad class of applications such as network classifi
 cation and anomaly detection. We also introduce new graph embedding tec
 hniques based on random projections\, which produce DeepWalk-quality em
 beddings thousands of times faster than previous algorithms.\n\n(3) Pro
 cesses for Language and Knowledge Creation -- Can we uncover principles
  suggesting how vocabularies and other cultural concepts evolve\, by st
 udying the structure of their embedding spaces? We show that generative
  processes like preferential placement create point sets with propertie
 s suggestive of word embeddings.\n\n================\n\nBiography: Stev
 en Skiena is Distinguished Teaching Professor of Computer Science and D
 irector of the Institute for AI-Driven Discovery and Innovation at Ston
 y Brook University. His research interests include data science\, bioin
 formatics\, and algorithms. He is the author of six books\, including "
 The Algorithm Design Manual"\, "The Data Science Design Manual"\, and "
 Who's Bigger: Where Historical Figures Really Rank"\, and over 150 tech
 nical papers. Skiena received his B.S. in Computer Science from the Uni
 versity of Virginia and his Ph.D. in Computer Science from the Universi
 ty of Illinois under Herbert Edelsbrunner in 1988. He is a Fellow of th
 e American Association for the Advancement of Science (AAAS)\, a curren
 t and former Fulbright scholar\, and recipient of the University of Vir
 ginia Engineering Distinguished Alumni Award (WahooWa!)\, the ONR Young
  Investigator Award\, and the IEEE Computer Science and Engineering Tea
 ching Award. More info is available at http://www.cs.stonybrook.edu/~sk
 iena/.
LOCATION:Raiffeisen Lecture Hall\, ISTA
ORGANIZER:mailto:arinya.eller@ist.ac.at
SUMMARY:Steven Skiena: Word and graph embeddings for machine learning
URL:https://talks-calendar.ista.ac.at/events/4167
END:VEVENT
END:VCALENDAR
