BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:America/Chicago
X-LIC-LOCATION:America/Chicago
BEGIN:DAYLIGHT
TZOFFSETFROM:-0600
TZOFFSETTO:-0500
TZNAME:CDT
DTSTART:19700308T020000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0500
TZOFFSETTO:-0600
TZNAME:CST
DTSTART:19701101T020000
RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20181221T160904Z
LOCATION:C2/3/4 Ballroom
DTSTART;TZID=America/Chicago:20181115T083000
DTEND;TZID=America/Chicago:20181115T170000
UID:submissions.supercomputing.org_SC18_sess324_post172@linklings.com
SUMMARY:MLModelScope: Evaluate and Measure Machine Learning Models within 
 AI Pipelines
DESCRIPTION:Poster\nTech Program Reg Pass, Exhibits Reg Pass\n\nMLModelSco
 pe: Evaluate and Measure Machine Learning Models within AI Pipelines\n\nDa
 kkak, Li, Hwu, Xiong\n\nThe current landscape of Machine Learning (ML) and
  Deep Learning (DL) is rife with non-uniform frameworks, models, and syste
 m stacks but lacks standard tools to facilitate the evaluation and measure
 ment of models. Due to the absence of such tools, the current practice for
  evaluating and comparing the benefits of proposed AI innovations (be it h
 ardware or software) on end-to-end AI pipelines is both arduous and error 
 prone, stifling the adoption of these innovations. We propose MLModelScope,
  a hardware/software agnostic platform to facilitate the evaluation, meas
 urement, and introspection of ML models within AI pipelines. MLModelScope 
 aids application developers in discovering and experimenting with models, 
 data scientists in replicating and evaluating models for publication, and
  system architects in understanding the performance of AI workloads.
URL:https://sc18.supercomputing.org/presentation/?id=post172&sess=sess324
END:VEVENT
END:VCALENDAR