BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:America/Chicago
X-LIC-LOCATION:America/Chicago
BEGIN:DAYLIGHT
TZOFFSETFROM:-0600
TZOFFSETTO:-0500
TZNAME:CDT
DTSTART:19700308T020000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0500
TZOFFSETTO:-0600
TZNAME:CST
DTSTART:19701101T020000
RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20181221T160731Z
LOCATION:D168
DTSTART;TZID=America/Chicago:20181116T083000
DTEND;TZID=America/Chicago:20181116T083100
UID:submissions.supercomputing.org_SC18_sess144_wksp108@linklings.com
SUMMARY:Introduction - PAW-ATM: Parallel Applications Workshop - Alternati
 ves to MPI
DESCRIPTION:Workshop\nParallel Programming Languages, Libraries, and Model
 s, Productivity, Workshop Reg Pass\n\nIntroduction - PAW-ATM: Parallel App
 lications Workshop - Alternatives to MPI\n\nMorris, Chamberlain, Filippone
 , Iancu\n\nThe increasing complexity in heterogeneous and hierarchical par
 allel architectures and technologies has put a stronger emphasis on the ne
 ed for more effective parallel programming techniques. Traditional low-lev
 el approaches place a greater burden on application developers who must us
 e a mix of distinct programming models (MPI, CUDA, OpenMP, etc.) in order 
 to fully exploit the performance of a particular machine. The lack of a un
 ifying parallel programming model that can fully leverage all the availabl
 e hardware technologies affects not only the portability and scalability o
 f applications but also the overall productivity of software developers an
 d the maintenance costs of HPC applications. In contrast, high-level paral
 lel programming models have been developed to abstract implementation deta
 ils away from the programmer, delegating them to the compiler, runtime sys
 tem, and OS. Such alternatives to traditional MPI+X programming include pa
 rallel programming languages (Chapel, Fortran, UPC, Julia), systems for la
 rge-scale data processing and analytics (Spark, Tensorflow, Dask), and fra
 meworks and libraries that extend existing languages (Charm++, Unified Par
 allel C++ (UPC++), Coarray C++, HPX, Legion, Global Arrays). While there
  are tremendous differences between these approaches, all strive to support
  better programmer abstractions for concerns such as data parallelism, tas
 k parallelism, dynamic load balancing, and data placement across the memor
 y hierarchy.\n\nThis workshop will bring together applications experts wh
 o will present concrete practical examples of using such alternatives to M
 PI in order to illustrate the benefits of high-level approaches to scalabl
 e programming.
URL:https://sc18.supercomputing.org/presentation/?id=wksp108&sess=sess144
END:VEVENT
END:VCALENDAR