CHIUW 2017
The 4th Annual Chapel Implementers and Users Workshop
Friday June 2, 2017 (mini-conference day)
Saturday June 3, 2017 (code camp half-day)
31st IEEE International Parallel & Distributed Processing Symposium
Buena Vista Palace Hotel, Orlando, Florida, USA
Introduction: CHIUW 2017—the fourth annual Chapel Implementers and Users Workshop, to be held in conjunction with IEEE IPDPS 2017—will continue our annual series of workshops designed to bring developers and users of the Chapel language (chapel-lang.org) together to present and discuss work being done across the broad open-source community. Attendance is open to anyone interested in Chapel, from the most seasoned Chapel user or developer to someone simply curious to learn more.
Registration: Register for CHIUW 2017 via the IPDPS registration site. If you're only attending CHIUW, select a one-day registration. To attend other days at IPDPS, select from the other options as appropriate.
Friday, June 2, 2017 (Mini-Conference Day)
8:30 - 9:00: | Chapel Boot Camp (Optional) [slides] |
Ben Albrecht (Cray Inc.) | |
This is a completely optional session held prior to the official start of the workshop for those who are new to Chapel and looking for a crash-course, or for those who would simply like a refresher. | |
9:00 - 9:30: | Welcome, State of the Project [slides] |
Brad Chamberlain (Cray Inc.) | |
9:30 - 10:00: | Break (catered by IPDPS) |
Session chair: Tom MacDonald |
10:00 - 10:20: | Improving Chapel and Array Memory Management [extended abstract | slides] |
Michael Ferguson, Vassily Litvinov, and Brad Chamberlain (Cray Inc.) | |
10:20 - 10:50: | Identifying Use-After-Free Variables in Fire-and-Forget Tasks [slides] |
Jyothi Krishna V S (IIT Madras) and Vassily Litvinov (Cray Inc.) | |
Abstract: Programmers use the 'begin' construct in Chapel to create fire-and-forget-style tasks, which do not perform any implicit synchronization with the parent task. While this provides a good facility for invoking parallel tasks, it poses issues when the child task accesses a variable declared in the scope of its ancestor. If the parent task exits before the child, the parent's scope is deallocated and the child may end up accessing a memory location that is no longer valid. The child task must synchronize with the parent task to ensure legal access to its variables, for example by means of atomic variables, sync statements, or sync and single synchronization variables. In this work, we address the above issue with a compile-time, partial inter-procedural analysis of outer-variable accesses in begin tasks to identify and report potentially dangerous accesses. We make use of a Concurrent Control Flow Graph to generate all possible run-time Parallel Program States (PPS). All outer-variable accesses that are potentially dangerous in the generated PPS are then reported to the user for rectification. | |
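The hazard the analysis targets can be illustrated with a minimal, hypothetical sketch (not code from the paper; the procedure names are invented): a begin task that accesses an outer variable by reference without synchronization, followed by one way to make the access safe using a sync variable.

    proc spawnUnsafe() {
      var counter = 0;              // lives in spawnUnsafe's stack frame
      begin with (ref counter) {    // fire-and-forget child task
        counter += 1;               // outer-variable access by reference;
                                    // invalid if spawnUnsafe has already returned
      }
      // no synchronization: the procedure may return before the child runs
    }

    proc spawnSafe() {
      var done$: sync bool;         // sync variable for a parent/child handshake
      var counter = 0;
      begin with (ref counter) {
        counter += 1;
        done$.writeEF(true);        // signal completion to the parent
      }
      done$.readFE();               // parent blocks until the child finishes,
                                    // so counter remains valid for the child
    }

In the first procedure, an analysis like the one described above would presumably flag the access to counter as potentially dangerous; in the second, the sync variable ensures the parent's scope outlives the child's access.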
10:50 - 11:10: | Try, Not Halt: An Error Handling Strategy for Chapel [extended abstract | slides] |
Preston Sahabu, Michael Ferguson, Greg Titus, and Kyle Brady (Cray Inc.) | |
11:10 - 11:20: | Break |
Session chair: Elliot Ronaghan |
11:20 - 11:40: | GPGPU support in Chapel with the Radeon Open Compute Platform [extended abstract | slides] |
Michael Chu, Ashwin Aji, Daniel Lowell, and Khaled Hamidouche (AMD) | |
11:40 - 12:00: | An OFI libfabric Communication Layer for the Chapel Runtime [extended abstract | slides] |
Sung-Eun Choi (Cray Inc.) | |
12:00 - 1:30: | Lunch (in ad hoc groups or on your own) |
Session chair: Brad Chamberlain |
1:30 - 2:30: | Chapel's Home in the New Landscape of Scientific Frameworks (and what it can learn from the neighbours) [video | online slides | PDF slides] |
Jonathan Dursi (The Hospital for Sick Children, Toronto) | |
Abstract: The last decade has seen an explosion of interest and diversity in large-scale computing. Many scientists are interested in seizing the opportunities these emerging platforms, compiler frameworks, and languages provide, aiming for bigger simulations, more data, or higher levels of computational productivity in their research than had been feasible before. But with the landscape changing so quickly and so many new tools becoming available, it is hard to know where to focus attention. How is a researcher to choose a language to start a new project in when new possibilities appear so often? There are new languages for scientific computing like Julia, which has linear algebra built in but only modest parallel computing support. There are frameworks like Spark, which is parallel and cluster-enabled from the beginning, but targeted at data analysis rather than simulation. There are old stalwarts like R, which grows tendrils into tools like Spark or ScaLAPACK; and even newer non-number-crunching languages like Rust and Swift are starting to attract scientific computing adherents. Chapel is a language from Cray that has fields over domains as first-class entities, allowing for efficient and productive parallel domain-decomposed computations. Where does Chapel fit in amongst these new languages; when should a researcher use Chapel, when should they use one of the others, and what ideas and techniques are out there that Chapel could usefully adopt? These are the questions I'll address in this talk. I'll give a brief overview of some of the other platforms and languages being adopted for different types of scientific computing, and compare them to Chapel in the context of two quite different scientific problems: high-speed fluid flow and genomic bioinformatics. We'll discuss pros and cons, when to use each, and look at what ideas could be poached by Chapel and its community. |
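To make the abstract's mention of domains concrete for readers new to Chapel, here is a minimal, hypothetical sketch (not taken from the talk; the names and sizes are illustrative) of declaring a block-distributed domain and an array over it, then computing on it in parallel:

    use BlockDist;

    config const n = 1000;

    // A 2D index set, block-distributed across the locales the program runs on
    const Space = {1..n, 1..n};
    const D = Space dmapped Block(boundingBox = Space);

    // An array declared over the distributed domain; its elements live on
    // whichever locale owns the corresponding indices
    var T: [D] real;

    // Data-parallel loop: each iteration runs on the locale that owns (i, j)
    forall (i, j) in D do
      T[i, j] = (i + j): real;

    writeln("total: ", + reduce T);

Loops and reductions written against D look the same whether the domain is local or distributed, which is part of the first-class-domains productivity story the abstract alludes to.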
Bio: Jonathan Dursi has over twenty-five years of experience using large-scale computing to advance science. His personal research has focused on astrophysical fluids with the DOE ASCI ASAP program and on bioinformatics with the Ontario Institute for Cancer Research. He has also worked to support other researchers at Canada's largest HPC center, SciNet, and as Compute Canada's first CTO. He currently works at Toronto's Hospital for Sick Children on the CanDIG project, helping build a platform for national-scale analysis of private, locally controlled genomics data. He is very interested in tools that have the potential to make big scientific computing more productive and powerful, and blogs on these topics at http://www.dursi.ca, where his post "HPC is Dying and MPI is Killing it" gained unexpected notoriety. |
Session chair: Nikhil Padmanabhan |
2:30 - 3:00: | Towards a GraphBLAS Library in Chapel [slides] |
Ariful Azad and Aydin Buluc (LBNL) | |
Abstract: The adoption of a programming language is positively influenced by the breadth of its software libraries. Chapel is a modern and relatively young parallel programming language; consequently, not many domain-specific software libraries have yet been written for Chapel. Graph processing is an important domain with many applications in cyber security, energy, social networking, and health. Implementing graph algorithms in the language of linear algebra brings many advantages, including rapid development, flexibility, high performance, and scalability. The GraphBLAS initiative aims to standardize an interface for linear-algebraic primitives for graph computations. This paper presents initial experiences and findings from implementing a subset of important GraphBLAS operations in Chapel. We analyze the bottlenecks in both shared and distributed memory and provide alternative implementations wherever the default implementation lacks performance or scalability. | |
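To make the linear-algebraic style concrete, here is a small, hypothetical Chapel sketch (not code from the paper; the graph, sizes, and values are toy choices) of the kind of building block GraphBLAS standardizes, a sparse matrix-vector product expressed over a sparse subdomain:

    config const n = 4;

    // Dense parent domain and a sparse subdomain holding the nonzero coordinates
    const Dense = {1..n, 1..n};
    var Nonzeroes: sparse subdomain(Dense);

    // A few edges of a toy directed graph, stored as an adjacency matrix
    Nonzeroes += (1, 2);
    Nonzeroes += (2, 3);
    Nonzeroes += (3, 1);

    var A: [Nonzeroes] real;
    forall ij in Nonzeroes do A[ij] = 1.0;   // unit edge weights

    var x: [1..n] real = 1.0;
    var y: [1..n] real;                      // initialized to 0.0

    // y = A*x over the nonzeroes only: the SpMV-style primitive at the heart
    // of many GraphBLAS operations (a parallel version would need atomics or
    // a reduction on y)
    for (i, j) in Nonzeroes do
      y[i] += A[i, j] * x[j];

    writeln(y);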
3:00 - 3:20: | Sketching Streams with Chapel [extended abstract | slides | code] |
Christopher Taylor (DoD) | |
3:20 - 3:50: | Break (catered by IPDPS) |
Session chair: Vassily Litvinov |
3:50 - 4:20: | Comparative Performance and Optimization of Chapel in Modern Manycore Architectures [slides] |
Engin Kayraklıoğlu, Wo Chang, and Tarek El-Ghazawi (The George Washington University) | |
Abstract: Chapel is an emerging scalable, productive parallel programming language. In this work, we analyze Chapel's performance using the Parallel Research Kernels (PRK) on two different manycore architectures, including a state-of-the-art Intel Knights Landing processor. We discuss implementation techniques in Chapel and their relation to the OpenMP implementations of the PRK. We also suggest and prototype several optimizations in different layers of the software stack, including the Chapel compiler. In our experiments we observed that the base performance of Chapel ranges from 41% to 184% of that of OpenMP. The optimization techniques we discuss show performance improvements ranging from 1.4x to 2x in Chapel. | |
4:20 - 4:40: | Entering the Fray: Chapel's Computer Language Benchmarks Game Entry [extended abstract | slides] |
Brad Chamberlain, Ben Albrecht, Lydia Duncan, Ben Harshbarger, Elliot Ronaghan, Preston Sahabu, Michael Noakes (Cray Inc.), and Laura Delaney (Whitworth University) | |
Session chair: Michael Ferguson |
4:40 - 5:30: | Lightning Talks and Flash Discussions |
This final session featured short (~7 minute) time slots in which community members could sign up on-site or ahead of time to give short talks, lead discussions on current hot topics of interest, or do whatever seemed appropriate to them. In the end, the following talks were given: | |
5:30 - : | Adjourn for Dinner (in ad hoc groups or on your own) |
Saturday, June 3, 2017 (Code Camp Half-Day)
(room: Tangerine 5) |
8:30 - 12:00: | Chapel Code Camp |
The Chapel code camp is an annual chance to work cooperatively on coding problems or discussion topics while we're in one place. This year's code camp activities included:
12:00 - : | Lunch (in ad hoc groups or on your own) |
General Chairs:
- Tom MacDonald, Cray Inc.
- Michael Ferguson, Cray Inc.
Program Committee:
- Brad Chamberlain (chair), Cray Inc.
- Nikhil Padmanabhan (co-chair), Yale University
- Richard Barrett, Sandia National Laboratories
- Mike Chu, AMD
- Mary Hall, University of Utah
- Jeff Hammond, Intel
- Jeff Hollingsworth, University of Maryland
- Cosmin Oancea, University of Copenhagen
- David Richards, Lawrence Livermore National Laboratory
- Michelle Strout, University of Arizona
- Kenjiro Taura, University of Tokyo
Call For Participation (for archival purposes)