Monday, February 24, 2020
|Speaker: Josep Torrellas (Saburo Muroga Professor of Computer Science, Univ. of Illinois, Urbana-Champaign)|
|Title: Interdisciplinary Research at a Time of Pervasive Changes|
|Abstract: Research in computing systems and, in particular, computer architecture continues to evolve in interesting directions. Industry faces seismic changes, and the research community is becoming broader in both interests and size. At the same time, the volume of research funding has remained constant. In this environment, interdisciplinary research is the new game: it is the most impactful and cost-effective. Areas beyond performance and energy efficiency, such as programmability, security, and usability, are likely to provide the highest potential for us researchers to impact the field.
In this talk, I will briefly outline these trends as I see them, and then describe, at a high level, a few examples of interdisciplinary research efforts. They include programmable non-volatile memories, rethinking secure hardware, and adapting to operating system and virtualization changes. Each of them requires expertise in different areas to make progress.|
|Bio: Josep Torrellas is the Saburo Muroga Professor of Computer Science at the University of Illinois at Urbana-Champaign (UIUC). He leads the Center for Programmable Extreme-Scale Computing, which focuses on architectures for extreme energy efficiency, and co-leads the Illinois Intel Strategic Research Alliance Center on Computer Security. He has made contributions to parallel computer architecture in the areas of shared-memory multiprocessor organizations, cache hierarchies and coherence protocols, and thread-level speculation. He is a Fellow of IEEE, ACM, and AAAS. He received an IEEE CS Technical Achievement Award and a UIUC Campus Award for Excellence in Graduate Student Mentoring. He serves on the International Roadmap for Devices and Systems and the U.S. Board on Army Research and Development, and has served on the Board of Directors of CRA and the Council of the Computing Community Consortium (CCC).|
Tuesday, February 25, 2020
|Speaker: Michael Garland (NVIDIA Research)|
|Title: Scaling Parallel Programming Beyond Threads|
|Abstract: Parallel hardware is ubiquitous, as are the essential components of the hardware/software interface necessary to leverage such systems. Low-level mechanisms such as threads, atomic memory operations, and synchronization constructs are available in many mainstream languages, either directly or via libraries such as MPI. These foundations are necessary for building effective parallel programming systems, but they do not directly address the needs of programmers whose domain of expertise lies outside parallel programming. Mainstream programming languages are beginning to provide higher-level support for parallelism, such as the parallel algorithm extensions introduced in C++17. However, the needs of many programmers remain unmet. This talk will examine ongoing hardware trends, explore design directions for parallel programming systems that can scale to meet the needs of a broad range of users, and explain some of our recent work to build high-performance, scalable platforms for data science.|
|Bio: Michael Garland is the Senior Director of Programming Systems and Applications research at NVIDIA. He completed his Ph.D. at Carnegie Mellon University, and was previously on the faculty of the Department of Computer Science of the University of Illinois at Urbana-Champaign. He joined NVIDIA in 2006 as one of the first members of NVIDIA Research, and has been working to develop effective parallel programming systems ever since. His research goal is to develop tools and techniques that will equip programmers to realize the full potential of modern, massively parallel computing systems.|
Wednesday, February 26, 2020
|Speaker: Chris Lattner (SiFive) and Tatiana Shpeisman (Google)|
|Title: MLIR Compiler Infrastructure|
|Abstract: This talk will give an overview of MLIR – the “Multi-Level Intermediate Representation” compiler infrastructure, a new addition to the LLVM family of compiler technologies. MLIR provides a unified, flexible, and extensible intermediate representation that is application-agnostic and is being quickly adopted for many purposes. MLIR’s design provides significant representational flexibility and great “in the box” tooling, which makes it easy and fast to implement a wide range of compilers and other tools that benefit from representing and transforming structured data in nested hierarchical, dataflow graph, and control flow graph forms.
This talk frames the problem addressed by MLIR, and discusses its general design and some of the rapidly growing infrastructure it provides. Because a common task is to move existing compilers and systems to MLIR, we discuss what that process looks like, using LLVM IR as a (hypothetical) example. We then discuss the benefits and opportunities that such a move would provide if it were actually completed.|