For full conference details: http://llvm.org/devmtg/2017-10/
Wednesday, October 18 • 10:30am - 12:45pm
Student Research Competition

VPlan + RV: A Proposal 
Simon Moll and Sebastian Hack
The future of automatic vectorization in LLVM lies in Intel's VPlan proposal. The current VPlan patches provide the basic scaffolding for outer loop vectorization. However, the advanced analyses and transformations needed to execute VPlans are still missing.
The Region Vectorizer (RV) is an automatic vectorization framework for LLVM. RV provides a unified interface for vectorizing code regions, such as inner and outer loops, up to whole functions. RV's analyses and transformations are designed to create highly efficient SIMD code. These are exactly the analyses and transformations that VPlan needs.
This talk presents a proposal for integrating RV with VPlan.
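
As an illustration of the problem domain (this example is ours, not the authors'): the loop nest below is a typical outer-loop vectorization candidate. The inner trip count differs per outer iteration, so an inner-loop vectorizer gains little, whereas a region or outer-loop vectorizer can run several outer iterations in parallel SIMD lanes with masking.

#include <cstddef>

// Hypothetical outer-loop vectorization candidate: the inner trip count is
// data-dependent, so vectorizing the inner loop is unprofitable, but the
// outer iterations over `r` are independent (assuming the row ranges do not
// overlap) and can be mapped to SIMD lanes.
void scale_rows(float *val, const int *row_begin, const int *row_end,
                float s, std::size_t rows) {
  for (std::size_t r = 0; r < rows; ++r) {            // outer loop: vectorize here
    for (int i = row_begin[r]; i < row_end[r]; ++i)   // divergent inner trip count
      val[i] *= s;
  }
}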

Polyhedral Value & Memory Analysis 
Johannes Doerfert and Sebastian Hack
Polly, the polyhedral analysis and optimization framework of LLVM, is designed and developed as an external project. While attempts have recently been made to make its analysis results available to common LLVM passes, the different pass pipelines and the very design of Polly make this an almost impossible task.
In order to make polyhedral value, memory, and dependence information available to LLVM passes, we propose the Polyhedral Value Analysis (PVA) and the Polyhedral Memory Analysis (PMA). Both are first-class LLVM passes that provide a Scalar-Evolution-like experience with a polyhedral-model backbone. The analyses are demand-driven, caching, flow-sensitive, and variably scoped (i.e., optimistic). In addition, this approach can easily be extended to an inter-procedural setting.
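
As a rough illustration of the kind of facts such an analysis can provide on demand (the loop and the annotations are ours, not the authors'): for an affine loop nest, each value and each memory access can be described as a piecewise-affine function of the loop counters.

// Illustrative only: the comments sketch the polyhedral facts a demand-driven
// value/memory analysis could report for this nest.
//   iteration domain:   { [i, j] : 0 <= i < n and 0 <= j < m }
//   write access to A:  (i, j) -> A[i * m + j]   (each cell written once)
//   read  access to B:  (i, j) -> B[j]           (reused across all i)
void add_row_vector(float *A, const float *B, int n, int m) {
  for (int i = 0; i < n; ++i)
    for (int j = 0; j < m; ++j)
      A[i * m + j] += B[j];
}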

DLVM: A Compiler Framework for Deep Learning DSLs 
Richard Wei, Vikram Adve and Lane Schwartz
Deep learning software demands performance and reliability. However, many current deep learning tools and infrastructures depend heavily on software libraries that act as a dynamic DSL and a computation-graph interpreter. We present DLVM, the design and implementation of a compiler framework that consists of linear algebra operators, automatic differentiation, domain-specific optimizations, and a code generator targeting heterogeneous parallel hardware. DLVM is designed to support the development of neural network DSLs, with both AOT and JIT compilation.
To demonstrate an end-to-end system from a neural network DSL, via DLVM, to parallelized execution, we present NNKit, a typed tagless-final DSL embedded in the Swift programming language that targets DLVM IR. We argue that the DLVM system enables modular, safe, and performant toolkits for deep learning.
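
For readers unfamiliar with the tagless-final technique NNKit builds on, here is a minimal sketch of our own in C++ (NNKit itself is in Swift, and this is not its API): a DSL term is written against an abstract set of operations and can then be instantiated either for direct evaluation or for emitting an IR-like form, which is the role DLVM IR plays for NNKit.

#include <string>

// Direct-evaluation "interpreter" of the DSL operations.
struct Eval {
  using Repr = float;
  static Repr lit(float x) { return x; }
  static Repr add(Repr a, Repr b) { return a + b; }
  static Repr mul(Repr a, Repr b) { return a * b; }
};

// Staging "interpreter": builds a textual IR instead of computing a value.
struct Emit {
  using Repr = std::string;
  static Repr lit(float x) { return std::to_string(x); }
  static Repr add(Repr a, Repr b) { return "(add " + a + " " + b + ")"; }
  static Repr mul(Repr a, Repr b) { return "(mul " + a + " " + b + ")"; }
};

// One DSL program, y = 2*x + 1, reusable with any interpreter.
template <typename Sym>
typename Sym::Repr affine(typename Sym::Repr x) {
  return Sym::add(Sym::mul(Sym::lit(2.0f), x), Sym::lit(1.0f));
}
// affine<Eval>(3.0f) evaluates to 7.0f; affine<Emit>("x") yields an IR string.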

Leveraging LLVM to Optimize Parallel Programs 
William Moses
LLVM is an effective framework for representing and optimizing programs, for end users and researchers alike. When it comes to optimizing or analyzing parallel programs, however, the path forward is far from clear.
As in most compilers, parallel linguistic constructs in Clang/LLVM (such as those provided by OpenMP or Cilk) are treated as syntactic sugar for closures that are passed to a parallel runtime. This prevents traditional analyses and optimizations from interacting with parallel programs. Remedying this situation, however, has generally been thought to require an extensive reworking of compiler analyses and code transformations.
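
To make the "syntactic sugar for closures" point concrete, here is a simplified sketch of ours (not Clang's actual lowering; the runtime entry point is hypothetical): the parallel loop body is outlined into a callback handed to the runtime, so a loop-invariant computation inside it can no longer be hoisted by standard loop optimizations, because from the optimizer's point of view there is no loop left.

#include <cstddef>

// Hypothetical runtime entry point; a real runtime would distribute the
// iterations across worker threads. A serial stand-in keeps the sketch
// self-contained.
static void parallel_runtime_for(std::size_t n,
                                 void (*body)(std::size_t, void *),
                                 void *env) {
  for (std::size_t i = 0; i < n; ++i) body(i, env);
}

struct Env { float *a; float x, y; };

// Outlined loop body: `x / y` is loop-invariant, but once the body is an
// opaque callback the compiler cannot hoist the division out of the "loop".
static void outlined_body(std::size_t i, void *p) {
  Env *e = static_cast<Env *>(p);
  e->a[i] = e->x / e->y + static_cast<float>(i);
}

// What a front end conceptually produces for a parallel-for over `a`.
void fill(float *a, float x, float y, std::size_t n) {
  Env env{a, x, y};
  parallel_runtime_for(n, outlined_body, &env);  // the loop structure is gone
}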
Recently, we introduced Tapir, an extension to a serial compiler IR such as LLVM's that allows the compiler to analyze and optimize parallel tasks with minimal modification. We implemented a prototype compiler on top of LLVM that achieved our original goal of letting serial optimizations work on parallel code, while changing only roughly 6,000 lines of LLVM's 3 million.
This success led us to kick off "project Rhino," in which we developed analyses and optimizations specifically for parallel code. We were able to derive optimizations that exploit parallel semantics. For example, we developed code-motion optimizations for parallel code that are not provably safe for serial code. Unfortunately, in performing these optimizations we found that vanilla Tapir fell short: there exist optimized forms of parallel programs that cannot be represented by Tapir. Happily, we were able to remedy this with a minor modification.
The work was conducted in collaboration with Tao B. Schardl and Charles E. Leiserson as well as Douglas Kogut, Jiahao Li, and Bojan Serafimov.

Exploiting and improving LLVM's data flow analysis using superoptimizer 
Jubi Taneja and John Regehr
This proposal is about increasing the reach of a superoptimizer to find missing optimizations and to make LLVM's data flow analysis more precise. A superoptimizer usually performs optimizations based only on local information, i.e., it operates on a small set of instructions. To extend its knowledge to more distant program points, we build an interaction between a superoptimizer and LLVM's data flow analyses. With the global information derived from the compiler's data flow analyses, the superoptimizer can find more interesting optimizations because it knows much more than just the instruction sequence. Our goal is not limited to exploiting the data flow facts imported from LLVM to help our superoptimizer, Souper. We also improve LLVM's data flow analyses by finding imprecision and making suggestions. Optimizations that involve path conditions are harder to implement in LLVM. To avoid writing fragile optimizations without any additional information, we automatically scan Souper's optimizations for path conditions that map onto data flow facts already known to LLVM and suggest the corresponding optimizations. The interesting optimizations found by Souper have also resulted in patches that improve LLVM's data flow analyses, some of which have already been accepted.
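
As a concrete, hypothetical instance of that interaction: the rewrite below is not justified by the local instruction alone, but becomes valid once a data flow fact (here, a known bit established by the branch) is imported into the superoptimizer query.

#include <cstdint>

// Locally, `x | 1` cannot be simplified. On the taken branch, however, the
// guard implies that bit 0 of x is already 1, which is exactly the kind of
// known-bits / path-condition fact LLVM's data flow analysis can supply, so
// a superoptimizer seeded with that fact can prove `x | 1 == x` here.
uint32_t keep_if_odd(uint32_t x) {
  if (x & 1)
    return x | 1;   // simplifies to `return x;` given the path condition
  return 0;
}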


Speakers

Johannes Doerfert

Researcher/PhD Student, Saarland University

Simon Moll

Researcher/PhD Student, Saarland University

William Moses

PhD Candidate, MIT

Jubi Taneja

PhD Student, University of Utah
I am a PhD student advised by Prof. John Regehr. I am working on exploiting and improving LLVM's data flow analysis using a superoptimizer. I'm interested in finding patterns and generalizations in LLVM optimizations. If you're interested in learning about LLVM's DFA, come and attend...

Richard Wei

University of Illinois at Urbana-Champaign


Wednesday October 18, 2017 10:30am - 12:45pm PDT
1 - General Session (Rm LL20ABCD)