LLVM 10 bolsters Wasm, C/C++, and TensorFlow

Posted on 24-03-2020 by admin

LLVM 10, an upgrade of the open source compiler framework behind many language runtimes and toolchains, is available today after several delays.

The biggest addition to LLVM 10 is support for MLIR, an intermediate representation that lowers to LLVM’s own IR and is used by projects like TensorFlow to efficiently represent how data and instructions are handled. Accelerating TensorFlow by targeting LLVM IR directly is clumsy; MLIR provides more useful abstractions for such projects.

The MLIR project has already borne fruit—not only in projects like TensorFlow, but also in projects like Google’s IREE, a way to use the Vulkan graphics framework to accelerate machine learning on GPUs.

Another key addition to LLVM 10 is broader support for WebAssembly, or Wasm. LLVM has supported Wasm as a compilation target for some time now, allowing code written in any LLVM-friendly language to be compiled and run directly in a web browser. The additions for Wasm support include thread-local storage and improved SIMD support. C/C++ code compiled to Wasm using Clang (which uses LLVM) will now use the wasm-opt utility, if present, to reduce the size of the generated code.

Since LLVM is the back end for the Clang C/C++ compiler project, many LLVM 10 features enhance support for those languages. A number of C++20 features, like concepts, have landed in LLVM 10, although the full standard isn’t quite supported yet.

Clang has also bulked up on support for OpenMP 5.0 features, such as range-based loops and unified shared memory for Parallel Thread Execution (PTX) in Nvidia’s CUDA. Developers can thus let LLVM generate code that exploits these features instead of hand-rolling them in assembly.

Nearly every LLVM release broadens the variety and depth of LLVM’s processor support. Among the big winners in LLVM 10 is IBM hardware, with z15 processor support added to the mix and existing support for Power processors enhanced. Power CPUs can now make use of the IBM MASS library for vectorized operations, a project akin to Intel’s Math Kernel Library.