Julia Computing Brings Support for NVIDIA GPU Computing on Arm Powered Servers

3 December 2019 | Julia Computing, NVIDIA

Julia and GPUs

GPUs are increasingly the computing device of choice for many scientific computing and machine learning workflows. As these workloads shift, so too must the software used to program them. Native GPU computing has been available in the Julia programming language for many years, but with the release of Julia 1.0 last year it has finally reached stability, paving the way for widespread use. Unlike many other programming languages, Julia exposes not only high-level access to GPU-accelerated array primitives (such as matrix multiplication, Fourier transforms, or convolutions), but also lets developers write custom GPU kernels, taking advantage of the full power and flexibility of the underlying hardware without switching languages. This also makes it easy to reuse and move code from CPU-based applications to GPUs, lowering the barrier to entry and accelerating the time to solution.
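
To make this concrete, here is a minimal sketch of both levels of GPU programming in Julia. It is written against the CUDA.jl package (at the time this post was published, the same functionality was spread across CuArrays.jl and CUDAnative.jl), and the array sizes and kernel are purely illustrative:

```julia
# Minimal sketch of Julia's two levels of GPU programming, using CUDA.jl.
using CUDA

# High level: GPU-accelerated array primitives
A = CUDA.rand(Float32, 1024, 1024)
B = CUDA.rand(Float32, 1024, 1024)
C = A * B                      # matrix multiply runs on the GPU via CUBLAS

# Low level: a custom GPU kernel written in plain Julia
function axpy_kernel!(y, a, x)
    i = (blockIdx().x - 1) * blockDim().x + threadIdx().x
    if i <= length(y)
        @inbounds y[i] += a * x[i]
    end
    return nothing
end

x = CUDA.fill(1.0f0, 4096)
y = CUDA.fill(2.0f0, 4096)
@cuda threads=256 blocks=cld(length(y), 256) axpy_kernel!(y, 3.0f0, x)
```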

One system, many applications

Like Julia itself, Julia’s support for NVIDIA GPUs serves an impressive variety of applications, from machine learning to climate modeling. Modern machine learning would be unimaginable without the computational power of GPUs, and users of the Flux.jl machine learning library for Julia can take advantage of NVIDIA GPUs with a one-line change, without any further code modification. In addition, Julia’s differentiable programming support is fully GPU-compatible, providing GPU acceleration for models at the cutting edge of machine learning research and scaling from a single GPU in a laptop to thousands of GPUs on the world’s largest supercomputers.
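
As a hedged illustration of that one-line change (the model architecture and dummy data below are assumptions made for the example, not taken from the original post):

```julia
# Minimal sketch of moving a Flux.jl model to the GPU.
using Flux

model = Chain(Dense(784, 128, relu), Dense(128, 10))   # a small multilayer perceptron
x = rand(Float32, 784, 32)                              # a dummy batch of 32 inputs

gpu_model = gpu(model)       # the one-line change: move the parameters to the GPU
y = gpu_model(gpu(x))        # the forward pass now runs on the GPU
```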

Of course, the use of Julia on NVIDIA GPUs extends well beyond machine learning. Pumas AI uses Julia’s GPU support to compute personalized drug dosing regimens with the DifferentialEquations.jl suite of solvers, probably the most comprehensive collection of differential equation solvers in any language. Since GPUs are a native compilation target for Julia, running these solvers on GPUs requires minimal modification.
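
As a rough sketch of what that looks like (a toy problem, not the Pumas AI model), solving an ODE on the GPU can be as simple as storing the state in a GPU array:

```julia
# Minimal sketch: the solver dispatches its array operations to the GPU
# because the initial condition is a CuArray.
using DifferentialEquations, CUDA

f(u, p, t) = -0.5f0 .* u                  # broadcastable right-hand side
u0 = CUDA.fill(1.0f0, 10_000)             # state lives on the GPU
prob = ODEProblem(f, u0, (0.0f0, 1.0f0))
sol = solve(prob, Tsit5())
```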

The same story played out in a port of a massively parallel multi-GPU solver for spontaneous nonlinear multi-physics flow localization in 3D, carried out by Stanford University and the Swiss National Supercomputing Center and presented at JuliaCon 2019 earlier this year. In this instance, Julia replaced a legacy system written in MATLAB and CUDA C, solving the “two language problem” by expressing both the high-level code and the GPU kernels in the same language within a single code base.

[Image: NVIDIA Arm Server Reference Design Platform]

Additionally, Julia was selected by the Climate Modeling Alliance as the sole implementation language for its next-generation global climate model. This multi-million-dollar project aims to build an earth-scale climate model that provides insight into the effects and challenges of climate change. For such a massive task, both productivity and first-class performance are non-negotiable requirements for the implementation language, and after extensive evaluation the CliMA project leaders selected Julia as the only system capable of delivering both.

Julia NGC container now available

To further promote the use of Julia on NVIDIA GPUs, Julia Computing is excited to announce the availability of the Julia programming language as a pre-packaged container on the NVIDIA GPU Cloud (NGC) container registry, making it easy to rapidly deploy Julia-based GPU-accelerated applications. NGC offers a comprehensive catalog of GPU-accelerated software for deep learning, machine learning, and HPC. Because these containers come pre-built and tuned for NVIDIA hardware, users can focus on their models and applications rather than on installing and configuring the underlying software stack.

Additionally, Julia Computing is pleased to announce that Julia works out of the box with the early access preview of the CUDA stack for Arm server systems, allowing Julia users to take advantage of NVIDIA GPU acceleration independent of the underlying CPU architecture.

In June 2019 at the International Supercomputing Conference (ISC), NVIDIA announced its intent to deliver a complete CUDA software stack for Arm, and at this year’s North American supercomputing conference (SC19) it delivered on that promise, jumpstarting the HPC developer tools ecosystem. Julia support for this software stack will be available with NVIDIA’s formal release of CUDA support for the Arm platform. Julia users targeting Arm should be able to run their existing applications without modification. The Arm platform is rapidly evolving into a full-scale HPC platform as building blocks from storage to networking to GPUs come online.
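
As a hedged illustration (assuming the CUDA.jl package and a CUDA-capable GPU), the same Julia code that runs on an x86 server is expected to run unchanged on an Arm server:

```julia
# Minimal sketch: GPU code in Julia does not depend on the host CPU architecture.
using CUDA

@show Sys.ARCH               # e.g. :aarch64 on an Arm server, :x86_64 elsewhere
@show CUDA.functional()      # true when a CUDA-capable GPU and driver are present
a = CUDA.ones(Float32, 1_000)
@show sum(a)                 # array reductions run on the GPU regardless of CPU arch
```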

Conclusion

Julia’s native support for NVIDIA GPUs is one of the easiest ways to get started with GPU programming. With the addition of Arm server platform support in NVIDIA CUDA, developers now have a broader choice of deployment platforms. Learn more by reading the related Julia documentation, and get started by trying the Julia NGC container today. We’re looking forward to seeing what you’ll build.
