AMD CUDA Compiler/Compatibility Layer Announced with the Boltzmann Initiative

By Jeff Williams

AMD is very much a proponent of OpenCL as its compute language of choice, one that's versatile, if a bit less elegant in both the development environment and the language itself compared to its competitor's proprietary solution. Now AMD has created a CUDA compatibility layer that fits within its new compiler, the Heterogeneous Compute Compiler (HCC).

AMD's HIP tool can help simplify the process of porting CUDA code to portable C++.

AMD is certainly aware of the advantages that CUDA currently holds in the high-performance compute world as a very versatile compute language. So, within their new compiler, the HCC, they've added the Heterogeneous-Compute Interface for Portability, or HIP, a tool that can port CUDA runtime APIs directly into C++. Because the target is C++, which is much easier and more accessible for most programmers than OpenCL or CUDA, this should open up parallel programming to a broader audience of developers and increase code portability across GPU platforms from different vendors.

"AMD testing shows that in many cases 90 percent or more of CUDA code can be automatically converted into C++ by HIP with the final 10 percent converted manually in the widely popular C++ language."
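To illustrate what that conversion looks like in practice, here is a hypothetical sketch (not taken from AMD's materials) of a ported vector-add program. The CUDA source is nearly identical; calls like `cudaMalloc` and `cudaMemcpy` become `hipMalloc` and `hipMemcpy` with matching signatures, and the `<<<grid, block>>>` launch syntax is replaced by HIP's `hipLaunchKernelGGL` macro:

```cpp
#include <hip/hip_runtime.h>

// Element-wise add kernel: the kernel body is identical in CUDA and HIP.
__global__ void vecAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1024;
    float *a, *b, *c;
    // cudaMalloc(&a, ...) maps directly to hipMalloc(&a, ...).
    hipMalloc(&a, n * sizeof(float));
    hipMalloc(&b, n * sizeof(float));
    hipMalloc(&c, n * sizeof(float));
    // (Host-to-device copies via hipMemcpy would go here in a full program.)
    // The CUDA launch vecAdd<<<n/256, 256>>>(a, b, c, n) becomes:
    hipLaunchKernelGGL(vecAdd, dim3(n / 256), dim3(256), 0, 0, a, b, c, n);
    hipDeviceSynchronize();
    hipFree(a); hipFree(b); hipFree(c);
    return 0;
}
```

The mechanical nature of this mapping is why so much of a typical CUDA codebase can be converted automatically, with hand work reserved for vendor-specific corner cases.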

This is an effort on AMD's part to help capture a very large part of the HPC market, which tends towards using CUDA. Though the cross-compilation of code will not always be perfect, it can provide a much easier starting point for developers who know CUDA but aren't familiar with OpenCL, yet want to try their hand at programming AMD hardware.

It's important to note that AMD GPUs won't be running fully compiled CUDA code, and this doesn't allow that capability in the slightest. It's a compatibility layer that lets developers write code in the way they're used to, to help with the transition to AMD GPUs. AMD also doesn't have the necessary licenses to use CUDA outright, even though NVIDIA opened up its technologies to third-party licensing in 2013.

From what we've been told, AMD has certainly looked extensively into the issue, though as of right now no license terms are being pursued. Google is the example they're following: Google has taken its own initiative in porting CUDA to LLVM with an internal compiler, GPUCC, and has succeeded in producing code that also happens to outperform NVIDIA's own NVCC-compiled code. They've done all this without a license from NVIDIA, in an effort to create an open-source CUDA compiler.

What does this mean for the future of compute? It means that CUDA may no longer have the monopoly it once had. Easy porting and heterogeneous compilation with AMD's new compiler mean that CUDA code is no longer the gated-off community it once was; it's now far more open and accessible. You'd even be able to translate it, with the HCC, into something capable of running on CPUs, though whether that's advisable depends on the level of parallelism you need. But the fact remains that with efforts from Google and now AMD, that walled garden might not be so walled after all.
