
Comparing Syntax for CUDA, OpenCL and HIP

Both CUDA and OpenCL are well-known GPGPU languages. Unfortunately there are various slight differences between them, which are laid out below.

You might have heard of HIP, the language AMD made to support both its modern GPUs (such as the Fiji series) and CUDA devices. CUDA code can be translated to HIP (mostly automatically), and from that moment on your code also runs on AMD's high-end devices.
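To make that concrete, below is a minimal sketch (my own example, not from AMD's documentation) of what translated code looks like: the kernel is what one would write in CUDA, with the index builtins renamed as in the table further down. One caveat: newer HIP releases spell the launch macro `hipLaunchKernelGGL`, where the table below still uses the older name `hipLaunchKernel`.

```cpp
#include <hip/hip_runtime.h>
#include <cstdio>

// The kernel body is plain CUDA-style code; only the
// index builtins are renamed (see the table below).
__global__ void scale(float *a, float factor, int n) {
    int i = hipBlockIdx_x * hipBlockDim_x + hipThreadIdx_x;
    if (i < n) a[i] *= factor;
}

int main() {
    const int n = 1024;
    float host[n];
    for (int i = 0; i < n; ++i) host[i] = 1.0f;

    float *dev;
    hipMalloc((void **)&dev, n * sizeof(float));  // cudaMalloc in CUDA
    hipMemcpy(dev, host, n * sizeof(float), hipMemcpyHostToDevice);

    // CUDA would launch with: scale<<<n / 256, 256>>>(dev, 2.0f, n);
    hipLaunchKernelGGL(scale, dim3(n / 256), dim3(256), 0, 0, dev, 2.0f, n);

    hipMemcpy(host, dev, n * sizeof(float), hipMemcpyDeviceToHost);
    hipFree(dev);
    printf("host[0] = %.1f\n", host[0]);  // expect 2.0
    return 0;
}
```

Apart from the launch macro, the host API is a one-to-one rename of CUDA's: `hipMalloc`, `hipMemcpy` and `hipFree` mirror `cudaMalloc`, `cudaMemcpy` and `cudaFree`.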

To give an overview of how HIP compares to the other APIs, Ben Sander put together a comparison table. Below you'll find the columns for CUDA, OpenCL and HIP, slightly altered to be more complete. The languages HC and C++ AMP can be found in the original.

| Term | CUDA | OpenCL | HIP |
|------|------|--------|-----|
| Device | `int deviceId` | `cl_device_id` | `int deviceId` |
| Queue | `cudaStream_t` | `cl_command_queue` | `hipStream_t` |
| Event | `cudaEvent_t` | `cl_event` | `hipEvent_t` |
| Memory | `void *` | `cl_mem` | `void *` |
| Grid of threads | grid | NDRange | grid |
| Subgroup of threads | block | work-group | block |
| Thread | thread | work-item | thread |
| Scheduled execution | warp | sub-group *(warp, wavefront, etc.)* | warp |
| Thread-index | `threadIdx.x` | `get_local_id(0)` | `hipThreadIdx_x` |
| Block-index | `blockIdx.x` | `get_group_id(0)` | `hipBlockIdx_x` |
| Block-dim | `blockDim.x` | `get_local_size(0)` | `hipBlockDim_x` |
| Grid-dim | `gridDim.x` | `get_num_groups(0)` | `hipGridDim_x` |
| Device Kernel | `__global__` | `__kernel` | `__global__` |
| Device Function | `__device__` | *N/A: implied in device compilation* | `__device__` |
| Host Function | `__host__` *(default)* | *N/A: implied in host compilation* | `__host__` *(default)* |
| Host + Device Function | `__host__ __device__` | *N/A* | `__host__ __device__` |
| Kernel Launch | `<<< >>>` | `clEnqueueNDRangeKernel` | `hipLaunchKernel` |
| Global Memory | `__global__` | `__global` | `__global__` |
| Group Memory | `__shared__` | `__local` | `__shared__` |
| Private Memory | *(default)* | `__private` | *(default)* |
| Constant | `__constant__` | `__constant` | `__constant__` |
| Thread Synchronisation | `__syncthreads()` | `barrier(CLK_LOCAL_MEM_FENCE)` | `__syncthreads()` |
| Atomic Builtins | `atomicAdd` | `atomic_add` | `atomicAdd` |
| Precise Math | `cos(f)` | `cos(f)` | `cos(f)` |
| Fast Math | `__cosf(f)` | `native_cos(f)` | `__cosf(f)` |
| Vector | `float4` | `float4` | `float4` |

You can see that HIP borrowed most of its terminology from CUDA.
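To see several of these rows in context, here is an illustrative CUDA kernel (my own sketch, not from the original post) with comments giving the OpenCL spellings from the table. It sums each block's 256 elements into one output value, assuming `in` holds one element per launched thread:

```cpp
// CUDA                                   // OpenCL equivalent (per the table)
__global__ void sum_block(float *in,      // __kernel void sum_block(__global float *in,
                          float *out) {   //                         __global float *out)
    __shared__ float tile[256];           // __local float tile[256];
    int lid = threadIdx.x;                // int lid = get_local_id(0);
    int gid = blockIdx.x * blockDim.x     // int gid = get_global_id(0);
            + threadIdx.x;
    tile[lid] = in[gid];
    __syncthreads();                      // barrier(CLK_LOCAL_MEM_FENCE);

    if (lid == 0) {
        float s = 0.0f;
        for (int i = 0; i < 256; ++i) s += tile[i];
        out[blockIdx.x] = s;              // out[get_group_id(0)] = s;
    }
}
// In HIP the same kernel keeps __global__, __shared__ and __syncthreads(),
// and only the index builtins change (hipThreadIdx_x, hipBlockIdx_x, hipBlockDim_x).
```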

The discussion, of course, is whether all these similar APIs shouldn't use the same wording. The best approach would be to mix and match: CUDA's "shared" is much clearer than OpenCL's "local", while OpenCL's functions for indices and dimensions (`get_global_id(0)` and the like), on the other hand, are often more appreciated than what CUDA offers. And CUDA's `<<< >>>` breaks every standard C/C++ parser, making it very hard to build a compiler frontend or IDE plugin. An illustration of the index-function point follows below.
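In CUDA the global index has to be assembled by hand from three builtins, whereas OpenCL hands it over in a single call. A hypothetical device helper (not part of any API, just a sketch) that mimics OpenCL's style:

```cpp
// Hypothetical helper: CUDA has no built-in get_global_id(0),
// so the global index is assembled from three builtins.
__device__ int global_id_x() {
    return blockIdx.x * blockDim.x + threadIdx.x;  // == get_global_id(0) in OpenCL
}
```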

I hope you found the above useful for better understanding the differences between CUDA and OpenCL, and for seeing how HIP comes into the picture.