JuliaCon 2025

Reviving OpenCL.jl for CPU glory
2025-07-25, Main Room 5

OpenCL.jl is one of the oldest GPU programming packages in Julia. We recently revived this package, integrating it with the JuliaGPU ecosystem and enabling native compilation of Julia code through SPIR-V. This makes it possible to program modern OpenCL accelerators, including CPUs through the PoCL library. The end result is a high-quality CPU backend for KernelAbstractions.jl that outperforms the existing task-based implementation.


This talk will explore the revival of OpenCL.jl, one of Julia's oldest GPU programming packages. We'll detail how we modernized and integrated it with the JuliaGPU stack by implementing the GPUArrays.jl interfaces, developing a SPIR-V compiler based on GPUCompiler.jl, and adding KernelAbstractions.jl support. These changes enable programming modern OpenCL accelerators using native Julia code, similar to CUDA.jl or AMDGPU.jl.
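To give a flavour of what this enables, here is a minimal sketch of a native Julia kernel launched through OpenCL.jl. The CLArray type and get_global_id intrinsic follow the package's GPUArrays.jl-based design, while the @opencl launch macro and its global_size keyword are assumed to mirror the CUDA.jl-style API rather than being taken verbatim from the talk:

    using OpenCL, pocl_jll   # pocl_jll provides a CPU OpenCL platform

    # A kernel written in plain Julia; OpenCL.jl compiles it to SPIR-V.
    function vadd(a, b, c)
        i = get_global_id(1)              # work-item index (assumed 1-based)
        @inbounds c[i] = a[i] + b[i]
        return
    end

    a = rand(Float32, 1024)
    b = rand(Float32, 1024)
    d_a, d_b = CLArray(a), CLArray(b)     # device arrays built on GPUArrays.jl
    d_c = similar(d_a)

    # One work-item per element; the keyword name is an assumption.
    @opencl global_size=length(a) vadd(d_a, d_b, d_c)
    @assert Array(d_c) ≈ a .+ b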

Building on this foundation, we enhanced support for the Portable Computing Language (PoCL) library as a backend for OpenCL.jl. PoCL's key feature is CPU execution of OpenCL kernels, leveraging multithreading and SIMD instructions for acceleration. This functionality is now readily available through the pocl_jll package and integrates seamlessly with OpenCL.jl.
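For instance, once pocl_jll is loaded, PoCL appears as an ordinary OpenCL platform exposing the host CPU, alongside any vendor drivers. A rough sketch using OpenCL.jl's platform/device queries (the .name property access is an assumption):

    using OpenCL, pocl_jll

    # List the available OpenCL platforms and their devices; with pocl_jll
    # loaded, a PoCL platform exposing the host CPU should appear.
    for platform in cl.platforms()
        println(platform.name)
        for device in cl.devices(platform)
            println("  ", device.name)
        end
    end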

The culmination of this work is a new CPU backend for KernelAbstractions.jl that combines PoCL's CPU capabilities with GPUCompiler.jl's SPIR-V code generation. This backend addresses the limitations of the current Julia task-based implementation, significantly improving both the portability and performance of KernelAbstractions.jl. We'll demonstrate these improvements through practical examples and benchmarks.
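As a usage sketch, an existing KernelAbstractions.jl kernel should run unchanged on the CPU through this backend. The @kernel/@index/ndrange API below is standard KernelAbstractions.jl; the OpenCLBackend name is an assumption for illustration:

    using KernelAbstractions
    using OpenCL, pocl_jll                 # assumed to provide OpenCLBackend for KA

    @kernel function saxpy!(y, a, x)
        i = @index(Global)
        @inbounds y[i] = a * x[i] + y[i]
    end

    backend = OpenCLBackend()              # assumed backend name; CPU execution via PoCL
    x = KernelAbstractions.allocate(backend, Float32, 2^20)
    y = KernelAbstractions.allocate(backend, Float32, 2^20)
    fill!(x, 1f0); fill!(y, 2f0)

    saxpy!(backend)(y, 3f0, x; ndrange=length(y))
    KernelAbstractions.synchronize(backend)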

Tim Besard is a software engineer at JuliaHub, where he leads GPU support and development for the Julia programming language. He holds a Ph.D. in computer science engineering from Ghent University, Belgium, and has been a key contributor to Julia's GPU ecosystem since 2014. Tim maintains several foundational GPU packages including CUDA.jl, GPUArrays.jl, GPUCompiler.jl, and LLVM.jl, which together form the backbone of GPU computing in Julia.