Migrating from WebGL to WebGPU

Written by dmitrii | Published 2023/12/20
Tech Story Tags: game-development | webgl | webgpu-vs-webgl | webgpu-timeline | javascript-apis | graphics-rendering | hackernoon-top-story

TL;DR: This guide elucidates the transition from WebGL to WebGPU, covering key differences, high-level concepts, and practical tips. As WebGPU emerges as the future of web graphics, this article offers valuable insights for software engineers and project managers alike.

Moving to the upcoming WebGPU means more than just switching graphics APIs. It's also a step towards the future of web graphics. But this migration will go more smoothly with preparation and understanding, and this article will get you ready.

Hello everyone, my name is Dmitrii Ivashchenko and I'm a software engineer at MY.GAMES. In this article, we'll discuss the differences between WebGL and the upcoming WebGPU, and we'll lay out how to prepare your project for migration.

Content Overview

  1. Timeline of WebGL and WebGPU

  2. The current state of WebGPU, and what's to come

  3. High-level Conceptual Differences

  4. Initialization

    • WebGL: The Context Model

    • WebGPU: The Device Model

  5. Programs and Pipelines

    • WebGL: Program

    • WebGPU: Pipeline

  6. Uniforms

    • Uniforms in WebGL 1

    • Uniforms in WebGL 2

    • Uniforms in WebGPU

  7. Shaders

    • Shader Language: GLSL vs WGSL

    • Comparison of Data Types

    • Structures

    • Function Declarations

    • Built-in functions

    • Shader Conversion

  8. Convention Differences

    • Textures

    • Viewport Space

    • Clip Spaces

  9. WebGPU Tips & Tricks

    • Minimize the number of pipelines you use

    • Create pipelines in advance

    • Use RenderBundles

  10. Summary

Timeline of WebGL and WebGPU

WebGL, like many other web technologies, has roots that stretch back quite far into the past. To understand the dynamics and motivation behind the move towards WebGPU, it's helpful to first take a quick look at the history of WebGL development:

  • OpenGL (1992): The desktop version of OpenGL debuts.
  • WebGL 1.0 (2011): This was the first stable release of WebGL, based on OpenGL ES 2.0, which was itself introduced in 2007. It provided web developers with the ability to use 3D graphics directly in browsers, without the need for additional plugins.
  • WebGL 2.0 (2017): Introduced six years after the first version, WebGL 2.0 was based on OpenGL ES 3.0 (2012). This version brought with it a number of improvements and new capabilities, making 3D graphics on the web even more powerful.

In recent years, there has been a surge of interest in new graphics APIs that provide developers with more control and flexibility:

  • Vulkan (2016): Created by the Khronos Group, this cross-platform API is the "successor" to OpenGL. Vulkan provides lower-level access to GPU resources, enabling high-performance applications with finer control over the hardware.
  • D3D12 (2015): This API was created by Microsoft and is exclusively for Windows and Xbox. D3D12 is the successor to D3D10/11 and provides developers with deeper control over graphics resources.
  • Metal (2014): Created by Apple, Metal is an exclusive API for Apple devices. It was designed with maximum performance on Apple hardware in mind.

The current state of WebGPU, and what's to come

Today, WebGPU is available on multiple platforms, including Windows, macOS, and ChromeOS, through the Google Chrome and Microsoft Edge browsers, starting with version 113. Support for Linux and Android is expected in the near future.

Here are some of the engines that already support (or offer experimental support) for WebGPU:

  • Babylon JS: Full support for WebGPU.
  • ThreeJS: Experimental support at the moment.
  • PlayCanvas: In development, but with very promising prospects.
  • Unity: Very early and experimental WebGPU support was announced in version 2023.2 alpha.
  • Cocos Creator 3.6.2: Officially supports WebGPU, making it one of the pioneers in this area.
  • Construct: Currently supported in v113+ for Windows, macOS, and ChromeOS only.

Considering all this, transitioning to WebGPU, or at least preparing projects for such a transition, seems like a timely step.

High-level Conceptual Differences

Let's zoom out and take a look at some of the high-level conceptual differences between WebGL and WebGPU, starting with initialization.

Initialization

When starting to work with graphics APIs, one of the first steps is to initialize the main object for interaction. This process differs between WebGL and WebGPU, with some peculiarities for both systems.

WebGL: The Context Model

In WebGL, this object is known as the "context", and it essentially represents an interface for drawing on an HTML5 canvas element. Obtaining this context is quite simple:

const gl = canvas.getContext('webgl');

A WebGL context is tied to a specific canvas. This means that if you need to render to multiple canvases, you will need multiple contexts.
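For example, rendering to two canvases requires two fully independent contexts (a minimal sketch; canvasA and canvasB are hypothetical canvas elements):

const glA = canvasA.getContext('webgl');
const glB = canvasB.getContext('webgl');

// Resources are not shared between contexts: a texture created
// with glA cannot be used for rendering with glB.
const texture = glA.createTexture();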

WebGPU: The Device Model

WebGPU introduces a new concept called "device". This device represents a GPU abstraction that you will interact with. The initialization process is a bit more complex than in WebGL, but it provides more flexibility:

const adapter = await navigator.gpu.requestAdapter();
const device = await adapter.requestDevice();

const context = canvas.getContext('webgpu');
context.configure({
   device,
   format: 'bgra8unorm',
});

One of the advantages of this model is that a single device can render to multiple canvases, or even to none at all. This provides additional flexibility; for example, one device may control rendering in multiple windows or contexts.
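As a minimal sketch (assuming two hypothetical canvas elements, canvasA and canvasB), a single device can drive both:

const adapter = await navigator.gpu.requestAdapter();
const device = await adapter.requestDevice();
const format = navigator.gpu.getPreferredCanvasFormat();

// One device, two canvases: both contexts share the same device,
// so buffers, textures, and pipelines can be reused across them.
for (const canvas of [canvasA, canvasB]) {
  canvas.getContext('webgpu').configure({ device, format });
}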

Programs and Pipelines

WebGL and WebGPU represent different approaches to managing and organizing the graphics pipeline.

WebGL: Program

In WebGL, the main focus is on the shader program. The program combines vertex and fragment shaders, defining how vertices should be transformed and how each pixel should be colored.

const program = gl.createProgram();
gl.attachShader(program, vertShader);
gl.attachShader(program, fragShader);
gl.bindAttribLocation(program, 0, 'position');
gl.linkProgram(program);

Steps for creating a program in WebGL:

  1. Creating Shaders: The source code for shaders is written and compiled.
  2. Creating Program: Compiled shaders are attached to the program and then linked.
  3. Using Program: The program is activated before rendering.
  4. Data Transmission: Data is transmitted to the activated program.

This process allows for flexible graphics control, but can also be complex and prone to errors, especially for large and complex projects.
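For reference, here's a hedged sketch of those steps with basic error checking (vertSource and fragSource are assumed to hold GLSL source code):

function createShader(gl, type, source) {
  const shader = gl.createShader(type);
  gl.shaderSource(shader, source);
  gl.compileShader(shader);
  if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
    throw new Error(gl.getShaderInfoLog(shader));
  }
  return shader;
}

// Steps 1-2: compile the shaders, then link them into a program.
const vertShader = createShader(gl, gl.VERTEX_SHADER, vertSource);
const fragShader = createShader(gl, gl.FRAGMENT_SHADER, fragSource);
const program = gl.createProgram();
gl.attachShader(program, vertShader);
gl.attachShader(program, fragShader);
gl.linkProgram(program);
if (!gl.getProgramParameter(program, gl.LINK_STATUS)) {
  throw new Error(gl.getProgramInfoLog(program));
}

// Step 3: activate the program before rendering.
gl.useProgram(program);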

WebGPU: Pipeline

WebGPU introduces the concept of a "pipeline" instead of a separate program. The pipeline combines not only the shaders, but also other information that, in WebGL, is established as separate state. Creating a pipeline in WebGPU therefore looks more involved:

const pipeline = device.createRenderPipeline({
  layout: 'auto',
  vertex: {
    module: shaderModule, entryPoint: 'vertexMain',
    buffers: [{
      arrayStride: 12,
      attributes: [{
        shaderLocation: 0, offset: 0, format: 'float32x3'
      }]
    }],
  },
  fragment: {
    module: shaderModule, entryPoint: 'fragmentMain',
    targets: [{ format }],
  },
});

Steps to create a pipeline in WebGPU:

  1. Shader definition: The shader source code is written and compiled, similar to how it's done in WebGL.
  2. Pipeline creation: Shaders and other rendering parameters are combined into a pipeline.
  3. Pipeline usage: The pipeline is activated before rendering.

While WebGL separates each aspect of rendering, WebGPU tries to encapsulate more aspects into a single object, making the system more modular and flexible. Instead of separately managing shaders and rendering states, as is done in WebGL, WebGPU combines everything into one pipeline object. This makes the process more predictable and less prone to errors.

Uniforms

Uniform variables provide constant data that is available to all shader invocations.

Uniforms in WebGL 1

In basic WebGL, we have the ability to set uniform variables directly through API calls.

GLSL:

uniform vec3 u_LightPos;
uniform vec3 u_LightDir;
uniform vec3 u_LightColor;

JavaScript:

const location = gl.getUniformLocation(program, "u_LightPos");
gl.uniform3fv(location, [100, 300, 500]);

This method is simple, but it requires a separate API call for each uniform variable.

Uniforms in WebGL 2

With the arrival of WebGL 2, we gained the ability to group uniform variables into buffers. Although you can still use separate uniform variables, a better option is to group different uniforms into a larger structure using uniform buffers. You then send all this uniform data to the GPU at once, similar to how you can load a vertex buffer in WebGL 1. This has several performance advantages, such as reducing API calls, and it is closer to how modern GPUs work.

GLSL:

layout(std140) uniform ub_Params {
   vec4 u_LightPos;
   vec4 u_LightDir;
   vec4 u_LightColor;
};

JavaScript:

gl.bindBufferBase(gl.UNIFORM_BUFFER, 1, gl.createBuffer());

To bind subsets of a large uniform buffer in WebGL 2, you can use a special API call known as bindBufferRange. In WebGPU, there is something similar called dynamic uniform buffer offsets where you can pass a list of offsets when calling the setBindGroup API.
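Putting this together, here's a hedged sketch of creating, filling, and binding a uniform buffer in WebGL 2 (the binding point and values are illustrative):

// Upload all uniform data to the GPU in a single call.
const ubo = gl.createBuffer();
gl.bindBuffer(gl.UNIFORM_BUFFER, ubo);
gl.bufferData(gl.UNIFORM_BUFFER, new Float32Array([
  100, 300, 500, 0, // u_LightPos (w unused)
  0, -1, 0, 0,      // u_LightDir
  1, 1, 1, 1,       // u_LightColor
]), gl.STATIC_DRAW);

// Connect the shader's uniform block to binding point 1.
const blockIndex = gl.getUniformBlockIndex(program, 'ub_Params');
gl.uniformBlockBinding(program, blockIndex, 1);
gl.bindBufferBase(gl.UNIFORM_BUFFER, 1, ubo);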

Uniforms in WebGPU

WebGPU offers an even better approach: individual uniform variables are no longer supported, and all work is done through uniform buffers.

WGSL:

struct Params {
   u_LightPos : vec4<f32>,
   u_LightColor : vec4<f32>,
   u_LightDirection : vec4<f32>,
}
@group(0) @binding(0) var<uniform> ub_Params : Params;

JavaScript:

const buffer = device.createBuffer({
  usage: GPUBufferUsage.UNIFORM | GPUBufferUsage.COPY_DST,
  size: 48 // three vec4<f32> fields, 16 bytes each
});

Modern GPUs prefer data to be loaded in one large block, rather than many small ones. Instead of recreating and rebinding small buffers each time, consider creating one large buffer and using different parts of it for different draw calls. This approach can significantly increase performance.
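As a hedged sketch (the bind group layout, pass encoder, and draw counts are assumed), dynamic uniform buffer offsets make this pattern straightforward:

// One large buffer; each draw call gets a 256-byte-aligned slice.
const alignment = 256; // typical minUniformBufferOffsetAlignment
const bigBuffer = device.createBuffer({
  usage: GPUBufferUsage.UNIFORM | GPUBufferUsage.COPY_DST,
  size: alignment * drawCallCount
});

// The layout's buffer entry must declare hasDynamicOffset: true.
const bindGroup = device.createBindGroup({
  layout: bindGroupLayout,
  entries: [{
    binding: 0,
    resource: { buffer: bigBuffer, size: 48 }
  }]
});

// Select the slice for each draw call via a dynamic offset.
passEncoder.setBindGroup(0, bindGroup, [drawIndex * alignment]);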

WebGL is more imperative: each call mutates global state, and the API strives to be as simple as possible. WebGPU, on the other hand, aims to be more object-oriented and focused on resource reuse, which leads to better efficiency.

Transitioning from WebGL to WebGPU may seem difficult due to differences in methods. However, starting with a transition to WebGL 2 as an intermediate step can simplify your life.

Shaders

Migrating from WebGL to WebGPU requires changes not only in the API, but also in shaders. The WGSL specification is designed to make this transition smooth and intuitive, while maintaining efficiency and performance for modern GPUs.

Shader Language: GLSL vs WGSL

WGSL is designed to be a bridge between WebGPU and native graphics APIs. Compared to GLSL, WGSL looks a bit more verbose, but the structure remains familiar.

Here's an example of a fragment shader that samples a texture:

GLSL:

uniform sampler2D myTexture;
varying vec2 vTexCoord;
void main() {
  gl_FragColor = texture2D(myTexture, vTexCoord);
}

WGSL:

@group(0) @binding(0) var mySampler: sampler;
@group(0) @binding(1) var myTexture: texture_2d<f32>;

@fragment
fn main(@location(0) vTexCoord: vec2<f32>) -> @location(0) vec4<f32> {
  return textureSample(myTexture, mySampler, vTexCoord);
}

Comparison of Data Types

The table below shows a comparison of the basic and matrix data types in GLSL and WGSL:

| GLSL | WGSL |
|------|------|
| float | f32 |
| int | i32 |
| uint | u32 |
| bool | bool |
| vec2 | vec2<f32> |
| vec3 | vec3<f32> |
| vec4 | vec4<f32> |
| ivec2 | vec2<i32> |
| uvec2 | vec2<u32> |
| mat2 | mat2x2<f32> |
| mat3 | mat3x3<f32> |
| mat4 | mat4x4<f32> |

Transitioning from GLSL to WGSL demonstrates a desire for stricter typing and explicit definition of data sizes, which can improve code readability and reduce the likelihood of errors.

Structures

The syntax for declaring structures has also changed:

GLSL:

struct Light {
  vec3 position;
  vec4 color;
  float attenuation;
  vec3 direction;
  float innerAngle;
  float angle;
  float range;
};

WGSL:

struct Light {
  position: vec3<f32>,
  color: vec4<f32>,
  attenuation: f32,
  direction: vec3<f32>,
  innerAngle: f32,
  angle: f32,
  range: f32,
};

Introducing explicit syntax for declaring fields in WGSL structures emphasizes the desire for greater clarity and simplifies understanding of data structures in shaders.

Function Declarations

GLSL:

float saturate(float x) {
	return clamp(x, 0.0, 1.0);
}

WGSL:

fn saturate(x: f32) -> f32 {
  return clamp(x, 0.0, 1.0);
}

Changing the syntax of functions in WGSL reflects the unification of the approach to declarations and return values, making the code more consistent and predictable.

Built-in functions

In WGSL, many built-in GLSL functions have been renamed or replaced. For example:

| GLSL | WGSL |
|------|------|
| texture(t, uv) | textureSample(t, s, uv) |
| dFdx(x) | dpdx(x) |
| dFdy(x) | dpdy(x) |
| inversesqrt(x) | inverseSqrt(x) |
| atan(y, x) | atan2(y, x) |

Renaming built-in functions in WGSL not only simplifies their names, but also makes them more intuitive, which can facilitate the transition process for developers familiar with other graphics APIs.

Shader Conversion

For those who are planning to convert their projects from WebGL to WebGPU, it's important to know that there are tools for automatically converting GLSL to WGSL, such as **[Naga](https://github.com/gfx-rs/naga/)**, which is a Rust library for converting GLSL to WGSL. It can even work right in your browser with the help of WebAssembly.

Naga supports the following endpoints: WGSL, SPIR-V, and GLSL as inputs, and WGSL, SPIR-V, GLSL, HLSL, and Metal Shading Language (MSL) as outputs.

Convention Differences

Textures

After migration, you may encounter a surprise in the form of flipped images. Those who have ever ported applications from OpenGL to Direct3D (or vice versa) have already faced this classic problem.

In OpenGL and WebGL, textures are usually loaded in such a way that the starting pixel corresponds to the bottom-left corner. However, in practice, many developers load images starting from the top-left corner, which leads to a flipped image. That error is often compensated for elsewhere (for example, by flipped texture coordinates), so the two mistakes cancel each other out.

Unlike OpenGL, systems such as Direct3D and Metal traditionally use the upper-left corner as the starting point for textures. Considering that this approach seems to be the most intuitive for many developers, the creators of WebGPU decided to follow this practice.
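Both APIs let you resolve this at texture upload time. A minimal sketch (assuming an already-decoded image/imageBitmap and existing glTexture/gpuTexture objects):

// WebGL: flip rows during upload so the image isn't upside down.
gl.bindTexture(gl.TEXTURE_2D, glTexture);
gl.pixelStorei(gl.UNPACK_FLIP_Y_WEBGL, true);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image);

// WebGPU: the copy itself can flip the source if it's bottom-up.
device.queue.copyExternalImageToTexture(
  { source: imageBitmap, flipY: true },
  { texture: gpuTexture },
  [imageBitmap.width, imageBitmap.height]
);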

Viewport Space

If your WebGL code reads pixels from the framebuffer, be prepared for the fact that WebGPU uses a different coordinate system; you may need to apply a simple "y = 1.0 - y" operation to correct the coordinates.

Clip Spaces

When objects are clipped or disappear earlier than expected, the cause is often a difference in the depth range: WebGL and WebGPU define the depth range of clip space differently. While WebGL uses a range from -1 to 1, WebGPU uses a range from 0 to 1, like other graphics APIs such as Direct3D, Metal, and Vulkan. The 0-to-1 range was chosen because of several advantages identified while working with those APIs.

The main responsibility for transforming your model's positions into clip space lies with the projection matrix. The simplest way to adapt your code is to ensure that your projection matrix outputs results in the range of 0 to 1. For those using libraries such as gl-matrix, there is a simple solution: instead of using the perspective function, you can use perspectiveZO; similar functions are available for other matrix operations.

if (webGPU) {
  // Creates a matrix for a symmetric perspective-view frustum
  // with a depth range of 0 to 1, as WebGPU expects.
  mat4.perspectiveZO(out, Math.PI / 4, ...);
} else {
  // Creates a matrix for a symmetric perspective-view frustum
  // with the default -1 to 1 depth range used by WebGL.
  mat4.perspective(out, Math.PI / 4, ...);
}

However, sometimes you may have an existing projection matrix and you can't change its source. In this case, to transform it into a range from 0 to 1, you can pre-multiply your projection matrix by another matrix that corrects the depth range.
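A hedged sketch of that correction with gl-matrix (column-major order): the matrix below remaps z from the [-1, 1] range to [0, 1] when applied after an existing WebGL-style projection matrix:

// Maps z' = 0.5 * z + 0.5 * w, leaving x, y, and w untouched.
const depthCorrection = mat4.fromValues(
  1, 0, 0,   0,
  0, 1, 0,   0,
  0, 0, 0.5, 0,
  0, 0, 0.5, 1
);

// Pre-multiply: the correction is applied after the projection.
mat4.multiply(projection, depthCorrection, projection);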

WebGPU Tips & Tricks

Now, let's discuss some tips and tricks for working with WebGPU.

Minimize the number of pipelines you use

The more pipelines you use, the more state switching you have, and the worse your performance. Keeping the pipeline count down may not be trivial, depending on where your assets come from.
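One common way to approach this (a sketch, not a prescription) is to cache pipelines by a key derived from the state that actually varies between materials:

const pipelineCache = new Map();

function getPipeline(device, key, createDescriptor) {
  // Materials that share the same key share one pipeline object.
  let pipeline = pipelineCache.get(key);
  if (!pipeline) {
    pipeline = device.createRenderPipeline(createDescriptor());
    pipelineCache.set(key, pipeline);
  }
  return pipeline;
}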

Create pipelines in advance

Creating a pipeline and using it immediately may work, but this is not recommended. Pipeline creation functions return immediately, while the actual compilation work continues on a different thread; when you use the pipeline, the execution queue has to wait for that pending compilation to finish, and this can cause significant performance issues. To avoid this, make sure to leave some time between creating a pipeline and first using it.

Or, even better, use the create*PipelineAsync variants! The promise resolves when the pipeline is ready to use, without any stalling.

device.createComputePipelineAsync({
  layout: 'auto',
  compute: {
    module: shaderModule,
    entryPoint: 'computeMain'
  }
}).then((pipeline) => {
  const commandEncoder = device.createCommandEncoder();
  const passEncoder = commandEncoder.beginComputePass();
  passEncoder.setPipeline(pipeline);
  passEncoder.setBindGroup(0, bindGroup);
  passEncoder.dispatchWorkgroups(128);
  passEncoder.end();
  device.queue.submit([commandEncoder.finish()]);
});

Use RenderBundles

Render bundles are pre-recorded, partial, reusable render passes. They can contain most rendering commands (except for things like setting the viewport) and can be "replayed" as part of an actual render pass later on.
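Recording a bundle looks much like recording a normal pass (a minimal sketch; renderPipeline and the color format are assumed from earlier examples):

const bundleEncoder = device.createRenderBundleEncoder({
  colorFormats: ['bgra8unorm']
});
bundleEncoder.setPipeline(renderPipeline);
bundleEncoder.draw(3);
const renderBundle = bundleEncoder.finish();

The bundle can then be replayed inside any compatible render pass: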

const renderPass = encoder.beginRenderPass(descriptor);

renderPass.setPipeline(renderPipeline);
renderPass.draw(3);

renderPass.executeBundles([renderBundle]);

renderPass.setPipeline(renderPipeline);
renderPass.draw(3);

renderPass.end();

Render bundles can be executed alongside regular render pass commands, and the render pass state is reset to defaults before and after every bundle execution. Bundles primarily reduce the JavaScript overhead of issuing draw commands; GPU performance remains the same regardless of the approach.

Summary

Transitioning to WebGPU means more than just switching graphics APIs. It's also a step towards the future of web graphics, combining successful features and practices from various graphics APIs. This migration requires a thorough understanding of technical and philosophical changes, but the benefits are significant.

Written by dmitrii | Crafting mobile games and robust backend systems for over a decade