
Ray Marching in Three.js

A simplified approach to rendering SDFs on the web

10 min read · 2024-05-16

Ray marching is a rendering technique that differs from the standard pipeline in 3D graphics. It is, however, similar to traditional ray tracing with the difference being that we iteratively march the ray towards the objects to find the intersection point.

This iterative nature of ray marching allows it to render complex, mathematically defined surfaces such as fractals or volumetric effects like clouds: shapes that are difficult or impractical to represent with the polygon meshes used by standard ray tracing or rasterization.

In this article, we’ll create the basic setup for a simplified 3D scene in three.js: two spheres rendered through ray marching. Below is our final result.

Final Result

If you are interested in the code only, click here. This will take you to a CodePen example.

Also, if you want to watch a video version of this article, check out this video:

The Algorithm

In short, we define a ray for each pixel, then iteratively march that ray through the scene until we find an intersection point, at which point we calculate the colour for the pixel according to what we have hit.

Pixels and rays

For a live 2D demo, you can check out this CodePen example:

To simplify the algorithm, we can break it down into three functions:

  1. The SDF equation of our scene, which we will call scene(vec3 p). It accepts a vec3 p representing our current position in the scene and outputs the shortest distance from p to the objects in our scene.
  2. The marching loop, which we will call rayMarch(vec3 ro, vec3 rd). It accepts a ray origin and a ray direction (both of which we will calculate for every pixel). This function calls scene on every iteration to find the current distance to the objects in our scene, and stops once we have an intersection point or once we are too far from the scene. The final output is the total distance covered.
  3. The main function, which calculates the ray origin and ray direction and calls rayMarch. Once we have the total distance marched, we can compute the intersection position and perform the various colour and normal calculations there.

We will write the code for these steps later in the article.
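To make the three steps concrete before we write any GLSL, here is a rough plain-JavaScript sketch of the same loop, using a hypothetical one-sphere scene (the vector helpers and values are illustrative only, not part of the final shader code):

```javascript
// Minimal vector helpers (illustrative only)
const len = (a) => Math.hypot(...a);
const addScaled = (a, b, t) => a.map((v, i) => v + b[i] * t); // a + b * t

// 1. scene(p): shortest distance from p to the objects (here, a unit sphere at the origin)
const scene = (p) => len(p) - 1;

// 2. rayMarch(ro, rd): march along the ray until we hit something or give up
function rayMarch(ro, rd, maxSteps = 100, maxDis = 1000, eps = 0.001) {
  let d = 0; // total distance travelled
  for (let i = 0; i < maxSteps; i++) {
    const cd = scene(addScaled(ro, rd, d)); // scene distance at the current point
    if (cd < eps || d >= maxDis) break;     // hit, or marched too far
    d += cd;
  }
  return d;
}

// 3. "main": one ray per pixel; here, a single ray fired straight at the sphere
const dist = rayMarch([0, 0, -5], [0, 0, 1]);
console.log(dist); // 4: the ray starts 5 units from the centre and stops at the surface
```

Because the sphere SDF is exact, this particular ray converges in a single step; more complex scenes take many small steps near the surface.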

A small note about ray-marched objects

Objects in our scene are not defined with polygonal 3D models, as is usually the case with standard rasterization or ray tracing. Instead, the objects are defined as a set of signed distance functions (SDFs) that can be combined to produce complex scenes.

For example, if we want to render a simple sphere on our screen, we would use the SDF equation:

float sdfSphere(vec3 rayPos, vec3 spherePos, float sphereRadius) {  
  return distance(rayPos, spherePos) - sphereRadius;  
}

This article, however, focuses on showcasing ray marching in three.js. If you want to learn more about constructing SDFs, check out Inigo Quilez’s article about 3D SDFs.
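As a quick illustration of how SDFs combine (in plain JavaScript rather than GLSL; the function names here are ours, not from the final shader), taking the `min` of two sphere SDFs yields the distance to their union:

```javascript
// Sphere SDF, mirroring the pseudocode above
const sdfSphere = (p, c, r) =>
  Math.hypot(p[0] - c[0], p[1] - c[1], p[2] - c[2]) - r;

// min() of two SDFs is the SDF of their union
const sceneSDF = (p) =>
  Math.min(sdfSphere(p, [-2, 0, 0], 1), sdfSphere(p, [2, 0, 0], 1));

console.log(sceneSDF([-1, 0, 0])); // 0: exactly on the first sphere's surface
console.log(sceneSDF([0, 0, 0]));  // 1: one unit away from both spheres
```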

Three.js Setup

Now that we have a good idea of what ray marching is, we can start building our three.js scene. We will begin with the usual setup:

// Imports  
import * as THREE from "https://esm.sh/three";  
import { OrbitControls } from "https://esm.sh/three/examples/jsm/controls/OrbitControls";  
  
// Create a scene  
const scene = new THREE.Scene();  
  
// Create a camera  
const camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 1000);  
camera.position.z = 5;  
  
// Create a renderer  
const renderer = new THREE.WebGLRenderer();  
renderer.setSize(window.innerWidth, window.innerHeight);  
document.body.appendChild(renderer.domElement);  
  
// Set background color  
const backgroundColor = new THREE.Color(0x3399ee);  
renderer.setClearColor(backgroundColor, 1);  
  
// Add orbit controls  
const controls = new OrbitControls(camera, renderer.domElement);  
controls.maxDistance = 10;  
controls.minDistance = 2;  
controls.enableDamping = true;  
  
// Add directional light  
const light = new THREE.DirectionalLight(0xffffff, 1);  
light.position.set(1, 1, 1);  
scene.add(light);

Here we created a basic three.js scene where we added a camera, a renderer, controls (to make movement in our scene easier) and a light.

Converting to a Ray marching scene

With that done, we need to convert our viewport into a ray marching one. We can do this by creating a plane with a custom fragment shader, placed exactly on the near plane of our camera.

Three.js raymarching scene diagram

The following is the code for adding this special plane:

// add screen plane  
const geometry = new THREE.PlaneGeometry();  
const material = new THREE.ShaderMaterial();  
const rayMarchPlane = new THREE.Mesh(geometry, material);  
  
// Get the width and height of the near plane  
const nearPlaneWidth = camera.near * Math.tan(THREE.MathUtils.degToRad(camera.fov / 2)) * camera.aspect * 2;  
const nearPlaneHeight = nearPlaneWidth / camera.aspect;  
rayMarchPlane.scale.set(nearPlaneWidth, nearPlaneHeight, 1);  
  
scene.add(rayMarchPlane);
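The sizing comes from the frustum geometry: the near plane is `2 * near * tan(fov / 2)` units tall, and its width is that height times the aspect ratio (the snippet above computes the width first and divides by the aspect, which is equivalent). A quick standalone check, using hypothetical values matching the earlier camera (fov 75, near 0.1) and a 16:9 viewport:

```javascript
const degToRad = (deg) => deg * (Math.PI / 180);

// Hypothetical values: fov and near match the camera created earlier
const fov = 75, near = 0.1, aspect = 16 / 9;

// Height of the near plane from the frustum geometry, then width via the aspect ratio
const nearPlaneHeight = 2 * near * Math.tan(degToRad(fov / 2));
const nearPlaneWidth = nearPlaneHeight * aspect;

console.log(nearPlaneHeight.toFixed(4)); // 0.1535
console.log(nearPlaneWidth.toFixed(4));  // 0.2728
```

Scaling the 1×1 plane to exactly these dimensions makes it fill the viewport when it sits on the near plane.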

And the following is for updating the position of our plane as the camera moves:

// Needed inside update function  
let cameraForwardPos = new THREE.Vector3(0, 0, -1);  
const VECTOR3ZERO = new THREE.Vector3(0, 0, 0); // scratch target: getWorldDirection writes its result into this vector  
  
let time = Date.now();  
  
// Render the scene  
const animate = () => {  
  requestAnimationFrame(animate);  
    
  // Update screen plane position and rotation  
  cameraForwardPos = camera.position.clone().add(camera.getWorldDirection(VECTOR3ZERO).multiplyScalar(camera.near));  
  rayMarchPlane.position.copy(cameraForwardPos);  
  rayMarchPlane.rotation.copy(camera.rotation);  
  
  renderer.render(scene, camera);  
    
  uniforms.u_time.value = (Date.now() - time) / 1000;  
    
  controls.update();  
}  
animate();

Moreover, in case you resize your screen, you can add the following code to resize the plane geometry as well:

// Handle window resize  
window.addEventListener('resize', () => {  
  camera.aspect = window.innerWidth / window.innerHeight;  
  camera.updateProjectionMatrix();  
  
  const nearPlaneWidth = camera.near * Math.tan(THREE.MathUtils.degToRad(camera.fov / 2)) * camera.aspect * 2;  
  const nearPlaneHeight = nearPlaneWidth / camera.aspect;  
  rayMarchPlane.scale.set(nearPlaneWidth, nearPlaneHeight, 1);  
  
  if (renderer) renderer.setSize(window.innerWidth, window.innerHeight);  
});

At this point, you should see a red screen on your canvas. This is because we defined our shader material without a vertex and fragment shader, so we will need to do that next.

The Shaders

I will assume that you are familiar with shaders in 3D graphics. In essence, they are programs that run on the GPU and affect the final render of our three.js objects.

Generally speaking, an object’s material has two shader programs attached: a vertex shader that transforms the positions of the model’s vertices, and a fragment shader that determines the final output colour of the pixels covering the model.

We can also pass data from our CPU to the GPU by defining uniforms that we can add to our three.js shader material.

Vertex shader

We first need to define our vertex shader code. This will be pretty straightforward for our purposes, as we won’t do much in this shader.

out vec2 vUv; // to send to fragment shader  
  
void main() {  
    // Output vertex position  
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);  
  
    // Pass the UV coordinates to the fragment shader  
    vUv = uv;  
}

Uniforms

Before we define our fragment shader, we should pass some data from our CPU to our GPU like camera position, light direction, and other things.

// Add uniforms  
const uniforms = {  
  u_eps: { value: 0.001 },  
  u_maxDis: { value: 1000 },  
  u_maxSteps: { value: 100 },  
  
  u_clearColor: { value: backgroundColor },  
  
  u_camPos: { value: camera.position },  
  u_camToWorldMat: { value: camera.matrixWorld },  
  u_camInvProjMat: { value: camera.projectionMatrixInverse },  
  
  u_lightDir: { value: light.position },  
  u_lightColor: { value: light.color },  
  
  u_diffIntensity: { value: 0.5 },  
  u_specIntensity: { value: 3 },  
  u_ambientIntensity: { value: 0.15 },  
  u_shininess: { value: 16 },  
  
  u_time: { value: 0 },  
};  
material.uniforms = uniforms;

Fragment shader

Now that we have everything set up, we will need to define the fragment shader for our screen plane, which will hold all of our ray marching functions.

However, let’s first start by defining the uniforms inside the fragment shader. This is important as it allows us to use the data passed from the CPU inside of our shader.

precision mediump float;  
  
// From vertex shader  
in vec2 vUv;  
  
// From CPU  
uniform vec3 u_clearColor;  
  
uniform float u_eps;  
uniform float u_maxDis;  
uniform int u_maxSteps;  
  
uniform vec3 u_camPos;  
uniform mat4 u_camToWorldMat;  
uniform mat4 u_camInvProjMat;  
  
uniform vec3 u_lightDir;  
uniform vec3 u_lightColor;  
  
uniform float u_diffIntensity;  
uniform float u_specIntensity;  
uniform float u_ambientIntensity;  
uniform float u_shininess;  
  
uniform float u_time;

One thing to note is where the vUv vec2 comes from. It is defined inside our vertex shader and passed to our fragment shader to identify the UV position of the pixel. In other words, it tells us which pixel we are shading, which is important for calculating each pixel’s ray origin and direction.

Scene function

With that, let’s define our scene function:

float smin(float a, float b, float k) { // smooth min function  
  float h = clamp(0.5 + 0.5 * (b - a) / k, 0.0, 1.0);  
  return mix(b, a, h) - k * h * (1.0 - h);  
}  
  
float scene(vec3 p) {  
  // distance to sphere 1  
  float sphere1Dis = distance(p, vec3(cos(u_time), sin(u_time), 0)) - 1.;  
  
  // distance to sphere 2  
  float sphere2Dis = distance(p, vec3(sin(u_time), cos(u_time), 0)) - 0.75;  
  
  // return the minimum distance between the two spheres smoothed by 0.5  
  return smin(sphere1Dis, sphere2Dis, 0.5);  
}

Our scene is made of two spheres that move around and smoothly merge as they get closer to each other.
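A plain-JavaScript port of `smin` makes its behaviour easy to verify: when the two distances are far apart it behaves exactly like `min`, and it only blends them (dipping slightly below the true minimum, which creates the merging effect) when they are within `k` of each other:

```javascript
const clamp = (x, lo, hi) => Math.min(Math.max(x, lo), hi);
const mix = (a, b, t) => a * (1 - t) + b * t;

// Polynomial smooth minimum, ported from the GLSL above
function smin(a, b, k) {
  const h = clamp(0.5 + 0.5 * (b - a) / k, 0, 1);
  return mix(b, a, h) - k * h * (1 - h);
}

console.log(smin(5, 1, 0.5)); // 1: far apart, behaves like plain min
console.log(smin(1, 1, 0.5)); // 0.875: close together, blended below min
```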

Ray March function

Now for our ray marcher, we can define it as follows:

float rayMarch(vec3 ro, vec3 rd)  
{  
    float d = 0.; // total distance travelled  
    float cd; // current scene distance  
    vec3 p; // current position of ray  
  
    for (int i = 0; i < u_maxSteps; ++i) { // main loop  
        p = ro + d * rd; // calculate new position  
        cd = scene(p); // get scene distance  
          
        // if we have hit anything or our distance is too big, break loop  
        if (cd < u_eps || d >= u_maxDis) break;  
  
        // otherwise, add new scene distance to total distance  
        d += cd;  
    }  
  
    return d; // finally, return the total distance travelled  
}

As explained before, this is where the iterative process of ray marching happens: we first calculate where we are in the scene; if we have hit anything or are too far away, we return the total distance travelled; otherwise, we add the current scene distance to our total and continue the loop.

Colouring functions

Before we start on our main function, we will need a few helper functions for colouring. These are:

  1. A scene colour function that gives us the colour at our hit point. Usually, you can make it mirror our scene function, but instead of returning the distance, we return the colour of whichever object we are closer to:
vec3 sceneCol(vec3 p) {  
  float sphere1Dis = distance(p, vec3(cos(u_time), sin(u_time), 0)) - 1.;  
  float sphere2Dis = distance(p, vec3(sin(u_time), cos(u_time), 0)) - 0.75;  
  
  float k = 0.5; // The same parameter used in the smin function in "scene"  
  float h = clamp(0.5 + 0.5 * (sphere2Dis - sphere1Dis) / k, 0.0, 1.0);  
  
  vec3 color1 = vec3(1, 0, 0); // Red  
  vec3 color2 = vec3(0, 0, 1); // Blue  
  
  return mix(color1, color2, h);  
}
  2. A normal function that returns the scene normal at our hit point. There are many ways to compute this, but Inigo Quilez’s method seems to be the most efficient:
vec3 normal(vec3 p) // from https://iquilezles.org/articles/normalsSDF/  
{  
    vec3 n = vec3(0, 0, 0);  
    vec3 e;  
    for (int i = 0; i < 4; i++) {  
        e = 0.5773 * (2.0 * vec3((((i + 3) >> 1) & 1), ((i >> 1) & 1), (i & 1)) - 1.0);  
        n += e * scene(p + e * u_eps);  
    }  
    return normalize(n);  
}
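The tetrahedron trick above samples the SDF at four offset points and sums the weighted offsets. A JavaScript port, checked against a hypothetical one-sphere scene (whose true normal at a surface point `p` is simply `p` normalized), shows it recovers the expected direction:

```javascript
const scene = (p) => Math.hypot(...p) - 1; // unit sphere at the origin (hypothetical scene)
const eps = 0.001;

// Tetrahedron-based normal estimation, ported from the GLSL above
function normal(p) {
  const n = [0, 0, 0];
  for (let i = 0; i < 4; i++) {
    // One of the four tetrahedron directions, scaled by 0.5773 ≈ 1/sqrt(3)
    const e = [((i + 3) >> 1) & 1, (i >> 1) & 1, i & 1].map((v) => 0.5773 * (2 * v - 1));
    const d = scene(p.map((v, j) => v + e[j] * eps));
    for (let j = 0; j < 3; j++) n[j] += e[j] * d;
  }
  const l = Math.hypot(...n);
  return n.map((v) => v / l);
}

// On the unit sphere's surface at (1, 0, 0), the true normal points along +x
console.log(normal([1, 0, 0]));
```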

Main Fragment function

Finally, with these functions, we can define our main fragment function:

void main() {  
    // Get UV from vertex shader  
    vec2 uv = vUv.xy;  
  
    // Get ray origin and direction from camera uniforms  
    vec3 ro = u_camPos;  
    vec3 rd = (u_camInvProjMat * vec4(uv*2.-1., 0, 1)).xyz;  
    rd = (u_camToWorldMat * vec4(rd, 0)).xyz;  
    rd = normalize(rd);  
      
    // Ray marching and find total distance travelled  
    float disTravelled = rayMarch(ro, rd); // use normalized ray  
  
    // Find the hit position  
    vec3 hp = ro + disTravelled * rd;  
      
    // Get normal of hit point  
    vec3 n = normal(hp);  
  
    if (disTravelled >= u_maxDis) { // if ray doesn't hit anything  
        gl_FragColor = vec4(u_clearColor,1);  
    } else { // if ray hits something  
        // Calculate diffuse, specular, and ambient terms  
        float dotNL = dot(n, normalize(u_lightDir));  
        float diff = max(dotNL, 0.0) * u_diffIntensity;  
        float spec = pow(diff, u_shininess) * u_specIntensity; // cheap approximation of a specular highlight  
        float ambient = u_ambientIntensity;  
          
        vec3 color = u_lightColor * (sceneCol(hp) * (spec + ambient + diff));  
        gl_FragColor = vec4(color,1); // color output  
    }  
}

Here we calculate the ray origin and direction for the pixel, find the total distance travelled by calling the ray marching function, and then, using the normal and colour information of the hit point, compute the final pixel colour with a simple Phong-style lighting model (ambient, diffuse, and specular terms).
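In isolation, the diffuse term is just the clamped dot product between the surface normal and the (normalized) light direction, scaled by an intensity. A plain-JavaScript sketch with hypothetical vectors:

```javascript
const dot = (a, b) => a.reduce((s, v, i) => s + v * b[i], 0);

// Lambertian diffuse: brightest when the normal faces the light, zero when facing away
const diffuse = (n, lightDir, intensity) => Math.max(dot(n, lightDir), 0) * intensity;

console.log(diffuse([0, 1, 0], [0, 1, 0], 0.5));  // 0.5: normal points straight at the light
console.log(diffuse([0, -1, 0], [0, 1, 0], 0.5)); // 0: facing away, no diffuse contribution
```

The `max(…, 0.0)` clamp is what keeps surfaces facing away from the light from receiving negative light.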

Adding the shaders to our Three.js Scene

To pass these shaders into the rendering plane we defined with three.js, we can simply put our shaders in strings and pass those strings into our shader material.

const vertCode = `  
// ... Place the vertex shader here ... //  
`;  
  
const fragCode = `  
// ... Place the uniforms code first... //  
// ... Then scene, rayMarch, sceneCol, and normal ... //  
// ... Finally, add main ... //  
`  
  
material.vertexShader = vertCode;  
material.fragmentShader = fragCode;

And that’s basically it: you now have a ray marching scene inside your three.js scene.

If you are interested in the code only, you can check out the following CodePen:

Conclusion

We can render a ray marching scene in three.js by setting up a simple plane, fixing that plane into the near plane of the camera, and adding the ray marching code into the fragment shader of that plane.

The CodePen above is an example of doing this in pure JavaScript, but if you are interested in adding this code to your React website, you can check out this live example.

I hope you enjoyed this article!

Cheers!



