Recreating Real-World Terrain With React, Three.js & WebGL Shaders
Flight Simulator, Heard Of It?
You might know about Microsoft Flight Simulator, released in August last year. If, like me, you're interested in flight simulators or photorealistic recreations of real-life places, you might have spent a lot of time staring in awe at the amazing detail crammed into the latest iteration of the series.
Flight Simulator uses satellite scans and terrain data from Bing Maps to generate the terrain in real time. I won't go into the details, but you can get the gist of what's going on here. It's safe to say that the results are some of the most accurate recreations of real-life places, with some people saying they could even spot their house if they flew over their town.
This article is going to help you recreate something similar, albeit in much lower detail and on a much, much smaller scale. However, I think you'll find it's exciting nonetheless.
So let's get started.
WebGL and Three.js
Ever since WebGL brought the immense capabilities of OpenGL to the web, developing 3D environments with complex textures and lighting has been possible on what is arguably the most accessible platform there is. It gave developers a way to showcase their work on any device that ships with a WebGL-capable browser, and today that covers a huge proportion of the user base.
Three.js made things even easier for developers by allowing them to write their code in JavaScript and interact with browser APIs like the DOM, audio APIs and WebSockets, just to name a few. Three.js still allows you to drop down to the WebGL layer by calling its APIs directly.
More recently, a library called react-three-fiber took this one step further, by providing React developers with a reconciler that would allow them to effortlessly write their Three.js code declaratively using JSX.
Baby steps
We're going to try to recreate a piece of terrain in Three.js and make it as photorealistic as possible.
But first, let's get a minimum working app set up so that we have something tangible.
We'll start off by creating a small scene in CodeSandbox. This scene will contain:
- A single square plane in 3D space that will eventually turn into a piece of terrain.
- A light source so that we can see it.
- And controls, so that we can look around.
Here is a CodeSandbox with our starting point, with details in the code comments:
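For reference, a minimal sketch of such a scene might look something like this (assuming the @react-three/fiber and @react-three/drei packages; the actual sandbox code may differ):

// A minimal sketch of the starting scene, not the exact sandbox code.
import React from "react";
import { Canvas } from "@react-three/fiber";
import { OrbitControls } from "@react-three/drei";

export default function App() {
  return (
    <Canvas camera={{ position: [0, 2, 3] }}>
      {/* A light source so that we can see the scene */}
      <pointLight position={[10, 10, 10]} />
      {/* The flat square plane that will become our terrain */}
      <mesh rotation={[-Math.PI / 2, 0, 0]}>
        <planeGeometry args={[2, 2]} />
        <meshBasicMaterial color="green" />
      </mesh>
      {/* Controls, so that we can look around */}
      <OrbitControls />
    </Canvas>
  );
}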
For a more complete guide to how this code works, you should check out Vikrant's article on our old blog explaining the basics of react-three-fiber.
The "terrain" is just a green square right now. So, our next steps are going to involve getting the data we need to make it look more earthly.
First let's pick a nice spot on the Earth.
The Location
For this article, I'll be using a real-life location. Uluru (also known as Ayers Rock) is a natural rock formation and UNESCO World Heritage Site in central Australia. It is made of sandstone, has a distinctive red-brown color and is surrounded by the arid landscape of the Uluṟu-Kata Tjuṯa National Park. A perfect test location.
It is one of the featured sites on Google Earth.
Beautiful, isn't it? Almost like the surface of Mars. I chose this spot because the terrain has some prominent features that would be interesting to recreate and the variations in terrain height are distinct.
We'll be recreating the terrain in two steps:
- First, we'll modify the flat plane we drew earlier so that it has the shape of the terrain.
- Then, we'll add textures, so that it looks like the real thing.
For both steps, we'll need different kinds of data, so we'll get those before each step.
Mapping The Terrain Height
In many cases, the easiest way to describe something is with a picture. In the case of computer graphics, images can be one of the most efficient forms of encoding data, with the added advantage of allowing humans to easily perceive that data.
The first data we need is the height or altitude mapping of each point on the terrain. One way to do this is to map points on the Earth's surface to pixels in an image, with each pixel representing some data about a point or region on the surface.
The most natural mapping one could think of is: the lighter the pixel, the higher up the point is on the Earth.
Lucky for us, there are a few tools that can give us exactly this. One pretty popular example is terrain.party, which is used by the Cities: Skylines community to generate terrains in-game.
But, we'll be using a different free online tool called Tangrams Heightmapper.
Here's what the heightmap looks like for all of Australia:
We need a specific part of Australia. So, I did the work of zooming into the coordinates for Uluru:
tangrams.github.io/heightmapper/#15.19444/-..
The white part is Uluru, which is higher up, while the almost-black surroundings are the ground around the rock formation.
Tangrams does the hard work of making sure that the lowest level (~500m above sea level) is black (#000), while the peak of Uluru (~800m above sea level) is white (#fff), and the rest is linearly mapped in between the two.
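In code, that linear remap might look something like this (a rough sketch; the elevation bounds are approximate and depend on the region you export):

// Sketch: linearly remap elevation (metres) to a grayscale value (0-255).
const minElevation = 500; // approx. lowest ground around the rock
const maxElevation = 800; // approx. the peak of Uluru

const toGray = (elevation) =>
  Math.round(((elevation - minElevation) / (maxElevation - minElevation)) * 255);

toGray(500); // 0   -> #000 (black)
toGray(650); // 128 -> mid-gray
toGray(800); // 255 -> #fff (white)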
After exporting an adequate range of the terrain to a PNG, I scaled and cropped it to 1024x1024 using GIMP. It's important for later that the height and width be powers of two, because WebGL 1 only supports features like mipmapping and texture wrapping on power-of-two textures.
Here is the final heightmap:
Now we need to add this to the Three.js scene so that our plane starts to take the form of the final terrain.
But first...
A Brief Intro To Shaders
A shader is, loosely speaking, a function that decides what a pixel looks like based on several parameters. In WebGL, there are two types of shaders:
- Vertex shaders
- Fragment shaders
Let's talk about these in the context of 3D objects. The surface of any 3D object can be represented as a bunch of polygons. Usually, we use triangles, because they are simple and planar.
In a simplified sense, the vertex shader decides where the vertices of these polygons are rendered in the 3D space, while the fragment shader decides what the space between those vertices will look like.
So the vertex shader will help us use the heightmap to shape our terrain, while the fragment shader will be useful when we need to apply textures.
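To make that concrete, here's the simplest possible shader pair, written as JavaScript strings the way Three.js expects them (a bare-bones sketch, not the shaders we'll use for the terrain):

// The simplest possible shader pair, as JavaScript strings.
// projectionMatrix, modelViewMatrix and position are built-ins that
// Three.js injects into every ShaderMaterial program.
const vertexShader = `
  void main() {
    // Place each vertex using the standard camera transforms
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
  }
`;

const fragmentShader = `
  void main() {
    // Paint every fragment a flat, fully opaque green
    gl_FragColor = vec4(0.0, 1.0, 0.0, 1.0);
  }
`;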
To do this, we will need to modify our earlier code, since we won't be able to use the same material (MeshBasicMaterial) that we used in the first sandbox with the green square. Instead, we'll need a special material called ShaderMaterial that lets us pass in our hand-crafted shaders.
Applying The Heightmap Using A Vertex Shader
WebGL shaders are written in a language called GLSL, which is similar to C in some ways but gets compiled to run directly on the GPU. Each shader has a main function: a vertex shader's main runs once for every vertex, while a fragment shader's main runs once for every fragment (roughly speaking, every pixel) that ends up in the framebuffer.
In Three.js, these shaders can be passed into the ShaderMaterial as strings. As long as the string is a valid GLSL program, everything will work.
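Here's roughly how that wiring might look in react-three-fiber (a sketch: the uniform names match the vertex shader we're about to write, but the file path, plane size and segment counts are assumptions):

import * as THREE from "three";
import { useLoader } from "@react-three/fiber";
// Assumes shaders.js exports the two shader strings under these names
import { vertexShader, fragmentShader } from "./shaders";

function Terrain() {
  // Load the heightmap image as a texture (the path is an assumption)
  const heightMap = useLoader(THREE.TextureLoader, "/heightmap.png");
  return (
    <mesh rotation={[-Math.PI / 2, 0, 0]}>
      {/* Plenty of segments, so the vertex shader has vertices to displace */}
      <planeGeometry args={[2, 2, 256, 256]} />
      <shaderMaterial
        uniforms={{
          bumpTexture: { value: heightMap }, // the heightmap uniform
          bumpScale: { value: 0.5 }, // the scaling constant uniform
        }}
        vertexShader={vertexShader}
        fragmentShader={fragmentShader}
      />
    </mesh>
  );
}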
We'll make a vertex shader that takes the red component (the R in RGBA) of each pixel of the heightmap and combines it with a scaling factor to decide the height of each point above the X-Z plane (the plane our terrain lies in).
We could have used the blue and green components too, but since everything is in grayscale, those values will be identical to the red component in our heightmap.
We'll also create a rudimentary fragment shader that just makes points that are higher up a lighter shade of green, so that we can see the results rather easily.
Here's what the vertex shader will look like:
// Uniforms are data that are shared between shaders.
// They contain data that are uniform across the entire frame.
// The heightmap and the scaling constant are uniforms in this respect.
// A uniform to contain the heightmap image
uniform sampler2D bumpTexture;
// A uniform to contain the scaling constant
uniform float bumpScale;
// Varyings are variables whose values are decided in the vertex shader,
// but whose values are then needed in the fragment shader.
// A variable to store the height of the point
varying float vAmount;
// The UV mapping coordinates of a vertex
varying vec2 vUV;
void main()
{
// The "coordinates" in UV mapping representation
vUV = uv;
// The heightmap data at those coordinates
vec4 bumpData = texture2D(bumpTexture, uv);
// The heightmap is grayscale, so it doesn't matter if you use r, g, or b.
vAmount = bumpData.r;
// move the position along the normal
vec3 newPosition = position + normal * bumpScale * vAmount;
// Compute the position of the vertex using a standard formula
gl_Position = projectionMatrix * modelViewMatrix * vec4(newPosition, 1.0);
}
Here's the working example:
Cool, huh?
You can find the code for both shaders in shaders.js.
Try increasing the bumpScale variable in App.js to exaggerate the heights of everything.
So we've mapped the terrain height to our ShaderMaterial using a heightmap and careful use of a vertex shader. But it doesn't look like terrain yet.
What we need now is textures.
Getting The Terrain Textures
We need real textures if we want our terrain to look anything like the real thing.
One of the easiest ways to get photorealistic textures is to get your hands on satellite images.
I took the easy way out and used Mapbox to get a bunch of screenshots of the area around Uluru and merged them into a single image using GIMP.
This was actually pretty time consuming since I needed to be careful that the images were at the right zoom level and that everything would line up with the heightmap.
Since the heightmap and the textures came from different sources, I needed to resize the texture to match the heightmap's zoom level. To help with the alignment, I also used tricks like thresholding the heightmap so that its features would stand out.
The final result wasn't bad.
Mapping Terrain Textures
So, we've mapped the terrain height to our ShaderMaterial using a heightmap and careful use of a vertex shader.
We've also gotten hold of a texture map which is the exact same size and zoom level, and of the same location as the heightmap.
Now we need to create a fragment shader that can map the terrain texture onto our ShaderMaterial.
In our case, this is surprisingly simple. We just need to read the texture map image and set the color of the corresponding pixel using the fragment shader.
Here is the code for just the fragment shader:
// A uniform for the terrain texture image
uniform sampler2D terrainTexture;
// Get the varyings from the vertex shader
varying vec2 vUV;
// vAmount isn't really used, but could be if necessary
varying float vAmount;
void main()
{
// Get the color of the fragment from the texture map
// at that coordinate in the UV mapping
gl_FragColor = texture2D(terrainTexture, vUV);
}
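As an aside, if you did want to use vAmount here, you could, for example, brighten fragments that are higher up (a sketch of one possibility; this line would replace the gl_FragColor assignment above):

// One possible use of vAmount: tint higher fragments lighter
gl_FragColor = texture2D(terrainTexture, vUV) * vec4(vec3(0.6 + 0.4 * vAmount), 1.0);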
Here's the final working example. Scroll to zoom in.
Neat. We've done it! We've created a virtual Uluru with React and Three.js!
Conclusion
In this article, we've managed to set up a basic scene, learned about two types of materials (MeshBasicMaterial and ShaderMaterial), learned what shaders are and how to customise ShaderMaterial using them, and recreated a piece of real-world terrain using Three.js and React.
The results are pretty good. There are a few more things we could do, but I'll leave them for another time.
For now, I'll leave you with some helpful resources. Thanks for reading!
Further reading
- A far more in-depth 2-part tutorial on writing shaders in WebGL
- If you're interested in full-on game development, here's a really cool tutorial involving a clone of a much-loved Nintendo game.
- The cover image is taken from TextureHaven, a great place to get custom textures for 3D modelling.