In June 1996, Quake shook the FPS genre once again. When it first came out, it was one of the first games to use textured 3D models for pretty much everything (not the very first though: that honour goes to Bethesda's Terminator: Future Shock, released in late 1995). It featured a series of labyrinth-style levels full of enemies and traps, an interactive environment, and awesome graphics and atmosphere in general.
Thanks to the use of BSP trees, a technique already used in Doom, it was possible to create huge levels with little impact on performance, and Quake made extensive use of this, along with prerendered static lighting.
The first version of Quake was of course for MS-DOS. It used a heavily optimized software renderer written in C and x86 assembly (3D accelerator cards weren't really a thing until the 3dfx Voodoo came out a few months later), and it supported resolutions from 320x200 up to 1280x1024. Of course, most people had to play at 320x240, or, if you were really rich, maybe at 640x480, which in my opinion is the best way to play Quake, even today.
With the arrival of the first consumer GPUs, GLQuake also came along. It ran a lot better than the original of course, since it was hardware accelerated, but it looked terrible in comparison, because many features of the software renderer were missing; most notably the lighting was very poor, which ruined the atmosphere.
After Quake was open sourced, this terrible mess was of course improved upon, and QuakeSpasm and other really good ports now look pretty much identical to the original. One feature, however, is still missing, and it's the one I'll talk about in this article: fluids.
The original Quake used an interesting approach: textures were warped as they were drawn onto any surface flagged as fluid. This effect is not easily replicated without modern shaders, so what most modern source ports do is just tessellate the water surface and move the vertices around using the original algorithm. It works, but it doesn't look as good as the original, especially up close.
As far as I know, the only modern source port that still supports the original software renderer is Mark V WinQuake, and it's the one I recommend if you want to play Quake on a modern machine.
So how does this warp effect work? Let's take a look at this capture from the original version of Quake:
In the original Quake, a fluid surface was simply flat, with a texture applied on it, and marked as fluid so that the engine would animate it. This makes things a lot simpler because we can treat this as a regular surface with a 2D texture.
So, we have our fluid surface and an observer looking at it. We can trace a ray from the observer and onto the surface and see where they intersect.
We project this intersection point onto the surface and call its coordinates relative to the surface x and y, starting from 0,0 in the upper left corner of the surface (as is common in computer graphics).
This image sums up what I just said:
This is the texture that we're going to map on to the surface:
Let's call this tex.
Quake assumed that these textures were 64x64 for simplicity, but we will consider any size texW x texH.
Coordinates in this texture also start from the upper left corner, as usual.
If we have the x and y of the point mentioned before and a scale for the texture, mapping the texture is very simple:
mappedX=(~~(x*scale))%texW;
mappedY=(~~(y*scale))%texH;
if(mappedX<0) mappedX+=texW;
if(mappedY<0) mappedY+=texH;
Now mappedX and mappedY tell us which pixel inside tex is the one that we want to display.
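Two details worth spelling out: ~~ is JavaScript's double bitwise NOT, a quick way to truncate a number toward zero, and since % in JavaScript keeps the sign of the dividend, the two if checks are what wrap negative results back into range. A small standalone sketch (the mapCoord helper is just my name for illustration, not from the original):

```javascript
// Map a surface coordinate to a texel coordinate, tiling the texture.
// 64 is the size Quake assumed for fluid textures.
function mapCoord(v, scale, texSize){
  var m = (~~(v*scale)) % texSize; // ~~ truncates toward zero
  if(m < 0) m += texSize;          // JS % keeps the dividend's sign
  return m;
}
console.log(mapCoord(10, 1, 64)); // 10
console.log(mapCoord(70, 1, 64)); // 6  (wraps around)
console.log(mapCoord(-2, 1, 64)); // 62 (negative coordinates wrap too)
```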
JavaScript's Canvas stores image data in byte arrays of size 4*width*height, laid out as RGBARGBARGBA... starting from the upper left corner and going left to right, top to bottom, so we need to take that into account when copying the pixel.
Let's name the array representing our output surface out, of size outW x outH, and keep tex for the texture array mentioned before.
p=4*(y*outW+x); //index of the pixel to write to
tp=4*(mappedY*texW+mappedX); //index of the pixel to read from
out[p]=tex[tp];
out[p+1]=tex[tp+1];
out[p+2]=tex[tp+2];
out[p+3]=tex[tp+3];
At this point, we have our lava surface, but no animation yet.
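Putting the mapping and the pixel copy together, the static (not yet animated) version boils down to a simple double loop; drawStatic and its signature are my naming for this sketch, not from the original:

```javascript
// Draw the texture onto the output surface, tiling it; no warp yet.
// out and tex are flat RGBA byte arrays as described above.
function drawStatic(out, outW, outH, tex, texW, texH, scale){
  for(var y=0; y<outH; y++){
    for(var x=0; x<outW; x++){
      var mappedX = (~~(x*scale)) % texW;
      var mappedY = (~~(y*scale)) % texH;
      if(mappedX < 0) mappedX += texW;
      if(mappedY < 0) mappedY += texH;
      var p  = 4*(y*outW+x);             // index of the pixel to write to
      var tp = 4*(mappedY*texW+mappedX); // index of the pixel to read from
      out[p]   = tex[tp];
      out[p+1] = tex[tp+1];
      out[p+2] = tex[tp+2];
      out[p+3] = tex[tp+3];
    }
  }
}
```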
The warp is done by adding a 2D function of position and time to the mapping that we just did.
Instead of pointing to
~~(x*scale)
and ~~(y*scale)
we'll point to
~~((x+something)*scale)
and ~~((y+something)*scale)
By looking at this animation, we can see what the "something" function depends on:
These values are used as input to a sine function; its output is multiplied by a variable intensity to make the animation more or less pronounced, and the result is added to x and y before mappedX and mappedY are calculated.
The code should make this more clear:
mappedX=(~~((x/closeness+intensity*Math.sin(t*speed+y/closeness))*scale))%texW;
mappedY=(~~((y/closeness+intensity*Math.sin(t*speed+x/closeness))*scale))%texH;
if(mappedX<0) mappedX+=texW;
if(mappedY<0) mappedY+=texH;
Of course, finding values for scale, closeness, speed, and intensity that reproduce the exact effect seen in Quake is a bit tricky.
Notice that inside the Math.sin function, we have swapped x and y. This makes the phase different and creates the warp effect that we want instead of simply a "breathing" effect. Here's a comparison showing the difference.
The left one is wrong, the right one is correct.

Let's implement this algorithm in JavaScript, and draw it on a 2D Canvas element.
With JavaScript being JavaScript, we need to optimize the algorithm as much as possible. Here's my implementation:
var sinLUT=[];
for(var i=0;i<2*Math.PI;i+=0.01) sinLUT[sinLUT.length]=Math.sin(i)*16; //one period of sine, pre-scaled by 16
function sine(i){
return sinLUT[(~~(i>=0?i:-i)%sinLUT.length)]; //treat |i| as an index into the table
}
function quakeFluid(texture,canvas,scale,resScale,speed,intensity,closeness){
if(!resScale||resScale<0.1) resScale=1;
if(!speed) speed=1;
if(!intensity||intensity>1.5||intensity<-1.5) intensity=1;
if(!closeness||closeness<=0) closeness=1;
canvas.isVisible=function(){
var r=canvas.getBoundingClientRect();
return r.top+r.height>=0&&r.left+r.width>=0&&r.bottom-r.height<=(window.innerHeight||document.documentElement.clientHeight)&&r.right-r.width<=(window.innerWidth||document.documentElement.clientWidth);
}.bind(this);
canvas.qfSetResScale=function(r){
if(!r||r<0.1) r=1;
resScale=r;
canvas.prevWidth=0;
canvas.prevHeight=0;
}.bind(this);
canvas.qfGetResScale=function(){
return resScale;
}.bind(this);
canvas.qfScale=scale;
canvas.qfSpeed=speed;
canvas.qfIntensity=intensity;
canvas.qfCloseness=closeness;
canvas.style.imageRendering="pixelated";
canvas.qfSetTexture=function(texture){
var tex=new Image();
tex.src=texture;
tex.onload=function(){
var qfTex=document.createElement("canvas");
qfTex.width=tex.naturalWidth;
qfTex.height=tex.naturalHeight;
qfTex.getContext("2d").drawImage(tex,0,0);
qfTex=qfTex.getContext("2d").getImageData(0,0,tex.naturalWidth,tex.naturalHeight);
canvas.qfTexW=qfTex.width;
canvas.qfTexH=qfTex.height;
var qfTexCopy=[];
for(var i=0;i<qfTex.data.length;i++) qfTexCopy[i]=qfTex.data[i];
canvas.qfTex=qfTexCopy;
}.bind(this);
}.bind(this);
canvas.qfSetTexture(texture);
canvas.qfFrame=function(){
if(canvas.qfTex==null||!canvas.isVisible()) return;
var ctx=canvas.getContext("2d");
var out=canvas.qfFrameBuffer.data;
var t=~~(new Date().getTime()*canvas.qfSpeed);
var compScale=canvas.qfCloseness*resScale*2;
var xOff,yOff,yM,xM,txM;
for(var y=0;y<canvas.height;y++){
yM=y*canvas.width;
for(var x=0;x<canvas.width;x++){
xM=4*(yM+x);
yOff=(~~(((y/compScale)+canvas.qfIntensity*sine(t/16+(x/compScale)*2))*canvas.qfScale))%canvas.qfTexH;
yOff=(yOff>=0?yOff:(canvas.qfTexH+yOff));
xOff=(~~(((x/compScale)+canvas.qfIntensity*sine(t/16+(y/compScale)*2))*canvas.qfScale))%canvas.qfTexW;
xOff=(xOff>=0?xOff:(canvas.qfTexW+xOff));
txM=4*(yOff*canvas.qfTexW+xOff);
out[xM]=canvas.qfTex[txM];
out[xM+1]=canvas.qfTex[txM+1];
out[xM+2]=canvas.qfTex[txM+2];
out[xM+3]=canvas.qfTex[txM+3];
}
}
ctx.putImageData(canvas.qfFrameBuffer,0,0);
}.bind(this);
var raf=function(){
if(canvas.prevWidth!=canvas.clientWidth||canvas.prevHeight!=canvas.clientHeight){
var newW=~~(canvas.clientWidth*resScale), newH=~~(canvas.clientHeight*resScale);
canvas.width=newW>8?newW:8;
canvas.height=newH>8?newH:8;
canvas.qfFrameBuffer=canvas.getContext("2d").createImageData(canvas.width,canvas.height);
canvas.prevWidth=canvas.clientWidth;
canvas.prevHeight=canvas.clientHeight;
}
canvas.qfFrame();
canvas.qfInterval=requestAnimationFrame(raf);
}.bind(this);
canvas.qfStop=function(){
cancelAnimationFrame(canvas.qfInterval);
}.bind(this);
raf();
}
If you're reading this code, there are a few things you should know:
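For one, the sine helper doesn't take radians: it treats its (absolute) argument as an integer index into sinLUT, so one unit corresponds to one table step of 0.01 radians, and the result comes pre-multiplied by 16; that's where the magic constants in qfFrame (the t/16, the *2 factors) come from. A standalone sanity check, with the table rebuilt so the snippet runs on its own:

```javascript
// Rebuild the lookup table and helper exactly as in the listing above.
var sinLUT=[];
for(var i=0;i<2*Math.PI;i+=0.01) sinLUT[sinLUT.length]=Math.sin(i)*16;
function sine(i){
  return sinLUT[(~~(i>=0?i:-i)%sinLUT.length)];
}
// The argument is a table index, not radians: index 157 ~ 1.57 rad ~ pi/2,
// and the output is pre-scaled by 16, so sine(157) is very close to 16.
console.log(sine(0));                     // 0
console.log(Math.abs(sine(157)-16)<0.01); // true
console.log(sine(-157)===sine(157));      // true: the helper uses |i|
```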
Now that the code is out of the way, let's draw this on a Canvas.
<!DOCTYPE html>
<html>
<head>
<title>Lava!</title>
<script type="text/javascript" src="lava.js"></script>
</head>
<body>
<canvas id="demo" class="block"></canvas>
<script type="text/javascript">
quakeFluid("lava.png",document.getElementById("demo"),0.6,0.5,1,1,1); //start with "lava.png", scale=0.6, resScale=0.5, speed=1, intensity=1, closeness=1
</script>
</body>
</html>
These are the parameters for the quakeFluid function:
The parameters are not constant: they are stored in the canvas element, and you can change them at runtime:
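For example, using the qf* properties and the qfSetResScale setter that quakeFluid attaches to the canvas (in the page, demo would be the canvas element from the example above; here a tiny stub stands in for it so the snippet runs on its own):

```javascript
// In the real page: var demo = document.getElementById("demo");
// Here we stub only the parts we touch, so the snippet is self-contained.
var demo = { qfSetResScale: function(r){ this.resScale = r; } };
demo.qfIntensity = 0.5;   // weaker warp
demo.qfSpeed = 2;         // animation runs twice as fast
demo.qfCloseness = 2;     // fluid appears closer
demo.qfSetResScale(0.25); // render at a quarter of the display resolution
```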