Water Ripples Algorithm (GLSL shader implementation)

While working on EfectoMariposa, which needs to process several layers of information to simulate an ecosystem in real time, I spent some time researching common generative algorithms. Because everything has to run in real time, the best option (and maybe the only option) is to work on the GPU, which is specifically designed for super fast parallel processing and is commonly used in 3D video games. By programming GLSL shaders it’s possible to replace the default graphics pipeline with one that treats each pixel in its own way. The power of this type of program is that each pixel is processed by its own dedicated parallel processor. This gives you super-processing-power at a freaking-fast velocity.

Luckily, openFrameworks makes this process of altering the pipeline really easy. GLSL shaders on OpenGL are a very big subject; if you are not familiar with them, take a look at this excellent article that explains and explores them using Processing.

In this article I want to focus on one simple, specific effect: 2D water ripples. It’s a good example of how GLSL shaders’ parallel programming works differently from the regular sequential CPU paradigm.

There is a good tutorial written by Hugo Elias about how to implement this on the CPU using pseudo-code. It’s basically three simple algorithms working together:

  • A – propagate the information of each pixel along its neighbors
  • B – diffuse or smooth this information progressively
  • C – apply or map this dissipation to a texture

The main difference between CPU and GPU programming is that on the GPU we don’t need to tell the processor to loop over the complete array of pixels; in fact, that’s exactly what the fragment shader does for us. A fragment shader is basically a small C-like program, with some cool native built-in functions, that runs once for each pixel.

So to make a GLSL implementation of Hugo’s tutorial we don’t need the loops he mentions; we just have to adapt what is inside those loops into GLSL fragment shaders.

The GPU pipeline has one big limitation: because of its parallel processing design, you cannot write information to the same texture that you are reading from. That’s why we are going to use a common technique called ping-pong. It consists of using two textures (buffers) and passing the information from one to the other, back and forth, every frame (we will see a host-side sketch of this wiring after the shader below).

Every time we pass the information from one buffer to the other, we apply the following algorithm to each pixel of the texture:

A & B – Dissipation: propagation + diffusion

Taking a closer look, Hugo’s tutorial says:

To explain how and why this works, imagine a wave traveling across a 1-Dimensional surface (wave 0). This wave is traveling to the left. The small vertical arrows indicate the rate at which the water level changes with time. The fainter waves show the wave’s positions on previous frames. So how do we achieve the correct changes in height for each part of the wave (vertical arrows)?
You may notice that the height of the wave two frames older (wave 2), is proportional to the size of the arrows. So as long as the previous two frames are remembered, it is easy to work out the change in height of every part of the wave.
So, take a look at the code again. When the loop starts, Buffer1 contains the state of the water from the previous frame (wave 1), and Buffer2 has the state before that (wave 2). Buffer2 therefore has information about the vertical velocity of the wave.

  Velocity(x, y) = -Buffer2(x, y)

It is also important for the waves to spread out, so the buffers are smoothed every frame.

  Smoothed(x,y) = (Buffer1(x-1, y) +
                   Buffer1(x+1, y) +
                   Buffer1(x, y-1) +
                   Buffer1(x, y+1)) / 4

Now, to combine the two to calculate the new height of the water. The multiplication by two reduces the effect of the velocity.

  NewHeight(x,y) = Smoothed(x,y)*2 + Velocity(x,y)

Finally, the ripples must lose energy, so they are damped:

  NewHeight(x,y) = NewHeight(x,y) * damping

Translated into a GLSL shader, it looks like this:


uniform sampler2DRect prevBuffer; // previous buffer ( the frame before last )
uniform sampler2DRect actBuffer;  // current buffer ( the last frame )

uniform float damping;            // damping value between 0.0 - 1.0

vec2 offset[4];                   // offsets to the four neighboring pixels
void main(){
   vec2 st = gl_TexCoord[0].st;   // position of the pixel we are working on

   // Set the neighbor offsets: left, right, up, down
   //
   offset[0] = vec2(-1.0, 0.0);
   offset[1] = vec2(1.0, 0.0);
   offset[2] = vec2(0.0, 1.0);
   offset[3] = vec2(0.0, -1.0);

   // "sum" is going to accumulate the values of the neighboring pixels
   //
   vec3 sum = vec3(0.0);
   for (int i = 0; i < 4 ; i++){
      sum += texture2DRect(actBuffer, st + offset[i]).rgb;
   }

   // average the four neighbors: this smooths the wave and spreads it out
   //
   sum = sum / 4.0;

   // combine the smoothed value with the velocity:
   // NewHeight = Smoothed * 2 - prevBuffer ( the velocity is -prevBuffer )
   //
   sum = sum*2.0 - texture2DRect(prevBuffer, st).rgb;

   // damp the ripples so they progressively lose energy
   //
   sum = sum * damping;

   // write the result to the target texture ( buffer )
   //
   gl_FragColor = vec4(sum, 1.0);
}
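To drive this shader from the sketch side, here is a minimal ping-pong wiring in plain openFrameworks; this is just a sketch of the idea, not the ofxFX addon’s actual code. All names here (ripplesShader, buffers, newest, the shader file names, the damping value) are illustrative assumptions, and I use three buffers rotated in a ring instead of two, so that prevBuffer, actBuffer and the render target are always three different textures and we never read from the texture we are writing to. It assumes the default GL2 renderer, to match the legacy GLSL above.


// ofApp.h ( relevant members; all names are illustrative )
ofShader ripplesShader;   // the dissipation shader above
ofFbo    buffers[3];      // ring of height buffers
int      newest = 0;      // buffers[newest] holds the last frame we rendered

// ofApp.cpp
void ofApp::setup(){
   ripplesShader.load("ripples"); // ripples.vert / ripples.frag ( assumed names )

   for (int i = 0; i < 3; i++){
      // float texture, so the heights can oscillate and go negative
      buffers[i].allocate(ofGetWidth(), ofGetHeight(), GL_RGB32F);
      buffers[i].begin();
      ofClear(0, 0, 0, 255);
      buffers[i].end();
   }
}

void ofApp::update(){
   int prev   = (newest + 2) % 3; // the frame before last
   int target = (newest + 1) % 3; // the oldest buffer: safe to overwrite

   buffers[target].begin();
   ripplesShader.begin();
   ripplesShader.setUniformTexture("prevBuffer", buffers[prev].getTexture(), 1);
   ripplesShader.setUniformTexture("actBuffer", buffers[newest].getTexture(), 2);
   ripplesShader.setUniform1f("damping", 0.995);
   buffers[newest].draw(0, 0); // drawing the quad runs the shader on every pixel
   ripplesShader.end();
   buffers[target].end();

   newest = target; // rotate the ring
}

void ofApp::mouseDragged(int x, int y, int button){
   // inject a "drop" by drawing a bright dot into the newest buffer
   buffers[newest].begin();
   ofSetColor(255);
   ofDrawCircle(x, y, 3);
   buffers[newest].end();
}

The ofxWater implementation linked at the end of the article does the same dance with its own buffer class, so treat this only as one safe variant.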

C – Map the dissipation to an image

Well, the dissipation of the wave is just half of the problem. Now we need to apply the displacement of the water waves to a texture.

For that we are going to use another well-known technique called displacement mapping. There are a lot of ways to implement it, especially for 3D spaces (take a look at this article in GPU Gems).

But for this case we are going to use a very simple one: by looking at each pixel’s neighbors it estimates the local slope of the water surface, and then displaces the background lookup in that direction.


uniform sampler2DRect tex0; // background image
uniform sampler2DRect tex1; // displacement ( the ripple heights )

void main(){
   vec2 st = gl_TexCoord[0].st;

   // estimate the local slope of the water surface by taking the
   // difference between opposite neighbors ( the gradient of the heights )
   //
   float offsetX = texture2DRect(tex1, st + vec2(-1.0, 0.0)).r - texture2DRect(tex1, st + vec2(1.0, 0.0)).r;
   float offsetY = texture2DRect(tex1, st + vec2(0.0, -1.0)).r - texture2DRect(tex1, st + vec2(0.0, 1.0)).r;

   // reuse the horizontal slope as a simple fake-lighting term
   //
   float shading = offsetX;

   // fetch the background displaced in the direction of the slope
   //
   vec3 pixel = texture2DRect(tex0, st + vec2(offsetX, offsetY)).rgb;

   // brighten or darken the pixel to fake the light on the wave
   //
   pixel.r += shading;
   pixel.g += shading;
   pixel.b += shading;

   gl_FragColor.rgb = pixel;
   gl_FragColor.a = 1.0;
}
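And here is a sketch of the final draw pass, continuing with the assumed names from the update() sketch above: displaceShader is assumed to load the fragment shader we just saw, and background is an ofImage loaded in setup(). Both names are illustrative.


void ofApp::draw(){
   displaceShader.begin();
   displaceShader.setUniformTexture("tex0", background.getTexture(), 1);      // background image
   displaceShader.setUniformTexture("tex1", buffers[newest].getTexture(), 2); // ripple heights
   background.draw(0, 0); // run the displacement shader over the whole image
   displaceShader.end();
}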

Well, that’s it. If you are interested in using this in your openFrameworks project, you can get the ofxFX addon at: https://github.com/patriciogonzalezvivo/ofxFX

If you just want to see the implementation itself, take a look at: https://github.com/patriciogonzalezvivo/ofxFX/blob/master/src/interactive/ofxWater.h

Enjoy this information and share it with others.
Here is a video of the final result:

For a CPU implementation check this code: https://github.com/patriciogonzalezvivo/patriciogv_algo2012/tree/master/week5-waterRipple
