Posted 24 Sep 2014
Shadow Mapping with Android OpenGL ES 2

Simple and PCF shadow mapping algorithms (Bonus Article - Android Wild Card Category)


You can find the source code of this tutorial here:

You can download the application itself (apk) from here:

Shadow mapping is a solution for dynamic shadows. Dynamic shadows are often too computationally expensive, especially on mobile phones, so it is useful to see how they perform in simple cases.

In this tutorial I show basic shadow mapping and PCF (Percentage Closer Filtering) with adjustable shadow map size and bias type, so you can see how they perform on Android. The simple algorithm is much faster, but it has only two outputs for each pixel (shadow / no shadow), so the edges are usually aliased. PCF produces a smooth shadow, as it computes the shadow value as the average of the surrounding samples, but it often turns out to be too slow for real-time shadows.

Simple shadow mapping

PCF shadow mapping

There are many more possible variations of shadow mapping algorithms, so feel free to play around and mix them as you like. Good tutorials, which I also used as sources for this demo application, can be found here:

Credit also goes to Shayan Javed - Getting started with OpenGL ES 2.0 shaders on Android.


In this tutorial I don't cover the basics of OpenGL, OpenGL ES 2.0 or Android development, but you can find all that background in the Learn OpenGL ES tutorials and this OpenGL tutorial about shadow mapping.

Rendering the shadow map

The basic idea of shadow mapping is to first render the scene as if the light source were the camera. To do that we create two View matrices and two Projection matrices: one set for the light source and one for the camera. In the first step we pass the light source's MVP matrix to the shaders.
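
As a sketch in plain Java (with hypothetical names; on Android you would normally use android.opengl.Matrix.multiplyMM for this), composing the light's MVP matrix looks like this:

```java
public class LightMvp {
    // Column-major 4x4 multiply, result = lhs * rhs
    // (mirrors what android.opengl.Matrix.multiplyMM does)
    static float[] mul(float[] lhs, float[] rhs) {
        float[] r = new float[16];
        for (int col = 0; col < 4; col++)
            for (int row = 0; row < 4; row++) {
                float s = 0f;
                for (int k = 0; k < 4; k++)
                    s += lhs[k * 4 + row] * rhs[col * 4 + k];
                r[col * 4 + row] = s;
            }
        return r;
    }

    // lightMVP = lightProjection * lightView * model
    static float[] lightMvp(float[] lightProj, float[] lightView, float[] model) {
        return mul(lightProj, mul(lightView, model));
    }
}
```

In the second pass the same model matrix is combined with the camera's view and projection matrices instead.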

From this step we only need the distance of each object from the light source; this is called the shadow map. To use it later, we store these depth values in a texture. On some Android devices it is not possible to render depth values directly to a texture (GPUs without the OES_depth_texture OpenGL extension), so we have to pack the depth values into RGBA components and unpack them later. To decide which method to use:

// Test OES_depth_texture extension
String extensions = GLES20.glGetString(GLES20.GL_EXTENSIONS);
if (extensions.contains("OES_depth_texture"))
    mHasDepthTextureExtension = true;

With or without OES_depth_texture, the vertex and fragment shaders are also different: one group has the "depth_tex_" prefix and the other doesn't. To make it easy to switch between shader programs I used a separate class to compile, link and store OpenGL program handles (based on this solution).

The shaders used for rendering shadow map:

  • (depth_tex_)v_depth_map.glsl
  • (depth_tex_)f_shadow_map.glsl

To make it clear: if your device has the extension, only the simpler shaders will run, without packing and unpacking, so you can start by checking those shaders, as they are easier to understand.

The only shader here which is not straightforward is the fragment shader used when packing to RGBA is necessary.


// Pixel shader to generate the Depth Map
// Used for shadow mapping - generates depth map from the light's viewpoint
precision highp float;

varying vec4 vPosition;

// from Fabien Sanglard's DEngine
vec4 pack (float depth) {
    const vec4 bitSh = vec4(256.0 * 256.0 * 256.0,
                            256.0 * 256.0,
                            256.0,
                            1.0);
    const vec4 bitMsk = vec4(0.0,
                             1.0 / 256.0,
                             1.0 / 256.0,
                             1.0 / 256.0);
    vec4 comp = fract(depth * bitSh);
    comp -= comp.xxyz * bitMsk;
    return comp;
}

void main() {
    // the depth
    float normalizedDistance = vPosition.z / vPosition.w;
    // scale -1.0;1.0 to 0.0;1.0
    normalizedDistance = (normalizedDistance + 1.0) / 2.0;

    // pack value into 32-bit RGBA texture
    gl_FragColor = pack(normalizedDistance);
}

What happens here is that we encode the depth value (the Z coordinate) into 4 components. You can find the explanation of the math at the source if you want.
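
To see that the packing actually round-trips, here is the same math in plain Java (a hypothetical helper for illustration; the real work happens per fragment on the GPU, which additionally quantizes each channel to 8 bits):

```java
public class DepthPack {
    // Mirrors the GLSL pack(): split a [0,1) depth into four channel values
    static double[] pack(double depth) {
        double[] bitSh = {256.0 * 256.0 * 256.0, 256.0 * 256.0, 256.0, 1.0};
        double[] c = new double[4];
        for (int i = 0; i < 4; i++) {
            double v = depth * bitSh[i];
            c[i] = v - Math.floor(v); // fract()
        }
        // comp -= comp.xxyz * bitMsk (uses the original, unmodified components)
        c[3] -= c[2] / 256.0;
        c[2] -= c[1] / 256.0;
        c[1] -= c[0] / 256.0;
        return c;
    }

    // Mirrors the usual unpack: dot(rgba, vec4(1/256^3, 1/256^2, 1/256, 1))
    static double unpack(double[] c) {
        return c[0] / (256.0 * 256.0 * 256.0)
             + c[1] / (256.0 * 256.0)
             + c[2] / 256.0
             + c[3];
    }
}
```

The subtraction of the shifted neighbor channel is what makes the weighted sum in unpack() collapse back to the original depth.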

Rendering the scene

After we have the depth map we can use that information to decide whether a pixel is in shadow or not. To decide that, we compute for each fragment:

  • What its coordinate is from the light's point of view (we need the light MVP matrix for that, passed to the shader as a uniform)
    vShadowCoord = uShadowProjMatrix * aPosition;
  • What the depth value on the depth map is that belongs to this point
    vec4 shadowMapPosition = vShadowCoord / vShadowCoord.w;
    float distanceFromLight = texture2D(uShadowTexture, shadowMapPosition.st).z;
  • Is the fragment farther from the light than this depth value? If yes, the fragment is in shadow.
    //1.0 = not in shadow (fragment is closer to light than the value stored in shadow map)
    //0.0 = in shadow
    return float(distanceFromLight > shadowMapPosition.z);
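
The steps above can be sketched in plain Java (hypothetical names; the depthMap array stands in for the shadow map texture, and nearest-texel indexing replaces the sampler):

```java
public class ShadowLookup {
    // Mirrors the fragment-shader steps: perspective divide, depth-map lookup, compare.
    // shadowCoord = lightMVP * position, given as {x, y, z, w}, with x/w and y/w
    // assumed to already be texture coordinates in [0,1].
    static float shadowSimple(float[] shadowCoord, float[][] depthMap) {
        // perspective divide: vShadowCoord / vShadowCoord.w
        float x = shadowCoord[0] / shadowCoord[3];
        float y = shadowCoord[1] / shadowCoord[3];
        float z = shadowCoord[2] / shadowCoord[3];
        // nearest-texel lookup, stands in for texture2D(uShadowTexture, ...)
        int u = Math.min(depthMap[0].length - 1, (int) (x * depthMap[0].length));
        int v = Math.min(depthMap.length - 1, (int) (y * depthMap.length));
        float distanceFromLight = depthMap[v][u];
        // 1.0 = not in shadow, 0.0 = in shadow
        return distanceFromLight > z ? 1.0f : 0.0f;
    }
}
```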

Shadows with different setup

In the demo application you can change the shadow type and bias type of the shadow algorithm in the options menu. I could have put all the algorithms in one shader, passed in uniforms and decided with if conditions which algorithm to use. The problem with this approach is that, because of the parallel execution model of GPUs, both branches of a condition may be evaluated, leading to poor performance that would make a speed comparison impossible. Another solution would be to use #ifdef and compile the shader with different #define statements.
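
The #define approach can be as simple as prepending lines to the shader source string before handing it to GLES20.glShaderSource (a sketch with hypothetical names; note that if the shader starts with a #version directive, the defines must go after it):

```java
public class ShaderVariants {
    // Build one compilable source per variant from a single shader file
    static String withDefines(String source, String... defines) {
        StringBuilder sb = new StringBuilder();
        for (String d : defines)
            sb.append("#define ").append(d).append('\n');
        return sb.append(source).toString();
    }
}
```

Compiling withDefines(src, "USE_PCF") and withDefines(src) then yields two programs from the same file, and switching between them avoids runtime branching in the shader.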

Constant / Dynamic bias

A common solution to remove shadow acne is to add a small error margin to the depth value before comparing it to the fragment's distance from the light source.

//add bias to reduce shadow acne (error margin)
float bias = 0.005;

//1.0 = not in shadow (fragment is closer to light than the value stored in shadow map)
//0.0 = in shadow
return float(distanceFromLight + bias > shadowMapPosition.z);

After adding a constant bias, the shadow acne disappears, but another problem shows up, called Peter Panning: objects on the ground appear to float.

You may notice that shadow acne appears more often on surfaces that the light hits at a shallow angle. This leads to another solution, where the bias is adjusted according to the normal vector of the surface.

//Calculate variable bias
float calcBias() {
    float bias;
    vec3 n = normalize( vNormal );
    // Direction of the light (from the fragment to the light)
    vec3 l = normalize( uLightPos );
    // Cosine of the angle between the normal and the light direction,
    // clamped above 0
    //  - light is at the vertical of the triangle -> 1
    //  - light is perpendicular to the triangle -> 0
    //  - light is behind the triangle -> 0
    float cosTheta = clamp( dot( n, l ), 0.0, 1.0 );
    bias = 0.0001 * tan( acos( cosTheta ) );
    bias = clamp( bias, 0.0, 0.01 );
    return bias;
}
Image 3

No Bias / Constant Bias (0.005) / Dynamic Bias
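
The bias formula can be sanity-checked in plain Java (a hypothetical mirror of calcBias): a surface facing the light gets almost no bias, while a grazing angle drives it up to the 0.01 clamp.

```java
public class DynamicBias {
    // Java version of the calcBias() shader function: the bias grows with the
    // angle between the surface normal and the light direction, clamped to [0, 0.01]
    static float calcBias(float cosTheta) {
        cosTheta = Math.max(0.0f, Math.min(1.0f, cosTheta));
        float bias = (float) (0.0001 * Math.tan(Math.acos(cosTheta)));
        return Math.max(0.0f, Math.min(0.01f, bias));
    }
}
```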

Shadow map sizes

You can change shadow map size in the menu:

  • 0.5 displayWidth x 0.5 displayHeight
  • 1.0 displayWidth x 1.0 displayHeight
  • 1.5 displayWidth x 1.5 displayHeight
  • 2.0 displayWidth x 2.0 displayHeight

A bigger shadow map texture results in better shadow edges, but beyond a certain point it doesn't lead to significantly better results, so it isn't worth making it much bigger than the resolution of the screen (especially since it makes the algorithm slower).

Simple shadow mapping / PCF shadow mapping

The PCF algorithm is based on sampling the depth map multiple times around the position of the current fragment. With a 4x4 window, the shadow value is the average of 16 binary tests, so it can take 17 different levels (0/16 through 16/16). This results in a soft shadow and less aliased edges. The problem with this approach is that we do 16 times more lookups into the depth map and 16 times more comparisons, which you can also see in the drop in FPS.
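
A minimal sketch of the idea in plain Java (hypothetical names; the depthMap array stands in for the shadow map, and clamping to the array bounds replaces the sampler's edge handling):

```java
public class Pcf {
    // Percentage Closer Filtering over a w x w window of the depth map:
    // average the binary shadow tests of the surrounding texels.
    static float pcfShadow(float[][] depthMap, int u, int v, float fragmentDepth, int w) {
        float sum = 0f;
        int count = 0;
        for (int dv = -w / 2; dv <= (w - 1) / 2; dv++)
            for (int du = -w / 2; du <= (w - 1) / 2; du++) {
                int uu = Math.max(0, Math.min(depthMap[0].length - 1, u + du));
                int vv = Math.max(0, Math.min(depthMap.length - 1, v + dv));
                // same test as the simple algorithm, per sample
                sum += depthMap[vv][uu] > fragmentDepth ? 1.0f : 0.0f;
                count++;
            }
        return sum / count; // fraction of lit samples, 0.0 .. 1.0
    }
}
```

On a shadow edge the result falls between 0 and 1, which is exactly the soft transition the hard comparison cannot produce.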

Simple shadow mapping / PCF shadow mapping

Cover shadow acne with diffuse lighting

Many articles describe how to solve shadow acne by adding bias. I also used a solution based on the diffuse lighting component: if the fragment is not facing the light (it is back-facing from the light source's point of view), I simply skip the shadow calculation in the fragment shader:

// Shadow
float shadow = 1.0;

// If the fragment doesn't face the light source, skip the shadow lookup
if (diffuseComponent < 0.01) {
    shadow = 1.0;
}
//if the fragment is inside the light's view frustum
else if (vShadowCoord.w > 0.0) {

    shadow = shadowSimple();

    //scale 0.0-1.0 to 0.2-1.0
    //otherwise everything in shadow would be black
    //shadow = (shadow * 0.8) + 0.2;
}

// Final output color with shadow and lighting
gl_FragColor = (vColor * (diffuseComponent + ambientComponent * shadow));

You can see the result here, using no bias:

Image 6

Thank you for reading!

Please send your feedback or comments.


This article, along with any associated source code and files, is licensed under The MIT License

Written By
Hungary

Comments and Discussions

General: Lukas - lukas2017, 8-Apr-16 17:54
Question: Excellent article! - KalothIV, 17-Jun-15 4:43
Question: Could not emulate non-depth texture circumstance and in practice the Alpha channel gets lost - jiangcaiyang, 6-Apr-15 19:25
General: Thanks for entering! - Kevin Priddle, 25-Sep-14 4:45
General: My vote of 5 - Lisa Shiphrah, 25-Sep-14 4:28
