Graphics Programming Virtual Meetup

Discord

Twitter

Tiny Renderer

Lesson 7

Shadow Mapping

Tutorial link:
https://github.com/ssloy/tinyrenderer/wiki/Lesson-7-Shadow-mapping


My Code:
https://github.com/cdgiessen/TinyRenderer

Shadow Mapping

A technique which gives a light 'occlusion'

 

 

This is so lights don't illuminate 'hidden' surfaces, e.g. surfaces that are behind others

 

 

The previous lighting calculations ignored that aspect
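The occlusion test that was missing can be sketched in a couple of lines. This is a hypothetical helper, not code from the tutorial; it assumes tinyrenderer's convention that larger z means closer to the viewer (here, the light):

```cpp
// Sketch of the core shadow-map test (hypothetical helper, not from the
// tutorial). With larger depth meaning closer to the light, a fragment is
// visible to the light when its depth in the light's space is at least
// the value already recorded in the shadow buffer along that direction.
bool lit_by_light(float shadow_buffer_depth, float fragment_depth) {
    return shadow_buffer_depth < fragment_depth; // nothing closer occludes it
}
```

The two passes below exist to produce the two numbers this comparison needs.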

Implementation

Done in two passes

Start by generating a 'depth map' for the light source, aka shadow buffer

 

Use a depth-only shader: one that computes depth values from the point of view of the light

struct DepthShader : public IShader {
    mat<3,3,float> varying_tri;

    DepthShader() : varying_tri() {}

    virtual vec4f vertex(int iface, int nthvert) {
        // read the vertex from .obj file
        vec4f gl_Vertex = embed<4>(model->vert(iface, nthvert)); 
        // transform it to screen coordinates
        gl_Vertex = Viewport*Projection*ModelView*gl_Vertex;     
        varying_tri.set_col(nthvert, proj<3>(gl_Vertex/gl_Vertex[3]));
        return gl_Vertex;
    }

    virtual bool fragment(vec3f bar, TGAColor &color) {
        vec3f p = varying_tri*bar;
        color = TGAColor(255, 255, 255)*(p.z/depth);
        return false;
    }
};

Shader Code

TGAImage depth(width, height, TGAImage::RGB);
lookat(light_dir, center, up); // use light_dir, not eye
viewport(width/8, height/8, width*3/4, height*3/4);
projection(0); //no projection required

DepthShader depthshader;
vec4f screen_coords[3];
for (int i=0; i<model->nfaces(); i++) {
    for (int j=0; j<3; j++) {
        screen_coords[j] = depthshader.vertex(i, j);
    }
    triangle(screen_coords, depthshader, depth, shadowbuffer);
}
depth.write_tga_file("depth.tga");

Executing that shader

"Shadow buffer"

Now we 'know' how far the light goes into the scene

Using a single depth buffer works well for 'spot lights' but fails for point lights and directional lights.

 

Point lights typically employ 6 separate depth buffers

They are combined into a 'cube' when sampled from
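The six faces correspond to the six coordinate-axis directions. A minimal sketch (hypothetical helper, not part of tinyrenderer) of the look directions a point light would iterate over, calling the same depth-only pass once per face:

```cpp
#include <array>

// Hypothetical sketch: a point light renders one depth pass per cube face.
// Each pass would point the light down one axis, e.g.
// lookat(light_pos, light_pos + dir, up), before rasterizing the scene.
struct vec3 { float x, y, z; };

std::array<vec3, 6> cube_face_directions() {
    return {{
        {+1, 0, 0}, {-1, 0, 0},  // +X, -X
        {0, +1, 0}, {0, -1, 0},  // +Y, -Y
        {0, 0, +1}, {0, 0, -1},  // +Z, -Z
    }};
}
```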

 

Directional lights, specifically 'global' ones like the sun, often use cascaded shadow maps, where multiple depth buffers of different scales are created to cover the whole scene.
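Where to split the view range into cascades is a tuning choice. A common approach (not from the tutorial; the function and its `lambda` parameter are hypothetical) blends a uniform and a logarithmic partition of [near, far], the so-called practical split scheme:

```cpp
#include <cmath>
#include <vector>

// Hypothetical sketch of cascade split selection for cascaded shadow maps:
// blend a uniform split (even coverage far away) with a logarithmic split
// (more resolution near the camera). lambda = 0 is fully uniform,
// lambda = 1 fully logarithmic.
std::vector<float> cascade_splits(float near_z, float far_z, int count,
                                  float lambda = 0.5f) {
    std::vector<float> splits;
    for (int i = 1; i <= count; i++) {
        float t = float(i) / count;
        float uniform_split = near_z + (far_z - near_z) * t;
        float log_split = near_z * std::pow(far_z / near_z, t);
        splits.push_back(lambda * log_split + (1 - lambda) * uniform_split);
    }
    return splits;
}
```

Each split distance then gets its own shadow buffer, and the fragment shader picks the cascade its depth falls into.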

With a shadow buffer in hand, we can begin the second pass, generating the main output

 

While we have the shadow buffer, its values live in the light's screen space, a different coordinate space from the camera's

 

Thus we need to transform points from the camera's space into the shadow buffer's space

 

Matrix Multiplication to the rescue!

lightM = light_Viewport * light_Projection * light_ModelView;
cameraM = camera_Viewport * camera_Projection * camera_ModelView;
shadowM = lightM * cameraM.invert();


virtual bool fragment(vec3f bar, TGAColor &color) {
    // corresponding point in the shadow buffer
    vec4f sb_p = shadowM * embed<4>(varying_tri*bar);
    sb_p = sb_p/sb_p[3];

Transform the fragment from the camera's basis into the light's basis, then sample

float shadow = 1.0 * (shadowbuffer.get(sb_p[0], sb_p[1])<sb_p[2]);
...

for (int i=0; i<3; i++) 
    color[i] = 
        std::min<float>(20 + c[i]*shadow*(1.2*diff + .6*spec), 255);

We still don't know if a point is 'visible' to the light or not

Solution: it's visible if the shadow buffer depth is less than the depth at that point in the scene

 

Here sb_p is the current point's 'camera depth' transformed into the basis of the shadow map, so the two can be compared

Fill in the rest of the main shader from lesson 6, and voila!

A fix for the artifacts: fudge it till it looks good!

// magic coeff to avoid z-fighting
float shadow = .3+.7*(shadowbuffer[idx]<sb_p[2]+43.34);
