Dr. Sergey Kosov
University Lecturer and Entrepreneur
primary ray
shadow ray
transmitted ray
reflected ray
Soft Shadows
Glossy surfaces: blurry reflection, rough specular
Translucency
Instead of point sampling the color of a pixel with a ray, we cast multiple rays from eye (primary rays) through different parts of one pixel and average down the results
For example, cast \(n\times n\) sub-pixel rays and average the results together:
\[c_{pixel}=\frac{1}{n^2}\sum^{n-1}_{i=0}\sum^{n-1}_{j=0}c_{subpixel(i,j)}\]
Samples taken at non-uniformly spaced random offsets
Replaces low-frequency aliasing pattern by noise, which is less objectionable to humans
uniform sampling
stochastic sampling
However, with random sampling, we could get unlucky, e.g., all samples in one corner
To prevent clustering of the random samples, divide \([0, 1)^2\) domain (pixel) into non-overlapping regions (sub-pixels) called strata
Take one random sample per stratum
Jittered sampling is stratified sampling with per-stratum sample taken at an offset from the center of each stratum:
uniform sampling
stratified sampling
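A minimal sketch of jittered sampling, assuming a simple `Vec2` type (not the lecture's codebase):

```cpp
#include <random>
#include <vector>

struct Vec2 { float x, y; };

// Jittered (stratified) sampling: split [0,1)^2 into n x n strata and
// draw one uniform random sample inside each stratum, preventing the
// clustering that plain random sampling allows.
std::vector<Vec2> jitteredSamples(int n, std::mt19937& rng) {
    std::uniform_real_distribution<float> u(0.0f, 1.0f);
    std::vector<Vec2> samples;
    samples.reserve(size_t(n) * n);
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            samples.push_back({ (i + u(rng)) / n, (j + u(rng)) / n });
    return samples;
}
```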
Aliasing happens in time as well as in space
Sample objects temporally
The result is still-frame motion blur and smooth animation
Darkness caused when part or all of the illumination from a light source is blocked by an occluder (shadow caster)
Point light sources give unrealistic hard shadows
Light sources that extend over an area (area light sources) cast soft-edged shadows
Cast multiple shadow rays from surface, distributed across the surface of the light: each ray to a different point on the light source
At each point, sum the contributions of shadow rays from that point to find the strength of shadow:
\[\frac{hits\cdot 100\%}{rays} = illuminated\%\]
20% illuminated
Vec3f CShaderPhong::shade(const Ray& ray) {
    ...
    Ray shadow(ray.hitPoint());                            // shadow ray
    for (auto& pLight : m_scene.getLights()) {             // for each light source
        Vec3f L = Vec3f::all(0);                           // incoming radiance
        // get direction to light, and intensity
        Vec3f radiance = pLight->illuminate(shadow);       // shadow.dir is updated
        if (radiance && !m_scene.if_intersect(shadow)) {   // if not occluded
            // ------ diffuse ------
            if (m_kd > 0) {
                float cosLightNormal = shadow.dir.dot(n);
                if (cosLightNormal > 0)
                    L += m_kd * cosLightNormal * color.mul(radiance);
            }
            // ------ specular ------
            if (m_ks > 0) {
                float cosLightReflect = shadow.dir.dot(reflected.dir);
                if (cosLightReflect > 0)
                    L += m_ks * powf(cosLightReflect, m_ke) * specularColor.mul(radiance);
            }
        }
        res += L;
    }
    ...
}
Vec3f CLightOmni::illuminate(Ray& ray) {
    // ray towards point light position
    ray.dir = m_org - ray.org;
    ray.t = norm(ray.dir);
    ray.dir = normalize(ray.dir);
    ray.hit = nullptr;
    double attenuation = 1 / (ray.t * ray.t);
    return attenuation * m_intensity;
}
A point light source may be sampled with only one sample
Vec3f CShaderPhong::shade(const Ray& ray) {
    ...
    Ray shadow(ray.hitPoint());                                // shadow ray
    for (auto& pLight : m_scene.getLights()) {                 // for each light source
        Vec3f L = Vec3f::all(0);                               // incoming radiance
        size_t nSamples = pLight->getNumSamples();             // number of samples
        for (size_t s = 0; s < nSamples; s++) {
            // get direction to light, and intensity
            Vec3f radiance = pLight->illuminate(shadow);       // shadow.dir is updated
            if (radiance && !m_scene.if_intersect(shadow)) {   // if not occluded
                // ------ diffuse ------
                if (m_kd > 0) {
                    float cosLightNormal = shadow.dir.dot(n);
                    if (cosLightNormal > 0)
                        L += m_kd * cosLightNormal * color.mul(radiance);
                }
                // ------ specular ------
                if (m_ks > 0) {
                    float cosLightReflect = shadow.dir.dot(reflected.dir);
                    if (cosLightReflect > 0)
                        L += m_ks * powf(cosLightReflect, m_ke) * specularColor.mul(radiance);
                }
            }
        }
        res += L / nSamples;                                   // average the resulting radiance
    }
    ...
}
A point light source may be sampled with only one sample
Whereas area light sources need multiple samples
Vec3f CLightArea::illuminate(Ray& ray) {
    Vec2f sample = m_pSampler->getNextSample();
    Vec3f org = m_org + sample.val[0] * m_edge1 + sample.val[1] * m_edge2;
    setOrigin(org);
    Vec3f res = CLightOmni::illuminate(ray);
    double cosN = -ray.dir.dot(m_normal) / ray.t;
    if (cosN > 0) return m_area * cosN * res.value();
    else return Vec3f::all(0);
}
Area light sampling is another way to describe the standard rendering equation:
$$ L_o(p,\omega_o) = L_e(p,\omega_o) + \int_{H^2(\vec{n})} f_r(p,\omega_o,\omega_i)L_i(p,\omega_i)\cos\theta_i d\omega_i$$
by integrating not over the hemisphere, but over the area of the light source:
$$ L_o(p,\omega_o) = L_e(p,\omega_o) + \int_{t\in S} f_r(p,\omega_o,\omega_i)\,L_o(t,-\omega_i)\,\frac{\cos{\theta_i}\cos{\theta_o}}{\|p-t\|^2}\, dA_t $$
integration over a hemisphere
integration over the light source
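In practice this area integral is evaluated with a Monte Carlo estimator: draw \(N\) points \(t_k\) uniformly over the light's surface \(S\) and average the integrand (a standard estimator, stated here as a sketch, with \(\omega_k\) the direction from \(p\) to \(t_k\) and \(|S|\) the light's area):

$$ L_o(p,\omega_o) \approx L_e(p,\omega_o) + \frac{|S|}{N}\sum_{k=1}^{N} f_r(p,\omega_o,\omega_k)\,L_o(t_k,-\omega_k)\,\frac{\cos{\theta_i}\cos{\theta_o}}{\|p-t_k\|^2} $$

This is exactly what casting multiple shadow rays to random points on the light computes.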
Uniform distribution gives rise to sharp transitions / patterns inside penumbra
Primary rays: 4 / pixel
Shadow rays: 1 / light (centered)
Overall: 4 rays / pixel
Primary rays: 4 / pixel
Shadow rays: 16 / light (uniform)
Overall: 64 rays / pixel
Area light represented as a rectangle in 3D, each ray-object intersection samples the area-light at random:
\[\vec{r} = \vec{c} + \xi_1\vec{a} + \xi_2\vec{b},\]
where \(\xi_1\) and \(\xi_2\) are random variables
Stratified sampling of the area light with samples spaced uniformly plus a small perturbation: \[\left\{\vec{r}(i, j),0\leq i,j\leq n-1\right\}\]
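The stratified light-point formula above can be sketched as follows; `Vec3` and its operators are minimal stand-ins, not the lecture's `Vec3f`:

```cpp
#include <random>

struct Vec3 { float x, y, z; };

Vec3 operator+(Vec3 a, Vec3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
Vec3 operator*(float s, Vec3 v) { return { s * v.x, s * v.y, s * v.z }; }

// r(i, j) = c + xi1*a + xi2*b, with xi1, xi2 jittered inside stratum (i, j)
// of an n x n grid over the rectangular light spanned by edges a and b.
Vec3 sampleAreaLight(Vec3 c, Vec3 a, Vec3 b,
                     int i, int j, int n, std::mt19937& rng) {
    std::uniform_real_distribution<float> u(0.0f, 1.0f);
    float xi1 = (i + u(rng)) / n;
    float xi2 = (j + u(rng)) / n;
    return c + xi1 * a + xi2 * b;
}
```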
As with stochastic super-sampling for anti-aliasing, light sampling rate must be high, otherwise high-frequency noise becomes visible
Primary rays: 4 / pixel
Shadow rays: 1 / light
Overall: 4 rays / pixel
Primary rays: 4 / pixel
Shadow rays: 16 / light
Overall: 64 rays / pixel
The total number of samples is the same in both cases, but because the version with 16 shadow samples per image sample is able to use stratified sampling, all of the shadow samples in a pixel’s area are well distributed, while in the first image the implementation has no way to prevent them from being poorly distributed.
Primary rays: 16 / pixel
Shadow rays: 1 / light
Overall: 16 rays / pixel
Primary rays: 1 / pixel
Shadow rays: 16 / light
Overall: 16 rays / pixel
4096 random samples
4096 uniform samples
4096 stratified samples
Disc:
Sphere:
To sample direction vectors from a cosine-weighted distribution, uniformly sample points on the unit disk and project them up to the unit hemisphere.
Vec3f CSampler::cosineSampleHemisphere(const Vec2f& sample) {
    Vec2f s = concentricSampleDisk(sample);
    float z = sqrtf(max(0.0f, 1.0f - s[0] * s[0] - s[1] * s[1]));
    return Vec3f(s[0], s[1], z);
}
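`concentricSampleDisk` is used above but not shown; a common implementation is the Shirley–Chiu concentric mapping, sketched here with a minimal `Vec2` rather than the lecture's `Vec2f`:

```cpp
#include <cmath>

struct Vec2 { float x, y; };

// Shirley-Chiu concentric mapping from [0,1)^2 onto the unit disk:
// it distorts areas less than the naive polar mapping and so
// preserves the stratification of the input samples.
Vec2 concentricSampleDisk(Vec2 sample) {
    const float kPi = 3.14159265358979f;
    float ox = 2.0f * sample.x - 1.0f;            // map to [-1,1]^2
    float oy = 2.0f * sample.y - 1.0f;
    if (ox == 0.0f && oy == 0.0f) return { 0.0f, 0.0f };  // degenerate center
    float r, theta;
    if (std::fabs(ox) > std::fabs(oy)) {          // wedges around the x axis
        r = ox;
        theta = (kPi / 4.0f) * (oy / ox);
    } else {                                      // wedges around the y axis
        r = oy;
        theta = (kPi / 2.0f) - (kPi / 4.0f) * (ox / oy);
    }
    return { r * std::cos(theta), r * std::sin(theta) };
}
```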
optical axis
Focus Plane
sensor
DoF
The Circle of Confusion (CoC) is the largest blur spot on the image captured by the camera sensor that is still perceived as a point in the final image.
Object is considered in focus if on an 8×10 print viewed at a distance of 10”, diameter of CoC ≤ 0.01” (1930’s standard!)
CoC
Real cameras have lenses with finite apertures and focal lengths
The range of distances that appear in focus is the depth of field
Depth of field can be simulated by distributing primary rays through different parts of a lens assembly
optical axis
focus plane
image plane
CoC
lens
Or simply select eye positions randomly from a square region
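A hedged sketch of that idea, using a square "lens" in the x-y plane; all names here are illustrative, not the lecture's API:

```cpp
#include <cmath>
#include <random>

struct V3 { float x, y, z; };

V3 normalize(V3 v) {
    float n = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / n, v.y / n, v.z / n };
}

struct PrimaryRay { V3 org, dir; };

// Jitter the eye inside a square of side 'aperture' (in the x-y plane) and
// re-aim at a fixed point on the focus plane: points at the focal distance
// stay sharp across samples, everything else is blurred.
PrimaryRay dofPrimaryRay(V3 eye, V3 focusPoint, float aperture, std::mt19937& rng) {
    std::uniform_real_distribution<float> u(-0.5f, 0.5f);
    V3 org = { eye.x + aperture * u(rng), eye.y + aperture * u(rng), eye.z };
    V3 d = { focusPoint.x - org.x, focusPoint.y - org.y, focusPoint.z - org.z };
    return { org, normalize(d) };
}
```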
Ray tracing simulates perfect specular reflection, true only for perfect mirrors and chrome surfaces
Most surfaces are imperfect specular reflectors:
For each ray-object intersection
perfect mirror
polished surface
Instead of mirror images:
Nearby objects reflect clearly because the ray distribution is still narrow
Farther objects reflect more blurrily because the distribution has spread
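A minimal sketch of this (names illustrative): perturb the mirror direction by a random offset scaled with a roughness parameter.

```cpp
#include <cmath>
#include <random>

struct V3 { float x, y, z; };

V3 normalize(V3 v) {
    float n = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / n, v.y / n, v.z / n };
}

// Perturb the perfect mirror direction by a random offset scaled by
// 'roughness': 0 reproduces a perfect mirror; larger values widen the
// ray distribution and blur the reflection.
V3 glossyReflectDir(V3 mirrorDir, float roughness, std::mt19937& rng) {
    std::uniform_real_distribution<float> u(-1.0f, 1.0f);
    return normalize({ mirrorDir.x + roughness * u(rng),
                       mirrorDir.y + roughness * u(rng),
                       mirrorDir.z + roughness * u(rng) });
}
```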
Scene rendered with a relatively ineffective sampler (left) and a carefully designed sampler (right), using the same number of samples for each. The improvement in image quality, ranging from the edges of the highlights to the quality of the glossy reflections, is noticeable.
Primary rays: 4 / pixel
Shadow rays: 16 / light
Reflection rays: 16 / point (random)
Primary rays: 4 / pixel
Shadow rays: 16 / light
Reflection rays: 16 / point (stratified / cosine sampling)
Similar, but for refraction
Transparency
Translucency