Image Processing

Introduction

Image processing is the analysis and manipulation of digitized images

It can be used to improve the quality of an image, as with Instagram filters (arguably) or Adobe Lightroom

or even to detect features and identify entire objects within an image

It is the foundation for computer vision (CV) techniques, which allow computers to identify different objects

a monumental task when you consider that all a computer really sees in an image is a grid of numbers

Modern CV techniques draw inspiration from how our own mind classifies objects, identifying edges/textures from numbers

then abstracting those edges/textures into features that comprise known objects

The classification of sets of features into objects is based on machine learning

but having a reliable feature-set relies on quality image processing

Color Spaces

Grayscale

Image processing starts with selecting an appropriate color-space

The easiest color-space to understand is the monochromatic one

Each pixel in this color-space consists of a single value ranging from 0 to 255

(or 0 to 1 for float representation)

Pixels with higher values have higher intensity and are closer to white

and pixels with lower values have lower intensity and are closer to black

 

# Say we're given a 4x4 grayscale image
# If it were completely black, what would it look like?

    0    0    0    0
    0    0    0    0
    0    0    0    0
    0    0    0    0


# How about a completely white one?

    255  255  255  255
    255  255  255  255
    255  255  255  255
    255  255  255  255

# Say we're given a 4x4 grayscale image
# How about one with left half black and right half white? 

    0    0    255  255
    0    0    255  255
    0    0    255  255
    0    0    255  255

# And one with top half black and bottom half white?

    0    0    0    0
    0    0    0    0
    255  255  255  255
    255  255  255  255

# Say we're given a 4x4 grayscale image
# What about one that evenly went from black to white going left to right?

    0    85   170  255
    0    85   170  255
    0    85   170  255
    0    85   170  255    # answer may vary!

# And one that was a single shade of some mid-tier gray?

    120  120  120  120
    120  120  120  120
    120  120  120  120
    120  120  120  120
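
In code, these grids are just 2D arrays; a minimal NumPy sketch (assuming NumPy is available) that builds the all-black image and the left-to-right gradient above might look like:

import numpy as np

# A completely black 4x4 grayscale image: every pixel is 0
black = np.zeros((4, 4), dtype=np.uint8)

# A left-to-right gradient from black (0) to white (255)
gradient = np.tile(np.array([0, 85, 170, 255], dtype=np.uint8), (4, 1))

print(black)
print(gradient)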

RGB

The multi-layer color-space most people are familiar with is the Red, Green, Blue (RGB) color-space

A pixel in an RGB image is represented much like a grayscale one

However, an RGB pixel will have three values, each one representing the intensity of one of the three colors

Let's view some examples

 

# Let's view the purely black image from our previous slide
# except this time represented in the RGB colorspace

# Red Layer
    0    0    0    0
    0    0    0    0
    0    0    0    0
    0    0    0    0

# Green Layer
    0    0    0    0
    0    0    0    0
    0    0    0    0
    0    0    0    0

# Blue Layer
    0    0    0    0
    0    0    0    0
    0    0    0    0
    0    0    0    0

# Pretty straightforward right?

# The white one's fairly intuitive as well

# Red Layer
    255  255  255  255
    255  255  255  255
    255  255  255  255
    255  255  255  255

# Green Layer
    255  255  255  255
    255  255  255  255
    255  255  255  255
    255  255  255  255

# Blue Layer
    255  255  255  255
    255  255  255  255
    255  255  255  255
    255  255  255  255

# A maxed out value in each of the layers would result in 
# the brightest possible pixel

# How about a purely red image? Bet you could guess this one

# Red Layer
    255  255  255  255
    255  255  255  255
    255  255  255  255
    255  255  255  255

# Green Layer    
    0    0    0    0 
    0    0    0    0
    0    0    0    0
    0    0    0    0

# Blue Layer
    0    0    0    0 
    0    0    0    0
    0    0    0    0
    0    0    0    0

# The above would result in the brightest possible red 
# image your computer is capable of displaying

# Similar logic can be applied to green..

# Red Layer    
    0    0    0    0 
    0    0    0    0
    0    0    0    0
    0    0    0    0

# Green Layer
    255  255  255  255
    255  255  255  255
    255  255  255  255
    255  255  255  255

# Blue Layer
    0    0    0    0 
    0    0    0    0
    0    0    0    0
    0    0    0    0

# .. and blue as well

# Red Layer    
    0    0    0    0 
    0    0    0    0
    0    0    0    0
    0    0    0    0

# Green Layer
    0    0    0    0 
    0    0    0    0
    0    0    0    0
    0    0    0    0

# Blue Layer
    255  255  255  255
    255  255  255  255
    255  255  255  255
    255  255  255  255



# But what if we wanted a dimmer value of blue?
# Then we could simply lower the values in the blue layer

# Red Layer    
    0    0    0    0 
    0    0    0    0
    ...

# Green Layer
    0    0    0    0 
    0    0    0    0
    ...

# Blue Layer
    150  150  150  150
    150  150  150  150
    150  150  150  150
    150  150  150  150

# This would lower the overall intensity and result in a
# dimmer shade of blue
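
In code, the three layers are typically stored as one array with a channel dimension. A minimal NumPy sketch of the dim-blue image above (just an illustration; note that OpenCV itself stores channels in BGR order rather than RGB):

import numpy as np

# 4x4 image with 3 channels (R, G, B), all initially zero (black)
img = np.zeros((4, 4, 3), dtype=np.uint8)

# Fill only the blue channel with a moderate value -> a dimmer shade of blue
img[:, :, 2] = 150   # channel index 2 is blue in RGB ordering

print(img[:, :, 2])  # the blue layer, matching the grid above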

Alternate Color Spaces

HSV is another color space that represents pixels in terms of their hue, saturation, and value (intensity)

L*a*b* is a color space that defines pixels in terms of their lightness (L*) and two color components (a* and b*)

Both are used more commonly in research than RGB, as they separate intensity from color information and tend to give better results when processed

However, RGB is more intuitive for most people to understand, so we will be using RGB and grayscale in our labs

Just know that the other two color-spaces exist and how they represent color
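
If you do want to experiment with them, OpenCV can convert between color-spaces in a single call; a minimal sketch (the file name is only a placeholder):

import cv2

# OpenCV loads images in BGR channel order by default
img = cv2.imread("example.jpg")                 # placeholder file name

gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)    # single-channel intensity
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)      # hue, saturation, value
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)      # lightness, a*, b*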

Edge Detection

Naive Method

If I asked you to find an edge given a grayscale image represented by pixel-intensity values how would you do it?

Intuitively, you would likely use the technique described here by searching for sharp contrasts in intensity
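
As a rough sketch of that naive approach (assuming the image is a NumPy grayscale array and using an arbitrary threshold), you might compare each pixel to its horizontal neighbor and flag large jumps:

import numpy as np

def naive_edges(gray, threshold=50):
    # Difference between horizontally adjacent pixels
    # (cast to a signed type so the subtraction cannot wrap around)
    diff = np.abs(np.diff(gray.astype(np.int16), axis=1))
    # Mark a potential edge wherever the jump in intensity is large
    return diff > threshold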

This could work successfully if your image is composed of distinct boundaries

But what if it wasn't?

There are a lot of instances where this method performs poorly

Before we continue, though, let's first define image noise

Noise is simply the presence of incorrect pixel values

It can come in the form of little white specks, dark spots, or anything in between

Noise is more prominent in older photography, but also common in modern images when the ISO (sensor sensitivity) is raised for low-light photography
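
If you want to see its effect for yourself, you can add artificial salt-and-pepper noise to a clean image; a quick NumPy sketch (the 1% noise level is arbitrary):

import numpy as np

def add_salt_and_pepper(gray, amount=0.01):
    noisy = gray.copy()
    mask = np.random.rand(*gray.shape)
    noisy[mask < amount / 2] = 0          # dark spots
    noisy[mask > 1 - amount / 2] = 255    # white specks
    return noisy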

Now back to our main point

Using the previously described method, our detector will identify sharp contrasts in intensity as potential edge boundaries 

This means any noise in the image, which typically contrasts sharply with its neighboring pixels..

will likely be classified as a potential edge boundary

In other words, any substantial noise will throw off our previously described algorithm

So how do we define a method that accounts for noise and achieves more reliable accuracy?

Actual Method

The trick is to pre-process our image before searching for edges

This can be done using an image-kernel as described here 

Using an image kernel, we would apply an appropriate Gaussian blur
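
With OpenCV this smoothing step is a single call; a minimal sketch (the 5x5 kernel size is just a common starting point, and the file name is a placeholder):

import cv2

gray = cv2.imread("example.jpg", cv2.IMREAD_GRAYSCALE)   # placeholder file name

# Smooth with a 5x5 Gaussian kernel; sigma=0 lets OpenCV derive it from the kernel size
blurred = cv2.GaussianBlur(gray, (5, 5), 0)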

Once we have accounted for noise in our image we can start searching for edges

But how do we determine an appropriate threshold for classifying an edge when examining a pixel's neighbors?

This can be done using kernel-convolution as well

The Sobel operator applies gradient kernels to determine the magnitude and orientation of the intensity change at each pixel
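
In OpenCV the Sobel operator is applied once per direction, and the two results combine into a gradient magnitude and orientation; a brief sketch under the same placeholder-image assumption:

import cv2
import numpy as np

gray = cv2.imread("example.jpg", cv2.IMREAD_GRAYSCALE)   # placeholder file name
blurred = cv2.GaussianBlur(gray, (5, 5), 0)

# Horizontal and vertical intensity gradients
gx = cv2.Sobel(blurred, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(blurred, cv2.CV_64F, 0, 1, ksize=3)

magnitude = np.sqrt(gx**2 + gy**2)    # how strong the change is
orientation = np.arctan2(gy, gx)      # which direction it points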

The Canny edge detector then adds further processing steps, such as non-maximum suppression and hysteresis thresholding, to improve accuracy
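
OpenCV wraps the gradient computation, non-maximum suppression, and hysteresis thresholding into one function; a minimal sketch (the two thresholds are arbitrary and usually need tuning):

import cv2

gray = cv2.imread("example.jpg", cv2.IMREAD_GRAYSCALE)   # placeholder file name
blurred = cv2.GaussianBlur(gray, (5, 5), 0)

# Pixels above the high threshold start edges; pixels between the two
# thresholds are kept only if they connect to a strong edge
edges = cv2.Canny(blurred, 50, 150)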

Homographies

Everything

Watch the following lecture video on image geometry

The technical portion you are responsible for starts at 21:40

But you should at least be familiar with all of the information outlined in the lecture 
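
As a preview of where that material leads, OpenCV can estimate a homography from matched point pairs and warp one image into another's frame; a rough sketch with made-up coordinates (an illustration only, not the lecture's method):

import cv2
import numpy as np

# Four corresponding points in the source and destination images
# (coordinates are made up purely for illustration)
src_pts = np.float32([[0, 0], [300, 0], [300, 200], [0, 200]])
dst_pts = np.float32([[20, 30], [280, 10], [310, 220], [10, 190]])

H, _ = cv2.findHomography(src_pts, dst_pts)

img = cv2.imread("example.jpg")                  # placeholder file name
warped = cv2.warpPerspective(img, H, (img.shape[1], img.shape[0]))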

OpenCV Library

Installation

Instructions on how to install the library on your Pi are listed here

Note that you must install it using Python 2.7 to be compatible with the GoPiGo library

Whether or not you set up the virtual environment described there is up to you

It will be useful if you plan to continue using different libraries on your system, but it is not necessary for this course
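
Once the install finishes, a quick sanity check that the library is visible to Python:

import cv2

print(cv2.__version__)   # should print the installed OpenCV version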

Color-Based Detection

Shape Detection

Applying Homographies

References

Color Spaces

Edge Detection

Homographies

OpenCV
