My notes from lectures in computer vision at University of Twente.
Table of Contents:
- Learning Theory
- Putting it into Practice
Theory
What is Computer Vision's ultimate goal? → To emulate human vision: getting a computer to derive meaningful information from digital images.
What is an image?
An image can be defined from different perspectives:
Mathematical Perspective
From a mathematical perspective, an image is a function that quantifies light intensity on a 2D plane:
- $f(x, y)$ maps each point of the plane to a single intensity value, for a grayscale image.
- $f(x, y) = \big(r(x, y), g(x, y), b(x, y)\big)$ maps each point to three intensity values, for an RGB image.
Computer Perspective
From a computer science perspective, an image is a matrix with dimensions (h, w, 1) for grayscale images and (h, w, 3) for RGB images. The size of the last dimension is called the number of channels.
There are different pixel formats. We consider:
- L for grayscale
- RGB for colour
- 1 for black and white
Python library: Pillow (PIL)
- lets you extract the pixel mode used: look up the mode attribute of the Image object.
- lets you convert from one mode to another with Image.convert().
An image loaded with PIL (or another library) needs to be converted to a NumPy array:
img = np.array(image)
- the shape of img is then (h, w) for grayscale (add a channel axis to get (h, w, 1)) or (h, w, 3) for RGB.
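A minimal sketch of this workflow, assuming Pillow and NumPy are installed; the filename photo.jpg is a placeholder:

import numpy as np
from PIL import Image

# Load an image and inspect its pixel mode ("L", "RGB", "1", ...)
image = Image.open("photo.jpg")              # placeholder filename
print(image.mode)                            # e.g. "RGB"

# Convert between modes, e.g. RGB -> grayscale
gray = image.convert("L")

# Convert to NumPy arrays for further processing
print(np.array(gray).shape)                  # (h, w)
print(np.array(image.convert("RGB")).shape)  # (h, w, 3)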
Geometric Transformations
- Also known as image warping.
- Transformations that concern the shape of the image (the domain) and not the brightness/intensity (range) of the pixels themselves.
One idea behind using these transformations is data augmentation:
→ create more data "out of thin air" by applying transformations to existing images.
Image Scaling
To scale a point (x, y) by a scalar λ: $(x', y') = (\lambda x, \lambda y)$
To reverse this operation: $(x, y) = (x' / \lambda, y' / \lambda)$
Image Rotation
Image Mirroring
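As a reminder (standard forms, not copied from the slides), rotating a point about the origin by an angle θ and mirroring an image of width w horizontally can be written as:

$$\begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} \qquad \text{(rotation)}$$
$$(x', y') = (w - 1 - x,\ y) \qquad \text{(horizontal mirroring)}$$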
Edge Detection
- useful in detecting objects in an image, as edges mark the boundaries of objects.
- useful in image segmentation.
- it is a way to extract information from an image.
One way of detecting edges is to apply a filter to the image that highlights changes in light intensity. Existing edge-detection filters include, for example:
- Prewitt filter
- Sobel filter
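A minimal NumPy sketch of edge detection by convolution; the kernel is the standard 3 x 3 Prewitt kernel for vertical edges, and the toy image is my own example:

import numpy as np

# Prewitt kernel that responds to vertical edges (horizontal intensity changes)
prewitt_x = np.array([[-1, 0, 1],
                      [-1, 0, 1],
                      [-1, 0, 1]])

def apply_filter(image, kernel):
    # Slide the kernel over the image (valid positions only) and sum the products
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy 6 x 6 image with a sharp vertical edge in the middle
image = np.zeros((6, 6))
image[:, 3:] = 255
edges = apply_filter(image, prewitt_x)
print(edges.shape)  # (4, 4) -- see the feature map size formula further below
print(edges)        # large values along the edge, zero elsewhere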
Deep Neural Networks
A deep neural network stacks logistic neurons into layers:
- input layer
- hidden layers
- output layer
Works the same way as logistic regression:
- forward pass
- weights * input + bias → result
- loss function measures the quality of the prediction.
- back propagation: improves the weights and biases.
Examples of loss functions:
- mean squared error
- mean absolute loss
- categorical cross entropy
- binary cross entropy
Shape Check
(2, 3) · (3, 1) → (2, 1)
(2, 1) + (2, 1) → (2, 1)
(neurons L2, neurons L1) · (neurons L1, 1) + (neurons L2, 1)
(neurons L2, neurons L1) · (neurons L1, training set rows) + (neurons L2, 1)*
*broadcast to the number of training set rows.
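A quick NumPy check of these shapes (a sketch with made-up sizes: 3 neurons in L1, 2 neurons in L2, 5 training rows):

import numpy as np

n_l1, n_l2, m = 3, 2, 5            # neurons in L1, neurons in L2, training rows

W = np.random.randn(n_l2, n_l1)    # (2, 3)
b = np.random.randn(n_l2, 1)       # (2, 1)

a1 = np.random.randn(n_l1, 1)      # a single example: (3, 1)
print((W @ a1 + b).shape)          # (2, 1)

A1 = np.random.randn(n_l1, m)      # the whole training set: (3, 5)
print((W @ A1 + b).shape)          # (2, 5) -- b is broadcast across the 5 columns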
Forward Pass
binary classification → 1 neuron
multi-class classification → as many neurons as classes
Log Loss
Formula:
$$\text{Log Loss} = -\frac{1}{n_e} \sum_{i=1}^{n_e} \big[ y_i \log(p_i) + (1 - y_i) \log(1 - p_i) \big]$$
- variables
- $n_e$: number of examples (training rows).
- $y_i$: binary indicator (0/1) of whether the class label is the correct classification for observation $i$.
- $p_i$: predicted probability that observation $i$ belongs to the positive class.
So predicting a probability of .012 when the actual observation label is 1 would be bad and result in a high loss value. A perfect model would have a log loss of 0.
The graph shows the range of possible loss values given a true observation (isDog = 1). As the predicted probability approaches 1, log loss slowly decreases. As the predicted probability decreases, however, the log loss increases rapidly. Log loss penalizes both types of errors, but especially those predictions that are confident and wrong!
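A tiny numeric illustration of that penalty (my own numbers):

import numpy as np

def log_loss(y, p):
    # Binary cross entropy for a single prediction
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

print(log_loss(1, 0.9))    # ~0.105 -> confident and right: small loss
print(log_loss(1, 0.5))    # ~0.693 -> unsure: moderate loss
print(log_loss(1, 0.012))  # ~4.423 -> confident and wrong: large loss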
Back Propagation
Tweak all the parameters of the network to minimize the chosen loss function.
- we need to find out how to tweak the vector w of layer L
- we need to find out how to tweak the vector b of layer L
- recursively on all layers.
We want to check how sensitive $C_0$ is to small nudges in $w$ and $b$,
also known as the partial derivatives $\frac{\partial C_0}{\partial w}$ and $\frac{\partial C_0}{\partial b}$.
how $w$ affects $C_0$ is via $w \to z \to a \to C_0$
how $b$ affects $C_0$ is via $b \to z \to a \to C_0$
There is a term defined as $dZ$, which is $dZ = \frac{\partial C_0}{\partial a} \cdot \frac{\partial a}{\partial z}$ (how sensitive the cost is to the pre-activation $z$).
We need to take these partial derivatives; with $z = w \, a^{[L-1]} + b$, the chain rule gives:
$$\frac{\partial C_0}{\partial w} = a^{[L-1]} \cdot dZ \qquad \frac{\partial C_0}{\partial b} = dZ$$
Example:
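For instance, for a sigmoid output neuron trained with binary cross entropy (my own working of the standard result):

$$\frac{\partial C_0}{\partial a} = -\frac{y}{a} + \frac{1 - y}{1 - a}, \qquad \frac{\partial a}{\partial z} = a(1 - a) \;\Rightarrow\; dZ = a - y$$
$$\frac{\partial C_0}{\partial w} = a^{[L-1]} (a - y), \qquad \frac{\partial C_0}{\partial b} = a - y$$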
Learning Algorithm
- init W and b to random values.
- repeat for a predefined number of iterations
- forward pass
- gradient descent
- return the final values for W and b
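A minimal NumPy sketch of this loop for a single sigmoid neuron with binary cross entropy (my own toy data; a full network applies the same updates layer by layer):

import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Toy data: 4 training examples with 2 features each (columns are examples)
X = np.array([[0., 0., 1., 1.],
              [0., 1., 0., 1.]])
Y = np.array([[0, 1, 1, 1]])          # labels, shape (1, 4)
m = X.shape[1]

rng = np.random.default_rng(0)
W = rng.standard_normal((1, 2))       # init W and b to random values
b = np.zeros((1, 1))
lr = 0.5

for _ in range(1000):                 # predefined number of iterations
    # forward pass
    Z = W @ X + b
    A = sigmoid(Z)
    # gradient descent (using dZ = A - Y, see the example above)
    dZ = A - Y
    dW = (dZ @ X.T) / m
    db = dZ.mean(axis=1, keepdims=True)
    W -= lr * dW
    b -= lr * db

print(np.round(A, 2))                 # predictions move towards Y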
Sigmoid
image values can range from 0 to 255, which pushes the sigmoid into its flat, saturated region
→ apply feature scaling (e.g. divide by 255 so the inputs lie in [0, 1])
Exploding and vanishing gradients
- if your neural network grows deep, you might suffer from vanishing or exploding gradients.
- the weight updates start from the output layer and propagate backwards towards the input layer.
- the weight updates diminish as they traverse the network backwards, leaving some of the weights of the earlier layers almost unchanged (vanishing gradients).
- similarly, if you initialize your model parameters poorly, the cost and its gradients may become very large, making the parameter updates blow up (exploding gradients).
- the sigmoid is also problematic: its derivative is small for large |z|, and multiplying many such small derivatives makes gradients vanish.
Overfitting
Multiclassification
- binary classification → log loss / binary cross entropy, with a sigmoid output neuron
- multiclassification → categorical cross entropy, with a softmax output layer
$$\text{Categorical Cross Entropy} = -\sum_{c} y_c \log(p_c)$$
variables
- $y$ → one-hot encoded ground truth
- $p$ → predicted probability distribution (output of the softmax)
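A small NumPy sketch of softmax plus categorical cross entropy for one observation (my own example with three hypothetical classes):

import numpy as np

def softmax(z):
    # Turn raw scores into a probability distribution
    e = np.exp(z - z.max())            # shift for numerical stability
    return e / e.sum()

z = np.array([2.0, 1.0, 0.1])          # raw outputs of the last layer (3 classes)
p = softmax(z)                         # ~[0.66, 0.24, 0.10]
y = np.array([1, 0, 0])                # one-hot encoded ground truth

cce = -np.sum(y * np.log(p))           # categorical cross entropy
print(np.round(p, 2), round(cce, 3))   # ~0.417: fairly low, the correct class got 0.66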
Convolutional Neural Networks
The process of applying a filter (kernel) to an image is called a convolution.
Prewitt filter
Feature Map Size = (image height - kernel height + 1, image width - kernel width + 1)
(6 - 3 + 1, 6 - 3 + 1) → (4, 4)
assuming stride = 1
Stride
- horizontal stride = step to the right
- vertical stride = step down
- if you don't pick stride properly you ignore parts of the image.
The new image size when a kernel of size (k_h, k_w) is applied with stride s is:
$$\left( \left\lfloor \frac{h - k_h}{s} \right\rfloor + 1,\ \left\lfloor \frac{w - k_w}{s} \right\rfloor + 1 \right)$$
Padding
- to keep the dimensions of the image in the output we apply padding.
- solves the issue of the image shrinking in size; this way we can apply many filters in sequence.
- extra advantage: border pixels are visited more often, so their information is better represented in the output.
for example, padding a 6 x 6 image to 8 x 8 and applying a 3 x 3 kernel:
(8 - 3 + 1, 8 - 3 + 1) → (6, 6)
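A small helper (my own sketch) that computes this output size, reproducing both worked examples above; it also takes stride and padding into account:

def conv_output_size(image, kernel, stride=1, padding=0):
    # Output (height, width) of a convolution, per the formulas above
    h, w = image
    kh, kw = kernel
    return ((h + 2 * padding - kh) // stride + 1,
            (w + 2 * padding - kw) // stride + 1)

print(conv_output_size((6, 6), (3, 3)))             # (4, 4)
print(conv_output_size((6, 6), (3, 3), padding=1))  # (6, 6) -- same as padding to 8 x 8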
Filters
- Laplacian
- Sobel
- Prewitt
Prior to the emergence of deep learning, human experts designed these filters by hand.
With deep learning we can learn the right filters for the task at hand (classification, recognition, deblurring, etc.).
ReLU
- Our initial image is made of pixels, each of which quantifies the intensity of light (positive)
- while training our model, the values of the kernel might be negative, resulting in negative values for some pixels in the feature map (the output image).
- it suffices to convert the negative values to zero using ReLU: ReLU(x) = max(0, x).
Max Pooling
- locality of pixel dependencies
- neighboring pixels tend to be correlated.
- high intensity → neighbouring pixels have a high likelihood of being high as well.
- we can reduce the size of an image by considering only a pixel per block of pixels (size becomes a hyperparameter)
Resulting matrix size, for a 4 x 4 image with a 2 x 2 pool and stride 2:
(4 - 2)/2 + 1 = 2 per dimension
→ (2, 2)
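A minimal NumPy sketch of 2 x 2 max pooling with stride 2 on a 4 x 4 image (my own example):

import numpy as np

def max_pool(image, size=2, stride=2):
    # Keep only the maximum pixel of each size x size block
    h, w = image.shape
    out_h = (h - size) // stride + 1
    out_w = (w - size) // stride + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            block = image[i * stride:i * stride + size,
                          j * stride:j * stride + size]
            out[i, j] = block.max()
    return out

image = np.array([[1, 3, 2, 1],
                  [4, 6, 5, 0],
                  [7, 2, 9, 8],
                  [1, 0, 3, 4]])
print(max_pool(image))   # [[6. 5.]
                         #  [7. 9.]]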
CNN Layers
- input
- filter
- pooling → repeat.
- flattened input
- Deep Neural Network
Example network:
model = keras.models.Sequential()
model.add(keras.layers.Input((28, 28, 1)))                     # 28 x 28 grayscale input
model.add(keras.layers.Conv2D(8, (3, 3), activation='relu'))   # 8 learned filters -> (26, 26, 8)
model.add(keras.layers.MaxPooling2D((2, 2)))                   # -> (13, 13, 8)
model.add(keras.layers.Conv2D(32, (3, 3), activation='relu'))  # 32 learned filters -> (11, 11, 32)
model.add(keras.layers.MaxPooling2D((2, 2)))                   # -> (5, 5, 32)
model.add(keras.layers.Flatten())                              # -> vector of 800 values
model.add(keras.layers.Dense(100, activation='sigmoid'))       # fully connected layer
model.add(keras.layers.Dense(1, activation='sigmoid'))         # single output neuron
The generated model takes the grayscale (28 x 28) picture and applies 8 different filters to it; these filters are learned through gradient descent. The layer outputs 26 x 26 images to the next layer. The next layer is max pooling, which reduces each image to a 13 x 13 representation by taking the maximum "pixel" value in each 2 x 2 region. This process is then repeated with 32 filters, the outputs being first of shape (11, 11, 32) and then (5, 5, 32). After this convolutional part, the result is passed on to the deep neural network. This is done by flattening the 32 images of 5 x 5, which gives an 800-value array. This array is the input connected to a layer of 100 neurons, which then feeds the output classification with one neuron.
Putting it into Practice
Kaggle - MNIST Competition
Key Points:
- Uses Keras preprocessing layers instead of custom-written Python code.
- Uses Keras data augmentation layers instead of custom-written Python code (a sketch follows this list).
- models with an augmented training set outperform other models on the test set.
- achieves 98.7% accuracy on the competition test set.
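A minimal sketch of what such preprocessing and data augmentation layers can look like with TensorFlow's Keras; the specific layers and parameters are my assumptions, not the competition notebook's code:

from tensorflow import keras

# Hypothetical preprocessing + augmentation front-end for 28 x 28 MNIST digits
augmentation = keras.models.Sequential([
    keras.layers.Rescaling(1.0 / 255),          # preprocessing: scale pixels to [0, 1]
    keras.layers.RandomRotation(0.05),          # augmentation: small random rotations
    keras.layers.RandomTranslation(0.1, 0.1),   # augmentation: small random shifts
    keras.layers.RandomZoom(0.1),               # augmentation: small random zooms
])

model = keras.models.Sequential([
    keras.layers.Input((28, 28, 1)),
    augmentation,                               # augmentation is only active during training
    keras.layers.Conv2D(8, (3, 3), activation='relu'),
    keras.layers.MaxPooling2D((2, 2)),
    keras.layers.Flatten(),
    keras.layers.Dense(10, activation='softmax'),  # 10 digit classes
])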