Feature Detection from an Image using OpenCV

Introduction to Feature Detection from an Image

Before going deeper, we need to understand what Feature Detection from an image means. In short, we are looking for specific patterns or features that are unique, can be easily tracked, and can be easily compared across images; finding such patterns is what we call Feature Detection.

Think of how you play a jigsaw puzzle: you get a lot of small pieces of an image and you assemble them into one big, meaningful picture. The question is, how do you do it?

What about projecting the same theory to a computer program so that computers can play jigsaw puzzles? If the computer can play jigsaw puzzles, why can’t we give it a lot of real-life images of good natural scenery and tell it to stitch all those images into one big image? If the computer can stitch several natural images into one, what about giving it a lot of pictures of a building or any structure and telling it to create a 3D model out of them?

Well, the questions and imaginations continue. But it all depends on the most basic question: How do you play jigsaw puzzles? How do you arrange lots of scrambled image pieces into a big single image? How can you stitch a lot of natural images into a single image?

If we try to define such features, we may find it difficult to express them in words, but we know what they are. If someone asks you to point out one good feature that can be compared across several images, you can point one out. That is why even small children can simply play these games. We search for these features in an image, find them, look for the same features in other images, and align them. That’s it. (In a jigsaw puzzle, we look more at the continuity between different pieces.) All these abilities are present in us inherently.

What are these features?

It is difficult to say how humans find these features; it is already programmed into our brains. But if we look closely at some pictures and search for different patterns, we will find something interesting. For example, take the image below:

The image is very simple. At the top of the image, six small image patches are given. The question for you is to find the exact location of these patches in the original image. How many correct results can you find?

A and B are flat surfaces, and they are spread over a large area, so it is difficult to find the exact location of these patches.

C and D are much simpler. They are edges of the building. You can find an approximate location, but the exact location is still difficult, because the pattern is the same everywhere along the edge. Perpendicular to the edge, however, it is different. An edge is therefore a better feature than a flat area, but still not good enough (it is good in a jigsaw puzzle for comparing the continuity of edges).

Finally, E and F are corners of the building, and they can be found easily, because wherever you move the patch at a corner, it looks different.

So we have answered our first question: what are these features? But the next question arises: how do we find them? Or, more specifically, how do we find the corners?

We answered that intuitively: look for the regions in the image that show maximum variation when moved (by a small amount) in all directions around them. In the following sections, this idea is translated into computer language. Finding these image features is called Feature Detection.
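Before moving on, here is a tiny, self-contained sketch of that intuition, assuming nothing more than NumPy: we build a synthetic image with a single corner and measure how much a small patch changes when it is shifted around. The helper name shift_variation, the synthetic image, and the chosen coordinates are all made up for illustration; this is not the detector used later, just the "move the window and see how much it changes" idea.

# A minimal sketch (not the Harris algorithm itself) of the "shift the patch
# and measure the change" idea, using a synthetic image with one corner.
import numpy as np

# Synthetic 20x20 image: bright square in the lower-right quadrant,
# so (10, 10) is a corner, (15, 10) lies on an edge, (15, 15) is flat.
img = np.zeros((20, 20), dtype=np.float64)
img[10:, 10:] = 1.0

def shift_variation(image, y, x, half=2, max_shift=2):
    """Sum of squared differences between a patch centred at (y, x)
    and the same patch shifted by every (u, v) in a small neighbourhood."""
    patch = image[y - half:y + half + 1, x - half:x + half + 1]
    total = 0.0
    for u in range(-max_shift, max_shift + 1):
        for v in range(-max_shift, max_shift + 1):
            shifted = image[y + u - half:y + u + half + 1,
                            x + v - half:x + v + half + 1]
            total += np.sum((shifted - patch) ** 2)
    return total

print("flat  :", shift_variation(img, 15, 15))  # ~0, the patch never changes
print("edge  :", shift_variation(img, 15, 10))  # changes only across the edge
print("corner:", shift_variation(img, 10, 10))  # changes in every direction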

We found the features in the images. Once you have found them, you should be able to find the same features in other images. How is this done? We take a region around the feature and describe it in our own words, like “the upper part is blue sky, the lower part is part of a building, and on that building there is glass, etc.”, and then we search for the same area in the other images. Basically, we are describing the feature. Similarly, a computer should also describe the region around a feature so that it can find it in other images. This description is called Feature Description. Once you have the features and their descriptions, you can find those features in all the images and align them, stitch them together, or do whatever you want.
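As a rough sketch of that detect–describe–match pipeline (not the method developed in the rest of this article), ORB, one of the detector/descriptor pairs bundled with OpenCV, could be used as follows. The two filenames are placeholders for a pair of overlapping photos of the same scene.

# A sketch of detect -> describe -> match using ORB. The filenames are
# placeholders for two overlapping photos of the same scene.
import cv2 as cv

img1 = cv.imread("scene_left.jpg", cv.IMREAD_GRAYSCALE)
img2 = cv.imread("scene_right.jpg", cv.IMREAD_GRAYSCALE)

orb = cv.ORB_create()
# Feature Detection + Feature Description in one call: keypoints are the
# "interesting" locations, descriptors summarise the region around each
# keypoint so it can be recognised in another image.
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Match descriptors between the two images (Hamming distance suits ORB's
# binary descriptors); the best matches tell us which features correspond.
bf = cv.BFMatcher(cv.NORM_HAMMING, crossCheck=True)
matches = sorted(bf.match(des1, des2), key=lambda m: m.distance)

matched = cv.drawMatches(img1, kp1, img2, kp2, matches[:20], None)
cv.imshow("matches", matched)
cv.waitKey(0)
cv.destroyAllWindows()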

Harris Corner Detection

From the discussion above, we saw that corners are regions in the image with large variations in intensity in all directions. One early attempt to find these corners was made by Chris Harris and Mike Stephens in their 1988 paper A Combined Corner and Edge Detector, so the method is now called the Harris Corner Detector. They turned this simple idea into a mathematical form. It basically finds the difference in intensity for a displacement of (u, v) in all directions:

E(u,v) = Σ_(x,y) w(x,y) [I(x+u, y+v) − I(x,y)]^2

The window function w(x,y) is either a rectangular window or a Gaussian window that gives weights to the pixels underneath it.

We have to maximize this function E(u,v) for corner detection, which means we have to maximize the second term. Applying a Taylor expansion to the above equation and a few mathematical steps, we get the final expression:

E(u,v) ≈ [u v] M [u v]^T

where

M = Σ_(x,y) w(x,y) [ Ix*Ix  Ix*Iy ]
                   [ Ix*Iy  Iy*Iy ]

Here, Ix and Iy are the image derivatives in the x and y directions respectively (they can be computed with the Sobel operator).

Then comes the main part: they created a score, basically an equation, which determines whether a window contains a corner or not.

R=det(M)−k(trace(M))^2

where

  • det(M)=λ1λ2
  • trace(M)=λ1+λ2
  • λ1 and λ2 are the eigenvalues of M

So the magnitudes of these eigenvalues decide whether a region is a corner, an edge, or flat:

  • When |R| is small, which happens when λ1 and λ2 are small, the region is flat.
  • When R < 0, which happens when λ1 >> λ2 or vice versa, the region is an edge.
  • When R is large, which happens when λ1 and λ2 are large and comparable, the region is a corner.
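To make the formulas above concrete, here is a rough, hand-rolled sketch of the score for a single window, using a rectangular window and Sobel derivatives. The function name harris_score, the window size, and the pixel coordinates are our own choices for illustration; cv.cornerHarris, shown in the next section, does all of this internally for every pixel at once.

# A hand-rolled sketch of the Harris score for one window, for illustration only.
import cv2 as cv
import numpy as np

gray = np.float32(cv.imread("chess_board.jpeg", cv.IMREAD_GRAYSCALE))

# Image derivatives Ix and Iy via the Sobel operator.
Ix = cv.Sobel(gray, cv.CV_32F, 1, 0, ksize=3)
Iy = cv.Sobel(gray, cv.CV_32F, 0, 1, ksize=3)

def harris_score(y, x, half=1, k=0.04):
    """Build M for a (2*half+1)^2 rectangular window centred at (y, x)
    and return R = det(M) - k*(trace(M))^2."""
    wx = Ix[y - half:y + half + 1, x - half:x + half + 1]
    wy = Iy[y - half:y + half + 1, x - half:x + half + 1]
    M = np.array([[np.sum(wx * wx), np.sum(wx * wy)],
                  [np.sum(wx * wy), np.sum(wy * wy)]])
    l1, l2 = np.linalg.eigvalsh(M)           # eigenvalues λ1, λ2 of M
    return l1 * l2 - k * (l1 + l2) ** 2      # det(M) - k*(trace(M))^2

# Large positive R -> corner, negative R -> edge, |R| small -> flat.
# (50, 50) is an arbitrary pixel; try coordinates on and off a corner.
print(harris_score(50, 50))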

So the result of Harris Corner Detection is a grayscale image with these scores. Thresholding it at a suitable value gives you the corners in the image. We will demonstrate this with a simple image.

Harris Corner Detector in OpenCV

OpenCV has the function cv.cornerHarris() for this purpose. Its arguments are:

  • img – Input image. It should be grayscale and float32 type.
  • blockSize – It is the size of the neighbourhood considered for corner detection.
  • ksize – Aperture parameter of the Sobel derivative used.
  • k – Harris detector free parameter in the equation.
# import required libraries
import cv2 as cv
import numpy as np

# read the image
filename = "chess_board.jpeg"
img = cv.imread(filename)

# convert from BGR to grayscale; cornerHarris expects a float32 grayscale image
gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY)
gray = np.float32(gray)

# compute the Harris response: blockSize=2, ksize=3 (Sobel aperture), k=0.04
dst = cv.cornerHarris(gray, 2, 3, 0.04)

# result is dilated for marking the corners, not important
dst = cv.dilate(dst, None)

# threshold for an optimal value; it may vary depending on the image
img[dst > 0.01 * dst.max()] = [0, 0, 255]

cv.imshow('dst', img)
if cv.waitKey(0) & 0xff == 27:
    cv.destroyAllWindows()
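A quick note on the numbers used above: blockSize=2, ksize=3, and k=0.04 are the values commonly shown in the OpenCV documentation for this function; k is usually chosen around 0.04–0.06, and the threshold of 1% of the maximum response is only a starting point that you may need to tune for your own images, as the comment in the code suggests.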
