Find similarities between two images with OpenCV and Python
We have seen in the previous tutorial how to check whether two images are completely equal (same size, same channels, and same pixel values).
But what if they’re not equal?
The subtraction method doesn’t work anymore: we can’t subtract the pixels of images that have different sizes, so we would get an error.
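As a quick recap, here is a minimal sketch of that equality check from the previous tutorial (the file names are just placeholders):

import cv2

original = cv2.imread("original.jpg")
duplicate = cv2.imread("duplicate.jpg")

# The subtraction only makes sense when size and channels match
if original.shape == duplicate.shape:
    difference = cv2.subtract(original, duplicate)
    b, g, r = cv2.split(difference)
    if cv2.countNonZero(b) == 0 and cv2.countNonZero(g) == 0 and cv2.countNonZero(r) == 0:
        print("The images are completely equal")
    else:
        print("The images are different")
else:
    print("The images have different sizes or channels")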
In this article you will learn how to compare and find similarities between two images when they’re similar but not exactly identical.
For example, you can take an image and compare it with the same image after different filters have been applied to it.
As an example I took the image below (the Golden Gate Bridge in San Francisco), then I applied different filters and edits to it, as you can see in the images below.

The images below are a few examples of the edits that were made to the original picture: blue, blurred, cartoonized, exposure, mixed colors, old photo, overlay, portion of image, rotated, sharpened, sunburst, textured.
Looking for image similarities when they’re not equal:
The approach we’re going to use to find similarities when the images are not equal is feature detection and feature matching.
We find the features of both images.
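To do that we first need to load the original image and the edited image we want to compare; here is a minimal sketch (the file names are just placeholders):

import cv2

# Load the original image and the edited image to compare (placeholder file names)
original = cv2.imread("original_golden_bridge.jpg")
image_to_compare = cv2.imread("blurred.jpg")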

First we load the SIFT algorithm.
Then we find the keypoints and descriptors of the original image and of the image to compare, as shown in the code below.
# 2) Check for similarities between the 2 images
sift = cv2.xfeatures2d.SIFT_create()  # in OpenCV 4.4+ you can use cv2.SIFT_create() instead
kp_1, desc_1 = sift.detectAndCompute(original, None)
kp_2, desc_2 = sift.detectAndCompute(image_to_compare, None)
Next we load the FlannBasedMatcher, which is the method used to find the matches between the descriptors of the 2 images.
Then we find the matches between the 2 images and store them in the array ‘matches’.
The array will contain all possible matches, including many false matches.
index_params = dict(algorithm=0, trees=5)
search_params = dict()
flann = cv2.FlannBasedMatcher(index_params, search_params)
matches = flann.knnMatch(desc_1, desc_2, k=2)
In this part we apply the ratio test to select only the good matches.
The quality of a match is defined by the distance. The distance is a number, and the lower this number is, the more similar the features are.
By applying the ratio test we can decide to take only the matches with a lower distance, so a higher quality.
If you decrease the ratio value, for example to 0.1, you will get really high quality matches, but the downside is that you will get only a few matches.
If you increase it, you will get more matches, but sometimes also many false ones.
good_points = []
ratio = 0.6
for m, n in matches:
    if m.distance < ratio * n.distance:
        good_points.append(m)
print(len(good_points))
result = cv2.drawMatches(original, kp_1, image_to_compare, kp_2, good_points, None)
In this last part we show all the images on the screen.
cv2.imshow("result", result) cv2.imshow("Original", original) cv2.imshow("Duplicate", image_to_compare) cv2.waitKey(0) cv2.destroyAllWindows()
