In this issue, the editor will show you how to use OpenCV to quickly find image differences. The article is rich in content and approaches the topic from a professional point of view. I hope you get something out of it after reading.
We have previously seen how to use the Structural Similarity Index (SSIM) to compare two images with Python.
Using this method, we can easily determine whether two images are identical or differ due to slight image processing, compression artifacts, or purposeful tampering.
Today we will extend the SSIM method so that we can use OpenCV and Python to visualize the differences between images. Specifically, we will draw bounding boxes around the regions where two input images differ.
Image differences with OpenCV and Python
In order to compute the difference between two images, we will use the Structural Similarity Index, first introduced by Wang et al. in their 2004 paper, Image Quality Assessment: From Error Visibility to Structural Similarity. This method has already been implemented in the scikit-image library for image processing.
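For reference, and not as anything specific to this tutorial, the SSIM of two image windows x and y as defined in the Wang et al. paper is:

SSIM(x, y) = \frac{(2\mu_x\mu_y + c_1)(2\sigma_{xy} + c_2)}{(\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2 + \sigma_y^2 + c_2)}

where \mu_x and \mu_y are the local means, \sigma_x^2 and \sigma_y^2 the local variances, \sigma_{xy} the covariance, and c_1, c_2 small constants that stabilize the division. scikit-image evaluates this over a sliding window and averages the per-window values into a single score.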
The trick is learning how to accurately determine where the images differ in terms of (x, y)-coordinate locations.
To do this, we first need to make sure our system has Python, OpenCV, scikit-image, and imutils installed.
You can use my OpenCV installation tutorial to learn how to configure and install Python and OpenCV on your system.
If you do not already have scikit-image installed or it needs upgrading, upgrade it via:
$ pip install --upgrade scikit-image
While you are here, continue to install / upgrade imutils:
$ pip install --upgrade imutils
Now that our system has the prerequisites in place, let's continue.
Calculate the image difference
Can you find the difference between the two images?
Figure 1: manually checking for differences between two input images (source)
If you spend a second studying these two credit cards, you will notice that the MasterCard logo appears on the left image, but has been removed from the right image.
You may have noticed the difference immediately, or it may have taken a few seconds. Either way, this illustrates an important aspect of comparing image differences: sometimes they are so subtle that the naked eye struggles to spot them right away (we will see an example of such an image later in this post).
So why is it so important to calculate image differences?
One example is phishing. Attackers can manipulate images slightly to trick unsuspecting users who do not verify the URL into thinking they are logging in to their bank's website, only to find out later that it was a hoax.
Comparing logos and known user interface (UI) elements on a web page against an existing dataset can help reduce phishing attacks (thanks to Chris Cleveland for passing along PhishZoo: Detecting Phishing Websites by Looking at Them as an example of applying computer vision to phishing prevention).
Developing a phishing detection system is obviously far more complex than computing simple image differences, but we can still apply these techniques to determine whether a given image has been manipulated.
Now, let's compute the difference between two images and view the differences side by side using OpenCV, scikit-image, and Python.
Open a new file, name it image_diff.py, and insert the following code:
# import the necessary packages
from skimage.measure import compare_ssim
import argparse
import imutils
import cv2

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-f", "--first", required=True,
    help="first input image")
ap.add_argument("-s", "--second", required=True,
    help="second input image")
args = vars(ap.parse_args())

# load the two input images
imageA = cv2.imread(args["first"])
imageB = cv2.imread(args["second"])

# convert the images to grayscale
grayA = cv2.cvtColor(imageA, cv2.COLOR_BGR2GRAY)
grayB = cv2.cvtColor(imageB, cv2.COLOR_BGR2GRAY)

# compute the Structural Similarity Index (SSIM) between the two
# images, ensuring that the difference image is returned
(score, diff) = compare_ssim(grayA, grayB, full=True)
diff = (diff * 255).astype("uint8")
print("SSIM: {}".format(score))

# threshold the difference image, followed by finding contours to
# obtain the regions of the two input images that differ
thresh = cv2.threshold(diff, 0, 255,
    cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)[1]
cnts = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL,
    cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)

# loop over the contours
for c in cnts:
    # compute the bounding box of the contour and then draw the
    # bounding box on both input images to represent where the two
    # images differ
    (x, y, w, h) = cv2.boundingRect(c)
    cv2.rectangle(imageA, (x, y), (x + w, y + h), (0, 0, 255), 2)
    cv2.rectangle(imageB, (x, y), (x + w, y + h), (0, 0, 255), 2)

# show the output images
cv2.imshow("Original", imageA)
cv2.imshow("Modified", imageB)
cv2.imshow("Diff", diff)
cv2.imshow("Thresh", thresh)
cv2.waitKey(0)
Lines 2-5 show our imports. We will use compare_ssim (from scikit-image), argparse, imutils, and cv2 (OpenCV).
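One caveat that is not part of the original walkthrough: in newer releases of scikit-image (0.16 and later), compare_ssim was deprecated in favor of skimage.metrics.structural_similarity, and it was removed entirely in 0.18. A small compatibility import like the sketch below should keep the rest of the script unchanged, whichever version you have installed:

# compatibility sketch: try the old location first, fall back to the new one
try:
    # scikit-image < 0.18 still exposes compare_ssim here
    from skimage.measure import compare_ssim
except ImportError:
    # scikit-image >= 0.16 provides the same function under a new name
    from skimage.metrics import structural_similarity as compare_ssim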
We set up two command-line arguments, --first and --second, which are the paths to the two input images we want to compare (lines 8-13).
Next, we will load each image from disk and convert it to grayscale:
We load our first and second images, --first and --second, on lines 16 and 17, storing them as imageA and imageB respectively.
Then we convert each to grayscale on lines 20 and 21.
Next, let's calculate the structural similarity index (SSIM) between two grayscale images.
Using the compare_ssim function in scikit-image, we calculate the score and the difference image diff (line 25).
The score represents the structural similarity index between the two input images. This value can fall in the range [-1, 1], with a value of 1 being a "perfect match".
The diff image contains the actual differences between the two input images that we wish to visualize. The difference image is currently represented as a floating-point data type in the range [0, 1], so we first convert the array to 8-bit unsigned integers in the range [0, 255] (line 26) before we can further process it with OpenCV.
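As a quick sanity check (a minimal sketch that is not part of the original script; the image path below is just a placeholder), comparing a grayscale image with an unmodified copy of itself should give a score of 1.0 and a converted diff that is uniformly 255:

import cv2
from skimage.measure import compare_ssim

gray = cv2.cvtColor(cv2.imread("images/original_02.png"), cv2.COLOR_BGR2GRAY)
(score, diff) = compare_ssim(gray, gray.copy(), full=True)
diff = (diff * 255).astype("uint8")
print(score)       # 1.0, since the images are identical
print(diff.min())  # 255 everywhere, since nothing differs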
Now, let's find the contours so that we can place rectangles around the regions marked as "different":
On lines 31 and 32, we threshold our diff image using both cv2.THRESH_BINARY_INV and cv2.THRESH_OTSU; the vertical bar | applies both flags with a bitwise OR. For more information about Otsu's bimodal thresholding, see the OpenCV documentation.
Then we find the contours of thresh on lines 33-35. The call to imutils.grab_contours on line 35 simply accommodates the different return signatures of cv2.findContours across OpenCV versions.
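If you would rather not rely on imutils for this step, the same version handling can be written by hand. This is just a sketch based on how the cv2.findContours return signature has changed between OpenCV releases (OpenCV 3.x returns a 3-tuple, while OpenCV 2.4 and 4.x return a 2-tuple):

cnts = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL,
    cv2.CHAIN_APPROX_SIMPLE)
# pick whichever element of the returned tuple actually holds the contours
cnts = cnts[0] if len(cnts) == 2 else cnts[1]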
The image in figure 4 below clearly shows the ROI of the image that has been manipulated:
Figure 4: using threshold processing to highlight image differences using OpenCV and Python.
Starting on line 38, we loop over our contours, cnts. First, we compute the bounding box around each contour using the cv2.boundingRect function, storing the relevant (x, y)-coordinates as x and y and the width/height of the rectangle as w and h.
We then use these values to draw a red rectangle on each image with cv2.rectangle (lines 43 and 44).
Finally, we display our comparison images along with the difference image and the thresholded image (lines 47-50).
We call cv2.waitKey on line 51, which makes the program wait until a key is pressed (at which point the script exits).
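If you are running the script on a headless machine (a remote server with no display, for example), the cv2.imshow and cv2.waitKey calls will fail. A simple alternative, sketched below with placeholder filenames, is to write the results to disk instead:

# save the results instead of displaying them (useful on headless systems)
cv2.imwrite("output_original.png", imageA)
cv2.imwrite("output_modified.png", imageB)
cv2.imwrite("output_diff.png", diff)
cv2.imwrite("output_thresh.png", thresh)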
Next, let's run the script and visualize some image differences.
Visual image difference
Using this script and the following command, we can quickly and easily highlight the difference between the two images:
$ python image_diff.py --first images/original_02.png --second images/modified_02.png
As you can see in figure 6, both the security chip and the name of the account holder have been deleted:
Let's try another example of calculating image differences, this time with a check written by President Gerald R. Ford (source).
By running the following command and providing the relevant images, we can see that the difference here is more subtle:
$ python image_diff.py --first images/original_03.png --second images/modified_03.png
Notice the following changes in figure 7:
Betty Ford's name was deleted.
The check number has been deleted.
The symbol next to the date has been deleted.
The last name has been deleted.
In complex images such as checks, it is often difficult to find all of the differences with the naked eye. Fortunately, we can now easily compute the differences and visualize the results with this handy script built with Python, OpenCV, and scikit-image.
The above is what the editor has shared with you about how to use OpenCV to quickly find image differences. If you happen to have similar doubts, you may refer to the analysis above. If you want to learn more, you are welcome to follow the industry information channel.