
How to detect and match feature points in Python OpenCV


How do you detect and match feature points in Python OpenCV? To answer that question, this article walks through the analysis and solution in detail, in the hope of helping readers facing the same problem find a simple, workable approach.

Background

Extracting feature points from an image is a key task in computer vision. Whether in traditional methods or in deep learning, features carry the information of the image, which makes them very important for classification and detection tasks.

Some scenarios where feature points are applied:

Image search: searching by image (e-commerce, education)

Image stitching: panorama shooting (stitching related images)

Jigsaw puzzles: games

1. Harris Corner

Harris corner detection distinguishes three main situations:

Flat region: no matter which direction the window moves, the measurement coefficient stays the same.

Edge region: moving perpendicular to the edge changes the measurement coefficient strongly; moving along the edge does not.

Corner region: no matter which direction the window moves, the measurement coefficient changes strongly.
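For reference, the "measurement coefficient" above is the standard Harris response (a textbook detail, not spelled out in the original): R = det(M) - k * trace(M)^2, where M is the 2x2 gradient covariance matrix of each window. Flat regions give a small |R|, edges a negative R, and corners a large positive R.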

Function prototype:

cornerHarris(img, blockSize, ksize, k)

blockSize: size of the detection window

ksize: aperture size of the Sobel kernel used to compute gradients

k: weight coefficient, generally between 0.02 and 0.04

Code case:

import cv2

img = cv2.imread('chess.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
dst = cv2.cornerHarris(gray, 2, 3, 0.04)
img[dst > 0.01 * dst.max()] = (0, 0, 255)  # mark strong corners in red
cv2.imshow('harris', img)
cv2.waitKey(0)

2. Shi-Tomasi corner detection

Description: an improvement on Harris corner detection. Harris requires an empirical value of k; Shi-Tomasi does not.

Function prototype:

goodFeaturesToTrack(img, ...)

maxCorners: maximum number of corners to return; a value of 0 means all of them

qualityLevel: quality level of the corners, generally between 0.01 and 0.1 (corners below it are filtered out)

minDistance: minimum Euclidean distance between corners; points closer than this are ignored

mask: region of interest

useHarrisDetector: whether to use the Harris algorithm (default False)

Code case:

import cv2
import numpy as np

img = cv2.imread('chess.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
dst = cv2.goodFeaturesToTrack(gray, 1000, 0.01, 10)
dst = dst.astype(np.int64)  # the original used np.int0, an alias of np.int64
for i in dst:
    x, y = i.ravel()  # flatten to a one-dimensional array
    cv2.circle(img, (x, y), 3, (0, 0, 255), -1)
cv2.imshow('harris', img)
cv2.waitKey(0)

It works essentially the same way as Harris corner detection, but the results are better and more corner points are found.

3. SIFT Key Points

Full name: Scale-Invariant Feature Transform

Description: Harris corner detection is rotation-invariant, i.e. rotating the image does not affect detection; but it is not scale-invariant, so resizing the image does affect corner detection. SIFT has the scale-invariance property.

Implementation steps:

Create a SIFT object - detect key points (sift.detect) - draw the key points (drawKeypoints)

Code case:

import cv2

img = cv2.imread('chess.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
sift = cv2.xfeatures2d.SIFT_create()  # in newer OpenCV versions: cv2.SIFT_create()
kp = sift.detect(gray, None)  # the second parameter is the mask region
cv2.drawKeypoints(gray, kp, img)
cv2.imshow('sift', img)
cv2.waitKey(0)

4. SIFT descriptor

First of all, it is important to be clear that key points and descriptors are two different concepts.

Key points: position, size, and orientation

Key point descriptor: a vector recording how the pixels around a key point contribute to it; it is unaffected by affine transformations, illumination changes, and so on. Descriptors exist for the sake of feature matching.

The function that computes key points and descriptors at the same time (the one mainly used):

detectAndCompute(img, ...)

Code case:

import cv2

img = cv2.imread('chess.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
sift = cv2.xfeatures2d.SIFT_create()
kp, dst = sift.detectAndCompute(gray, None)  # the second parameter is the mask region

The returned dst holds the descriptor information.
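A quick sanity check (my addition, not in the original), continuing from the code above: each SIFT descriptor is a 128-dimensional vector, with one row per key point.

print(len(kp))    # number of detected key points
print(dst.shape)  # (number of key points, 128)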

5. SURF

Full name: Speeded-Up Robust Features

Description: SIFT's biggest drawback is that it is slow; SURF was created as a faster alternative.

The implementation steps are the same as SIFT, and the code is as follows:

surf = cv2.xfeatures2d.SURF_create()
kp, dst = surf.detectAndCompute(gray, None)  # the second parameter is the mask region
cv2.drawKeypoints(gray, kp, img)

Because the installed opencv-contrib is a recent version (SURF has patent restrictions), this feature is no longer available, so it is not demonstrated here.
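If you want to test whether your own build includes SURF, a minimal check like the one below (an assumption about your environment, not part of the original) avoids a hard crash; stock opencv-contrib-python wheels are usually built without the non-free modules:

import cv2

try:
    # SURF_create raises if OpenCV was built without OPENCV_ENABLE_NONFREE
    surf = cv2.xfeatures2d.SURF_create()
except (AttributeError, cv2.error):
    print('SURF unavailable; fall back to cv2.SIFT_create() or cv2.ORB_create().')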

6. ORB

Description: its biggest advantage is real-time detection; the drawback is that it discards a lot of information (lower accuracy).

It mainly combines two techniques: FAST (real-time key point detection) + BRIEF (fast descriptor construction, which shortens feature-matching time).

The procedure is the same as the previous SIFT, and the code is as follows:

import cv2

img = cv2.imread('chess.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
orb = cv2.ORB_create()
kp, dst = orb.detectAndCompute(gray, None)  # the second parameter is the mask region
cv2.drawKeypoints(gray, kp, img)
cv2.imshow('orb', img)
cv2.waitKey(0)

As you can see, there are fewer key points than with SIFT and SURF, but the speed is greatly improved.
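One caveat worth adding (my note, not in the original): ORB produces binary descriptors, so when feeding them to the brute-force matcher described in the next section, Hamming distance should be used instead of L1/L2:

bf = cv2.BFMatcher_create(cv2.NORM_HAMMING, crossCheck=True)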

7. Brute-Force Feature Matching (BF)

Matching principle: similar to an exhaustive search. Each feature descriptor in the first set is compared against every descriptor in the second set, a similarity is computed, and the closest match is returned.

Implementation steps:

Create the matcher: BFMatcher(normType, crossCheck)

Match the features: bf.match(des1, des2)

Draw the matched points: cv2.drawMatches(img1, kp1, img2, kp2, match, None)

Code case:

import cv2

img1 = cv2.imread('opencv_search.png')
img2 = cv2.imread('opencv_orig.png')
g1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
g2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)
sift = cv2.SIFT_create()
kp1, dst1 = sift.detectAndCompute(g1, None)  # the second parameter is the mask region
kp2, dst2 = sift.detectAndCompute(g2, None)
bf = cv2.BFMatcher_create(cv2.NORM_L1)
match = bf.match(dst1, dst2)
img3 = cv2.drawMatches(img1, kp1, img2, kp2, match, None)
cv2.imshow('result', img3)
cv2.waitKey(0)

As the result image shows, the matching works well, with only one mismatched feature point.
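A common refinement at this point (my addition, not in the original): bf.match returns DMatch objects carrying a distance attribute, so you can sort by distance and draw only the best matches to suppress stray mismatches:

match = sorted(match, key=lambda m: m.distance)
img3 = cv2.drawMatches(img1, kp1, img2, kp2, match[:20], None)  # keep the 20 best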

8. FLANN feature matching

Advantages: FLANN is faster when matching features in batches.

Disadvantages: because it uses approximate nearest neighbors, its accuracy is lower.

The implementation steps are the same as for brute-force matching; the code is as follows:

import cv2
import numpy as np

img1 = cv2.imread('opencv_search.png')
img2 = cv2.imread('opencv_orig.png')
g1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
g2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)
sift = cv2.SIFT_create()
kp1, dst1 = sift.detectAndCompute(g1, None)  # the second parameter is the mask region
kp2, dst2 = sift.detectAndCompute(g2, None)
index_params = dict(algorithm=1, trees=5)  # KD-tree index
search_params = dict(checks=50)
flann = cv2.FlannBasedMatcher(index_params, search_params)
matchs = flann.knnMatch(dst1, dst2, k=2)
good = []
for i, (m, n) in enumerate(matchs):
    if m.distance < 0.7 * n.distance:  # Lowe's ratio test
        good.append(m)
img3 = cv2.drawMatchesKnn(img1, kp1, img2, kp2, [good], None)
cv2.imshow('result', img3)
cv2.waitKey(0)

As the image above shows, far fewer feature points are matched than with brute-force matching, but the process is much faster.

9. Image Lookup

Implementation principle: feature matching + the homography matrix.

Introduction to the homography matrix: the figure above shows the original image photographed from two different angles, where H is the homography matrix; with this matrix one image can be transformed into the other.

Two functions implement the image-lookup feature:

findHomography(): obtains the homography matrix

perspectiveTransform(): applies the perspective transformation

The code is as follows:

import cv2
import numpy as np

img1 = cv2.imread('opencv_search.png')
img2 = cv2.imread('opencv_orig.png')
g1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
g2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)
sift = cv2.SIFT_create()
kp1, dst1 = sift.detectAndCompute(g1, None)  # the second parameter is the mask region
kp2, dst2 = sift.detectAndCompute(g2, None)
index_params = dict(algorithm=1, trees=5)
search_params = dict(checks=50)
flann = cv2.FlannBasedMatcher(index_params, search_params)
matchs = flann.knnMatch(dst1, dst2, k=2)
good = []
for i, (m, n) in enumerate(matchs):
    if m.distance < 0.7 * n.distance:
        good.append(m)
if len(good) >= 4:
    # build the arrays of source and target points
    srcPts = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dstPts = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    # obtain the homography matrix
    H, _ = cv2.findHomography(srcPts, dstPts, cv2.RANSAC, 5.0)
    h, w = img1.shape[:2]
    pts = np.float32([[0, 0], [0, h - 1], [w - 1, h - 1], [w - 1, 0]]).reshape(-1, 1, 2)
    # apply the perspective transformation
    dst = cv2.perspectiveTransform(pts, H)
    # draw the located region
    cv2.polylines(img2, [np.int32(dst)], True, (0, 0, 255))
else:
    print('good must contain at least 4 matches.')
    exit()
img3 = cv2.drawMatchesKnn(img1, kp1, img2, kp2, [good], None)
cv2.imshow('result', img3)
cv2.waitKey(0)
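As a possible follow-up (an extension, not shown in the original), the homography H found above can also warp the query image into the scene image's frame via cv2.warpPerspective, continuing from the code above:

# Overlay check: map img1 through H into img2's coordinate system.
warped = cv2.warpPerspective(img1, H, (img2.shape[1], img2.shape[0]))
cv2.imshow('warped', warped)
cv2.waitKey(0)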


This is the answer to the question of how to detect and match feature points in Python OpenCV. I hope the content above is helpful to you; if you still have unresolved doubts, you can follow the industry information channel for more related knowledge.
