
How to Implement Planar Object Recognition and Perspective Transformation in OpenCV 3/C++


This article explains how to implement planar object recognition and perspective transformation in OpenCV 3/C++. The method introduced here is simple, fast and practical, so interested readers may wish to follow along.

findHomography()

The function findHomography() finds the perspective transformation H between two planes.

Parameter description:

Mat findHomography(
    InputArray srcPoints,              // coordinates of the points in the source plane
    InputArray dstPoints,              // coordinates of the points in the target plane
    int method = 0,                    // method used to compute the homography matrix
    double ransacReprojThreshold = 3,  // maximum allowed reprojection error to treat a point pair as an inlier (RANSAC and RHO only)
    OutputArray mask = noArray(),      // optional output mask set by a robust method (RANSAC or LMEDS)
    const int maxIters = 2000,         // maximum number of RANSAC iterations; 2000 is the maximum it can reach
    const double confidence = 0.995    // confidence level
);
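As a quick illustration (the point coordinates below are made up for this sketch and are not part of the article's example), a minimal call with the default method could look like this:

#include <opencv2/opencv.hpp>
#include <iostream>

using namespace cv;

int main()
{
    // four (or more) corresponding points in the source and destination planes
    std::vector<Point2f> srcPts = { Point2f(0, 0), Point2f(100, 0), Point2f(100, 100), Point2f(0, 100) };
    std::vector<Point2f> dstPts = { Point2f(10, 5), Point2f(110, 10), Point2f(105, 115), Point2f(5, 110) };

    // default method (0): a simple least-squares estimate using all point pairs
    Mat H = findHomography(srcPts, dstPts);
    std::cout << "H =\n" << H << std::endl;
    return 0;
}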

The methods used to calculate the homography matrix are:

0: a regular method using all the points (least squares)

RANSAC: a RANSAC-based robust method

LMEDS: a least-median robust method

RHO: a PROSAC-based robust method

The homography H is chosen so that the back-projection error

sum_i [ (x'_i - (h11*x_i + h12*y_i + h13)/(h31*x_i + h32*y_i + h33))^2 + (y'_i - (h21*x_i + h22*y_i + h23)/(h31*x_i + h32*y_i + h33))^2 ]

is minimized. If the parameter method is set to the default value 0, the function uses all point pairs to compute the initial homography estimate with a simple least-squares scheme.

However, if not all of the point pairs fit the rigid perspective transformation (that is, there are some outliers), this initial estimate will be poor. In that case one of the three robust methods can be used. RANSAC, LMedS and RHO try many random subsets of the corresponding point pairs (four pairs each), estimate the homography matrix from each subset with a simple least-squares algorithm, and then compute the quality/goodness of that homography (the number of inliers for RANSAC, or the median re-projection error for LMedS). The best subset is then used to produce the initial estimate of the homography matrix and the mask of inliers/outliers.

Regardless of whether the method is robust or not, the computed homography matrix is further refined with the Levenberg-Marquardt method (using only the inliers in the case of a robust method) to reduce the re-projection error even more.

The RANSAC and RHO methods can handle practically any ratio of outliers, but they need a threshold to distinguish inliers from outliers. The LMedS method does not need any threshold, but it works correctly only when inliers make up more than 50% of the points. Finally, if there are no outliers and the noise is rather small, use the default method (method = 0).
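To make this concrete, here is a small self-contained sketch with invented correspondences (a pure translation by (10, 5) plus one deliberately wrong pair acting as an outlier); it runs RANSAC with a 3-pixel threshold and inspects the inlier mask. The variable names are chosen for this example only:

#include <opencv2/opencv.hpp>
#include <cstdio>

using namespace cv;

int main()
{
    // five correspondences that follow a pure translation by (10, 5),
    // plus one deliberately wrong pair acting as an outlier
    std::vector<Point2f> srcPts = { Point2f(0, 0), Point2f(100, 0), Point2f(100, 100),
                                    Point2f(0, 100), Point2f(50, 50), Point2f(80, 20) };
    std::vector<Point2f> dstPts = { Point2f(10, 5), Point2f(110, 5), Point2f(110, 105),
                                    Point2f(10, 105), Point2f(60, 55), Point2f(300, 300) };

    // RANSAC with a 3-pixel reprojection threshold; the mask marks the accepted inliers
    Mat inlierMask;
    Mat H = findHomography(srcPts, dstPts, RANSAC, 3.0, inlierMask);

    printf("inliers: %d of %d\n", countNonZero(inlierMask), (int)srcPts.size());
    return 0;
}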

perspectiveTransform()

The function perspectiveTransform() performs the perspective matrix transformation of vectors of points.

Parameter description:

void perspectiveTransform(
    InputArray src,    // input two-channel or three-channel floating-point array/image
    OutputArray dst,   // output array/image of the same size and type as src
    InputArray m       // 3x3 or 4x4 floating-point transformation matrix
);
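For illustration, a minimal sketch (with a made-up 3x3 homography, not part of the article's example) that maps a few points through the transformation:

#include <opencv2/opencv.hpp>
#include <iostream>

using namespace cv;

int main()
{
    // a made-up 3x3 homography: scale by 2, then translate by (10, 20)
    Mat H = (Mat_<double>(3, 3) << 2, 0, 10,
                                   0, 2, 20,
                                   0, 0, 1);

    // perspectiveTransform expects a vector (or multi-channel array) of points
    std::vector<Point2f> src = { Point2f(0, 0), Point2f(50, 0), Point2f(50, 50), Point2f(0, 50) };
    std::vector<Point2f> dst;
    perspectiveTransform(src, dst, H);

    for (size_t i = 0; i < src.size(); i++)
        std::cout << src[i] << " -> " << dst[i] << std::endl;
    return 0;
}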

Planar object recognition:

#include <opencv2/opencv.hpp>
#include <opencv2/xfeatures2d.hpp>
#include <algorithm>
#include <cstdio>

using namespace cv;
using namespace cv::xfeatures2d;

int main()
{
    Mat src1, src2;
    src1 = imread("E:/image/image/card.jpg");
    src2 = imread("E:/image/image/cards.jpg");
    if (src1.empty() || src2.empty()) {
        printf("can not load images....\n");
        return -1;
    }
    imshow("image1", src1);
    imshow("image2", src2);

    // SURF feature detector
    int minHessian = 400;
    Ptr<SURF> detector = SURF::create(minHessian);
    std::vector<KeyPoint> keypoints1, keypoints2;
    Mat descriptor1, descriptor2;
    // detect keypoints and compute descriptors
    detector->detectAndCompute(src1, Mat(), keypoints1, descriptor1);
    detector->detectAndCompute(src2, Mat(), keypoints2, descriptor2);

    // FLANN-based descriptor matcher: find the best match in descriptor2 for each descriptor1 row
    FlannBasedMatcher matcher;
    std::vector<DMatch> matches;
    matcher.match(descriptor1, descriptor2, matches);

    double minDist = 1000;
    double maxDist = 0;
    for (int i = 0; i < descriptor1.rows; i++) {
        double dist = matches[i].distance;
        printf("%f \n", dist);
        if (dist > maxDist) { maxDist = dist; }
        if (dist < minDist) { minDist = dist; }
    }

    // keep only the "good" matches whose distance is small enough
    std::vector<DMatch> goodMatches;
    for (int i = 0; i < descriptor1.rows; i++) {
        double dist = matches[i].distance;
        if (dist < std::max(2 * minDist, 0.02)) {
            goodMatches.push_back(matches[i]);
        }
    }

    Mat matchesImg;
    drawMatches(src1, keypoints1, src2, keypoints2, goodMatches, matchesImg,
                Scalar::all(-1), Scalar::all(-1), std::vector<char>(),
                DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS);

    // collect the coordinates of the good matches
    std::vector<Point2f> point1, point2;
    for (size_t i = 0; i < goodMatches.size(); i++) {
        point1.push_back(keypoints1[goodMatches[i].queryIdx].pt);
        point2.push_back(keypoints2[goodMatches[i].trainIdx].pt);
    }

    // estimate the homography between the two planes
    Mat H = findHomography(point1, point2, RANSAC);

    // map the corners of the object image into the scene image
    std::vector<Point2f> cornerPoints1(4);
    std::vector<Point2f> cornerPoints2(4);
    cornerPoints1[0] = Point(0, 0);
    cornerPoints1[1] = Point(src1.cols, 0);
    cornerPoints1[2] = Point(src1.cols, src1.rows);
    cornerPoints1[3] = Point(0, src1.rows);
    perspectiveTransform(cornerPoints1, cornerPoints2, H);

    // draw the transformed object outline in the match image;
    // in matchesImg the points of src2 are shifted right by src1.cols
    line(matchesImg, cornerPoints2[0] + Point2f(src1.cols, 0), cornerPoints2[1] + Point2f(src1.cols, 0), Scalar(0, 255, 255), 4, 8, 0);
    line(matchesImg, cornerPoints2[1] + Point2f(src1.cols, 0), cornerPoints2[2] + Point2f(src1.cols, 0), Scalar(0, 255, 255), 4, 8, 0);
    line(matchesImg, cornerPoints2[2] + Point2f(src1.cols, 0), cornerPoints2[3] + Point2f(src1.cols, 0), Scalar(0, 255, 255), 4, 8, 0);
    line(matchesImg, cornerPoints2[3] + Point2f(src1.cols, 0), cornerPoints2[0] + Point2f(src1.cols, 0), Scalar(0, 255, 255), 4, 8, 0);

    // draw the transformed object outline on the original scene image
    line(src2, cornerPoints2[0], cornerPoints2[1], Scalar(0, 255, 255), 4, 8, 0);
    line(src2, cornerPoints2[1], cornerPoints2[2], Scalar(0, 255, 255), 4, 8, 0);
    line(src2, cornerPoints2[2], cornerPoints2[3], Scalar(0, 255, 255), 4, 8, 0);
    line(src2, cornerPoints2[3], cornerPoints2[0], Scalar(0, 255, 255), 4, 8, 0);

    imshow("output", matchesImg);
    imshow("output2", src2);
    waitKey();
    return 0;
}
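Note that SURF lives in the xfeatures2d module of the opencv_contrib repository, so OpenCV must be built with the contrib modules for this sample to compile; the two test images are assumed to exist at the paths shown above.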

At this point, you should have a deeper understanding of how to implement planar object recognition and perspective transformation in OpenCV 3/C++. You might as well try it out in practice.
