How to use OpenCV to realize face replacement function in C #

2025-02-28 Update From: SLTechnology News&Howtos


Many readers are unsure how to implement face replacement in C# with OpenCV. This article walks through the problem step by step; by the end, you should be able to build it yourself.

Image acquisition

To solve this problem in C#, we will use the Accord library, OpenCvSharp3, and DLib. Accord is ideal for building computer vision applications. OpenCvSharp3 is a C# wrapper for OpenCV, and we will use several of its image conversion functions. In the world of computer vision, DLib is the go-to library for face detection. Although DLib is written entirely in C++, DlibDotNet wraps the whole library for use from C#.

We first need two images: the original selfie (featuring Bradley Cooper) and a solo photo containing the new face:

Original selfie

Single photo

Note: you can use the following code to swap faces with anyone in the selfie, but it works best when replacing Bradley Cooper in the two images above, because both people are looking in the same direction and their faces are quite similar.

Boundary point detection

Next we will use the Dlib library to detect faces. The Dlib face detector identifies 68 landmark points covering the jaw line, eyebrows, nose, eyes and lips. These points are predetermined and carry specific labels, as shown in the following figure.
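For reference, the 68 labels follow the standard iBUG 300-W layout that the shape_predictor_68_face_landmarks.dat model was trained on. The index-to-region mapping can be sketched in a few lines (Python for brevity; the region names are descriptive labels of our own, not identifiers from any Dlib API):

```python
# Standard 68-point (iBUG 300-W) landmark layout used by
# shape_predictor_68_face_landmarks.dat.
LANDMARK_REGIONS = {
    "jaw": range(0, 17),
    "right_eyebrow": range(17, 22),
    "left_eyebrow": range(22, 27),
    "nose": range(27, 36),
    "right_eye": range(36, 42),
    "left_eye": range(42, 48),
    "outer_lip": range(48, 60),
    "inner_lip": range(60, 68),
}

def region_of(index):
    """Name the facial region a landmark index belongs to."""
    for name, indices in LANDMARK_REGIONS.items():
        if index in indices:
            return name
    raise ValueError(f"landmark index out of range: {index}")

print(region_of(30))  # → nose
```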

Dlib runs very fast: computing all of these points takes only about 1 ms, so they can even be tracked in real time. The following C# code detects all the landmark points on a face in a picture:

/// <summary>
/// Process the original selfie and produce the face-swapped image.
/// </summary>
/// <param name="image">The original selfie image.</param>
/// <param name="newImage">The new face to insert into the selfie.</param>
/// <returns>A new image with faces swapped.</returns>
private Bitmap ProcessImage(Bitmap image, Bitmap newImage)
{
    // set up the Dlib face detector and shape predictor
    using (var fd = FrontalFaceDetector.GetFrontalFaceDetector())
    using (var sp = new ShapePredictor("shape_predictor_68_face_landmarks.dat"))
    {
        // convert the image to dlib format
        var img = image.ToArray2D();

        // find Bradley's face in the image
        var faces = fd.Detect(img);
        var bradley = faces[0];

        // get Bradley's landmark points
        var bradleyShape = sp.Detect(img, bradley);
        var bradleyPoints = (from i in Enumerable.Range(0, (int)bradleyShape.Parts)
                             let p = bradleyShape.GetPart((uint)i)
                             select new OpenCvSharp.Point(p.X, p.Y)).ToArray();

        // remainder of code goes here...
    }
}

Detection result of boundary mark

In this code, we first instantiate a FrontalFaceDetector and a ShapePredictor. Two points are worth noting here:

In Dlib, detecting faces and detecting landmark points ("detecting shapes") are two different operations with very different performance. Face detection is relatively slow, while shape detection takes only about 1 millisecond and can run in real time.

ShapePredictor is actually a machine learning model loaded from a pre-trained data file. We could also retrain a ShapePredictor on any object we like: human faces, cat or dog faces, plants, and so on.

Next, Dlib uses a different image format from the .NET framework, so we need to convert the selfie before running the code above. The ToArray2D method converts a bitmap into a 2D array of RgbPixel structs that Dlib can work with.

After the format conversion, we use Detect() to find all the faces in the image. We pick out Bradley Cooper's face for further use; in this test it happens to be faces[0]. The detector returns a rectangle identifying the position of Bradley's face in the picture.

Next, we call Detect() on the ShapePredictor, providing the selfie and the face rectangle. The return value is a shape object whose GetPart() method lets us retrieve the coordinates of every landmark point.

The rest of the face swap will be done in OpenCV, which has its own point structure, so at the end of the code we convert the Dlib points to OpenCV points.

Convex hull extraction

Next, we need to compute the convex hull of the landmark points. Put simply, the convex hull connects the outermost points into a smooth boundary around the face.

The built-in function of OpenCV can help us calculate the convex hull:

// get convex hull of Bradley's points
var hull = Cv2.ConvexHullIndices(bradleyPoints);
var bradleyHull = from i in hull select bradleyPoints[i];

// the remaining code goes here...

The ConvexHullIndices() method computes the indices of all the points on the convex hull, so all we need to do is run a LINQ query to enumerate those boundary points for Bradley Cooper.
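For readers curious what a convex hull computation actually does, the idea is easy to sketch without any library. Here is a minimal implementation (Andrew's monotone chain, in Python for brevity; OpenCV's internal algorithm may differ, but the result is the same set of outermost points):

```python
def convex_hull(points):
    """Andrew's monotone chain: return hull vertices in counter-clockwise
    order, starting from the lowest-leftmost point. Interior points are
    dropped, leaving only the outermost boundary."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); positive = left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

# A square with one interior point: the interior point is not on the hull.
pts = [(0, 0), (4, 0), (4, 4), (0, 4), (2, 2)]
print(convex_hull(pts))  # → [(0, 0), (4, 0), (4, 4), (0, 4)]
```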

The following picture shows the convex hull drawn over Bradley's face.

After completing the above, we need to repeat these steps for the face in the single photo:

// find landmark points in the face to swap
var imgMark = newImage.ToArray2D();
var faces2 = fd.Detect(imgMark);
var mark = faces2[0];
var markShape = sp.Detect(imgMark, mark);
var markPoints = (from i in Enumerable.Range(0, (int)markShape.Parts)
                  let p = markShape.GetPart((uint)i)
                  select new OpenCvSharp.Point(p.X, p.Y)).ToArray();

// get the convex hull of the new face, reusing Bradley's hull indices
// so that both hulls contain corresponding points
var hull2 = Cv2.ConvexHullIndices(bradleyPoints);
var markHull = from i in hull2 select markPoints[i];

// the remaining code goes here...

The code here is nearly identical, except that it operates on newImage instead of image. The following is the convex hull detected in the solo photo.

So far, we have obtained two convex hulls: the first around Bradley's face, the second around the face in the solo photo.

Delaunay triangle deformation

There is no single linear transformation that maps the hull point coordinates in the solo photo onto Bradley's. If we tried to move all the pixels directly, we would have to use a slow nonlinear warp. However, by first covering Bradley's face with Delaunay triangles and then warping each triangle separately, the whole operation becomes linear (and fast!).

So we will compute Delaunay triangles for both faces. After obtaining the triangles in the solo photo, we deform them so that each one exactly matches the corresponding triangle on Bradley's face.

Delaunay triangulation creates a triangular mesh that completely covers Bradley's face, each triangle formed by three specific points on the convex hull. In the figure, the blue lines form the Delaunay triangles:

Next, we deform each Delaunay triangle in the solo photo to line up with the corresponding triangle on Bradley's face, so the new face fits the selfie. Each triangle warp in this process is a linear (affine) transformation, so fast linear matrix operations can move the pixels inside each triangle.
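The reason each triangle warp is linear is that three point correspondences determine a unique 2×3 affine matrix. The sketch below (Python for brevity, plain Gaussian elimination) solves for that matrix from two triangles and checks that it carries the vertices across; OpenCV's GetAffineTransform computes the same thing:

```python
def affine_from_triangles(src, dst):
    """Solve for the 2x3 affine matrix mapping triangle src onto triangle dst.
    Each output row (a, b, c) satisfies a*x + b*y + c = x' (or y') for the
    three vertex pairs, giving two 3x3 linear systems."""
    def solve3(A, b):
        # Gaussian elimination with partial pivoting on a 3x3 system
        m = [row[:] + [bi] for row, bi in zip(A, b)]
        for i in range(3):
            piv = max(range(i, 3), key=lambda r: abs(m[r][i]))
            m[i], m[piv] = m[piv], m[i]
            for r in range(3):
                if r != i:
                    f = m[r][i] / m[i][i]
                    m[r] = [a - f * b for a, b in zip(m[r], m[i])]
        return [m[i][3] / m[i][i] for i in range(3)]

    A = [[x, y, 1] for x, y in src]
    row_x = solve3(A, [x for x, _ in dst])
    row_y = solve3(A, [y for _, y in dst])
    return [row_x, row_y]

def apply(M, p):
    """Apply a 2x3 affine matrix to a point."""
    x, y = p
    return (M[0][0] * x + M[0][1] * y + M[0][2],
            M[1][0] * x + M[1][1] * y + M[1][2])

src = [(0, 0), (1, 0), (0, 1)]
dst = [(2, 3), (4, 3), (2, 6)]   # scale x by 2, y by 3, shift by (2, 3)
M = affine_from_triangles(src, dst)
print([apply(M, p) for p in src])  # → [(2.0, 3.0), (4.0, 3.0), (2.0, 6.0)]
```

Because the same matrix moves every interior point consistently, warping a whole triangle of pixels is just this one matrix multiply per pixel.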

In the following figure, we warp the Delaunay triangle formed by landmark points 3, 14 and 24 in the solo photo to fit Bradley's face; these three points map exactly onto Bradley's landmark points 3, 14 and 24:

The code to perform Delaunay triangulation and deformation in C# is as follows:

// calculate Delaunay triangles
var triangles = Utility.GetDelaunayTriangles(bradleyHull);

// get transformations to warp the new face onto Bradley's face
var warps = Utility.GetWarps(markHull, bradleyHull, triangles);

// apply the warps to the new face to prep it for insertion into the main image
var warpedImg = Utility.ApplyWarps(newImage, image.Width, image.Height, warps);

// the remaining code goes here...

We use a convenience class, Utility, which contains a GetDelaunayTriangles method for computing the triangles, a GetWarps method for computing the affine warp of each triangle, and an ApplyWarps method for fitting the solo-photo face onto Bradley's facial convex hull.

Now the face in the solo photo, stored in warpedImg, has been fully warped to match Bradley:

The solo photo and Bradley's convex hull

Color conversion

There is one more thing to deal with. The skin tone of the person in the solo photo is not the same as Bradley's. If we simply pasted the warped image onto the selfie, we would see an abrupt color change at its edges:

To solve this problem, we will use OpenCV's SeamlessClone function, which blends one image into another and smooths away any color differences.

This is the method of seamless cloning in C#:

// prepare a mask for the warped image
var mask = new Mat(image.Height, image.Width, MatType.CV_8UC3);
mask.SetTo(0);
Cv2.FillConvexPoly(mask, bradleyHull, new Scalar(255, 255, 255), LineTypes.Link8);

// find the center of the warped face
var r = Cv2.BoundingRect(bradleyHull);
var center = new OpenCvSharp.Point(r.Left + r.Width / 2, r.Top + r.Height / 2);

// blend the warped face into the main image
var selfie = BitmapConverter.ToMat(image);
var blend = new Mat(selfie.Size(), selfie.Type());
Cv2.SeamlessClone(warpedImg, selfie, mask, center, blend, SeamlessCloneMethods.NormalClone);

// return the modified main image
return BitmapConverter.ToBitmap(blend);

Using the SeamlessClone method requires us to do two things:

First, it needs a mask telling it which pixels to blend. Since we already have Bradley's facial convex hull, the FillConvexPoly method can paint the required mask.

Second, it needs a center point. At the center, the blended pixel keeps 100% of the solo photo's skin color; the farther a pixel is from the center, the closer it shifts toward Bradley's complexion. We get the bounding box of Bradley's face by calling BoundingRect, and take the center of that box as an estimate of the face's center.
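As a rough illustration of that center weighting, here is a toy radial alpha blend on single-channel images (Python for brevity). Note this is only an intuition aid: SeamlessClone actually solves a Poisson equation over the masked region rather than alpha-blending.

```python
import math

def radial_blend(src, dst, center, radius):
    """Toy per-pixel blend for single-channel images: the source keeps full
    weight at the center and fades linearly to zero weight at `radius`.
    (OpenCV's SeamlessClone solves a Poisson equation instead; this only
    illustrates the 'center keeps the source colour' intuition.)"""
    cx, cy = center
    out = [row[:] for row in dst]
    for y in range(len(dst)):
        for x in range(len(dst[0])):
            d = math.hypot(x - cx, y - cy)
            a = max(0.0, 1.0 - d / radius)
            out[y][x] = a * src[y][x] + (1.0 - a) * dst[y][x]
    return out

src = [[100.0] * 5 for _ in range(5)]   # warped face: uniform bright tone
dst = [[0.0] * 5 for _ in range(5)]     # selfie: uniform dark tone
blended = radial_blend(src, dst, center=(2, 2), radius=2.0)
print(blended[2][2], blended[2][1], blended[2][0])  # → 100.0 50.0 0.0
```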

Then we call SeamlessClone and store the result in the blend variable; the final result looks like this:

Other

At this point, you may be wondering why we need the convex hull at all, rather than triangulating over all 68 landmark points directly.

The reason is simple. Compare Bradley's selfie with the solo photo: one person is smiling and the other is not. If we used all the landmark points, the warp would try to reshape the entire face to match Bradley's lips, nose and eyes. It would force open the lips of the person in the solo photo, making them smile and show their teeth.

That result would not look good.

Using only the convex hull points, the program can still warp the chin of the person in the solo photo to match Bradley's jaw line, but it leaves their eyes, nose and mouth alone. The expression therefore stays the same in the new image, which looks much more natural.

Finally, we apply an Instagram-style filter to further smooth out any remaining color differences:

After reading the above, have you mastered how to implement face replacement in C# with OpenCV? Thank you for reading!
