2025-04-06 Update. From: SLTechnology News&Howtos (shulou)
This article looks at image segmentation based on adaptive saliency in the context of big data. It analyzes the topic from a practical point of view; I hope you get something out of it.
Preface
Usually, when we look at a picture, our attention settles on a single focal point. This could be a person, a building, or even a bucket. The remaining areas, although equally sharp, attract little attention because of their monotonous color or smooth texture. Given such an image, we want to segment the object of interest away from the rest. Below is an example of a salient image; what follows discusses how to segment this kind of image, a task also known as salient image segmentation.
An example of salient images. The bucket (left) and the person (right) are the objects of interest.
The method originated from the wish to generate the trimap of an image automatically. A trimap is an image mask that, when used together with a matting algorithm, segments the image and resolves the fine detail between foreground and background. A trimap typically contains white areas that mark definite foreground, black areas that mark definite background, and gray areas that mark uncertain regions, as shown in the figure below.
Trimap example
The problem with most matting algorithms is that they expect the trimap to be provided by the user, which is very time-consuming. This article draws on two papers that attempt automatic trimap generation (both are listed at the end). The first uses a fairly simple, easy-to-implement method. Unfortunately, that approach is not fully automatic, because the user must supply a rectangular region for the GrabCut algorithm. The second paper uses saliency to predict the region of interest; however, its saliency pipeline is complex, combining the results of three different saliency algorithms, one of which relies on a convolutional neural network, something best avoided when ease of implementation is the goal.
Setting aside the need for a manually supplied rectangle, the first paper produces the better segmentation results. If the rectangle for GrabCut can be generated automatically by following the principle of the second paper, the problem of fully automatic segmentation is solved.
Method
As with most forms of image segmentation, the goal is to binarize the image into a region of interest, and the method introduced here is no different. First, roughly locate the object of interest: apply a Gaussian blur to the image, then generate superpixels with an average size of about 15 pixels in the blurred image. Superpixel algorithms decompose an image according to the color and spatial distance of the values within pixel regions; specifically, the simple linear iterative clustering (SLIC) algorithm is used. The result looks like the figure below.
The result of superpixel segmentation of the bucket and the person.
Superpixels decompose the image into roughly homogeneous areas. One advantage is that they allow regional generalization: we can assume that most pixels within a superpixel share similar attributes.
While the superpixels are being computed, the saliency map of the image is also calculated, using two different techniques. The first uses OpenCV's built-in fine-grained saliency. The second takes the mean value of the fine-grained saliency map, subtracts that mean from the Gaussian-blurred version of the image, and then takes the absolute value of the result.
The images below highlight the regions of interest. The map produced by fine-grained saliency is softer and mainly outlines the boundary of the salient object. The other method also captures the object's interior, but produces more noise than the fine-grained method; that noise must be removed afterwards.
The first saliency result.
The second saliency result.
To binarize the image, each superpixel generated from the color image is iterated over. If the median pixel value of the superpixel's region in the saliency map exceeds a threshold T1, the whole superpixel is set to white; otherwise it remains black. T1 is chosen by the user; in general, it is set to 25%-30% of the maximum pixel value in the saliency map.
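The per-superpixel binarization step can be sketched as follows, taking the saliency map and the SLIC label map as NumPy arrays and expressing T1 as a fraction of the maximum saliency value:

```python
import numpy as np

def binarize_by_superpixel(saliency, labels, frac=0.3):
    """Set a whole superpixel to white if the median saliency value
    inside it exceeds T1 = frac * max(saliency)."""
    t1 = frac * saliency.max()
    binary = np.zeros(saliency.shape, dtype=np.uint8)
    for label in np.unique(labels):
        region = labels == label
        if np.median(saliency[region]) > t1:
            binary[region] = 255
    return binary
```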
After binarization, the image is dilated, depending on which saliency technique was used. With the first method, the image is dilated with a kernel of twice the average superpixel size. No dilation is performed with the second method, because the heavier noise in its map makes dilation risky. The results are shown below.
The last steps also depend on which saliency method was used. In both cases, the largest white region is extracted: find the contours in the image, select the contour with the largest area, and fit a bounding box to it.
In practice, the first saliency method tends to fragment the region. So after the bounding box is generated, every other white region that overlaps the box but does not belong to the largest region is absorbed, and the box's boundary is enlarged to include those regions. The second saliency method does not need this step; its largest region typically already covers the object of interest.
The final step is to pass the resulting bounding box to the GrabCut algorithm. GrabCut is a common segmentation method that separates what is definitely background from the foreground; here OpenCV's built-in GrabCut function is used directly. The result is shown below.
Result
The choice between the two saliency methods affects the results. The first method copes better with noisy images and does not cause the segmentation to spill over, as the second method sometimes does. However, if the object is elongated or has tendrils, those parts are often disconnected from the rest of the segmentation.
Below are example results of segmenting more images with the two methods.
The above describes adaptive-saliency-based image segmentation in the big data setting. If you have similar questions, the analysis above may help; to learn more, you are welcome to follow the industry information channel.