This article shows how to implement image smoke recognition with a MATLAB simulation. The material is meant to be easy to understand and clearly organized; work through it and you should be able to reproduce the approach yourself.
1. Brief introduction of the algorithms
1.1 c-means clustering algorithm
Cluster analysis groups data objects using only the information that describes the objects and their relationships. The goal is to make objects within a group similar (related) to each other and objects in different groups dissimilar (unrelated): the greater the intra-group similarity and the larger the gap between groups, the better the clustering.
In other words, clustering aims for high intra-class similarity and low inter-class similarity, so that the distance between classes is as large as possible and the distance between each sample and its class center is as small as possible. Here we choose the k-means clustering algorithm.
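As a hedged illustration of the k-means idea (assuming the Statistics and Machine Learning Toolbox is available; the toy data and variable names below are illustrative only, not part of the article's code), a minimal MATLAB sketch is:

% Minimal k-means sketch (requires the Statistics and Machine Learning Toolbox).
% X is a toy 2-D data set: two Gaussian blobs of 50 points each.
rng(1);                                   % fix the random seed for repeatability
X = [randn(50,2); randn(50,2) + 4];       % 100 samples, 2 features
k = 2;                                    % number of clusters
[idx, centers] = kmeans(X, k);            % idx: cluster label per sample, centers: k x 2 centroids
% Visualize the result: samples colored by cluster, centroids marked with 'x'.
gscatter(X(:,1), X(:,2), idx);
hold on;
plot(centers(:,1), centers(:,2), 'kx', 'MarkerSize', 12, 'LineWidth', 2);
hold off;

kmeans alternately assigns each sample to its nearest centroid and recomputes the centroids until the assignments stop changing, which is exactly the behavior exploited later for pixel-level segmentation.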
1.2 LBP algorithm
LBP (Local Binary Pattern) is an operator used to describe the local texture features of an image, and it has significant advantages such as rotation invariance and gray-scale invariance. It was first proposed by T. Ojala, M. Pietikäinen, and D. Harwood in 1994 for texture feature extraction; the extracted features describe the local texture of the image.
The original LBP operator is defined on a 3×3 window: the gray values of the 8 neighboring pixels are compared against the value of the central pixel, which acts as the threshold. If a neighboring pixel's value is greater than or equal to the central value, that position is marked 1, otherwise 0. The 8 points in the 3×3 neighborhood thus produce an 8-bit binary number (usually converted to a decimal number, the LBP code, with 256 possible values), which is the LBP value of the window's central pixel and reflects the texture information of that region.
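As a hedged sketch of this basic operator (not the full lbp.m listed at the end of the article; the function name basic_lbp and its layout are assumptions made for illustration), one way to compute the 3×3 LBP code of every interior pixel is:

% Sketch of the original 3x3 LBP operator: each interior pixel is compared
% with its 8 neighbors; neighbors >= center contribute one bit, giving a
% code in 0..255. Function and variable names here are illustrative only.
function codes = basic_lbp(gray)
    gray = double(gray);
    [h, w] = size(gray);
    codes = zeros(h-2, w-2);
    % Neighbor offsets in a fixed order and their bit weights 2^0..2^7.
    offs = [-1 -1; -1 0; -1 1; 0 1; 1 1; 1 0; 1 -1; 0 -1];
    for n = 1:8
        nb = gray(2+offs(n,1):h-1+offs(n,1), 2+offs(n,2):w-1+offs(n,2));
        codes = codes + (nb >= gray(2:h-1, 2:w-1)) * 2^(n-1);
    end
end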
1.3 PCA algorithm
PCA (Principal Component Analysis) is one of the most widely used data dimensionality reduction algorithms. Its steps are as follows (a minimal sketch follows the list):
1) Center the data by subtracting the mean; normalize it as well if needed.
2) Compute the covariance matrix.
3) Obtain the eigenvalues and eigenvectors via eigenvalue decomposition or singular value decomposition.
4) Sort the eigenvalues from largest to smallest and retain the first k eigenvectors.
5) Use these eigenvectors to construct the projection matrix.
6) Apply the projection matrix to obtain the reduced-dimensional data.
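A minimal MATLAB sketch of these six steps (eigen-decomposition variant; the toy data and variable names are assumptions, not the article's code) might look like:

% PCA sketch following the steps above; X is an n-by-d data matrix.
X  = randn(200, 10);                         % toy data: 200 samples, 10 features
Xc = X - repmat(mean(X,1), size(X,1), 1);    % 1) center (subtract the mean)
S  = cov(Xc);                                % 2) covariance matrix (d x d)
[V, D] = eig(S);                             % 3) eigenvalue decomposition
[lambda, order] = sort(diag(D), 'descend');  % 4) sort eigenvalues, largest first
k  = 2;
W  = V(:, order(1:k));                       % 4)-5) first k eigenvectors form the projection matrix
Y  = Xc * W;                                 % 6) project: n-by-k reduced data
retained = sum(lambda(1:k)) / sum(lambda);   % fraction of variance kept by the first k components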
1.4 SVM algorithm
Support vector machine (SVM) is a binary classification model. Its basic form is the linear classifier with the maximum margin in feature space; the margin is what distinguishes it from the perceptron. With the kernel trick, SVM becomes, in effect, a nonlinear classifier. The learning strategy of SVM is margin maximization, which can be formalized as a convex quadratic programming problem and is also equivalent to minimizing a regularized hinge loss; the SVM learning algorithm is the optimization algorithm that solves this convex quadratic program.
The basic idea of SVM learning is to find the separating hyperplane that correctly divides the training data and has the largest geometric margin. For a linearly separable data set there are infinitely many separating hyperplanes (which is all the perceptron guarantees), but the separating hyperplane with the largest geometric margin is unique. Figure 1-1 below illustrates the classification hyperplane.
Figure 1-1 Schematic diagram of the SVM algorithm
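As a hedged illustration using the same svmtrain/svmclassify API as the article's main program (an older Statistics Toolbox interface; newer MATLAB releases provide fitcsvm instead), a linear SVM on toy two-dimensional data could be trained like this; the toy data and variable names are assumptions:

% Toy two-class SVM sketch using the svmtrain/svmclassify pair also used in
% the main program below (older Statistics Toolbox API).
rng(0);
Xtrain = [randn(40,2) - 1; randn(40,2) + 1];          % two 2-D Gaussian classes
ytrain = [zeros(40,1); ones(40,1)];                   % 0 = negative, 1 = positive
svmStruct = svmtrain(Xtrain, ytrain, 'showplot', true);   % train and plot the separating hyperplane
Xtest = [randn(10,2) - 1; randn(10,2) + 1];
ypred = svmclassify(svmStruct, Xtest);                % predict labels for the test points
acc   = mean(ypred == [zeros(10,1); ones(10,1)]);     % fraction correctly classified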
2. Algorithm implementation
2.1 Smoke recognition algorithm flow
1) First, all images are preprocessed. Images containing smoke are treated as positive samples and images without smoke as negative samples: the smoke folder of the train set is renamed pos and the non-smoke folder neg; likewise, the smoke folder of the test set is renamed pos and the non-smoke folder neg. To process all images uniformly, the pictures in the pos and neg folders of train and test are named in the canonical format 0001.jpg, 0002.jpg, 0003.jpg, 0004.jpg, 0005.jpg, and so on. The picture names are extracted and stored in the text files pos_list.txt, neg_list.txt, pos_test_list.txt, and neg_test_list.txt, respectively (a hedged sketch of this step is given after the flow chart below). See figures 2-1 and 2-2.
Figure 2-1
Figure 2-2
2) The pixels of the training-set and test-set images are clustered with the c-means (k-means) clustering algorithm to achieve image segmentation.
3) LBP is used to extract features from the segmented training-set and test-set images.
4) Principal component analysis (PCA) is used to reduce the feature dimension of the training set and the test set.
5) The two-dimensional features obtained from the dimension reduction of the training set are used to train the SVM binary classification model.
6) Finally, the two-dimensional features obtained from the dimension reduction of the test set are used for classification and prediction.
The overall algorithm flow is shown in figure 2-3.
Figure 2-3 Flow chart of the algorithm
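For step 1), a possible sketch of how the image-name lists could be generated is shown below; the helper function write_image_list and the folder paths are assumptions, not part of the article's original code:

% Sketch: write the file names of every image in a folder into a list file.
% Folder and list names follow the naming convention described in step 1);
% the helper function itself is an assumption, not original code.
function write_image_list(folder, list_file)
    files = dir(fullfile(folder, '*.jpg'));       % e.g. 0001.jpg, 0002.jpg, ...
    fid = fopen(list_file, 'w');
    for i = 1:numel(files)
        fprintf(fid, '%s\n', files(i).name);      % one image name per line
    end
    fclose(fid);
end

% Example usage (paths are illustrative):
% write_image_list('train\pos', 'pos_list.txt');
% write_image_list('train\neg', 'neg_list.txt');
% write_image_list('test\pos_test', 'pos_test_list.txt');
% write_image_list('test\neg_test', 'neg_test_list.txt');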
2.2 implementation of c-means algorithm
Image segmentation divides an image into several non-overlapping regions based on features such as gray level, color, texture, and shape, so that these features are similar within a region and clearly different between regions. Regions with distinctive properties can then be extracted from the segmented image for further study. Image segmentation is the basis of image recognition: it separates the objects that reflect the real scene, occupy different regions, and have different characteristics, and turns them into digital features. Therefore, this paper uses the c-means clustering algorithm for image segmentation and noise filtering. When building the smoke recognition model, the smoke-free and smoky images are first segmented with c-means clustering.
In this paper, pixel clustering is applied to the preprocessed training-set and test-set images; the effects on one smoky image and one smoke-free image before and after segmentation are compared, as shown in figures 2-4 and 2-5.
Figure 2-4 comparison of smokeless image before and after segmentation
Figure 2-5 comparison of smoke images before and after segmentation
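The main program listed at the end of the article calls a helper named imkmeans whose source is not included; the following is only a hedged sketch of what such pixel-level clustering might look like using the built-in kmeans (the file name and the grayscale conversion are assumptions):

% Sketch of pixel clustering for segmentation (the original imkmeans helper
% is not listed in the article; this is an assumed equivalent).
I = double(imread('0001.jpg')) / 255;        % illustrative file name
if size(I,3) == 3
    I = rgb2gray(I);                         % work on a grayscale image
end
[h, w] = size(I);
pixels = reshape(I, h*w, 1);                 % one row per pixel
idx = kmeans(pixels, 3, 'Replicates', 3);    % cluster pixel intensities into 3 groups
clu_pic = reshape(idx, h, w) / 3;            % label image scaled into (0,1], as in the main program
imshow(clu_pic, []);                         % display the segmentation result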
2.3 implementation of LBP algorithm
In this paper, the LBP algorithm is used to extract features from the images after pixel clustering (3 clusters). The results before and after feature extraction are illustrated for one smoky image and one smoke-free image.
Figure 2-6 Comparison of the smoke-free image (3-cluster segmentation) before and after LBP feature extraction
Figure 2-7 Comparison of the smoky image (3-cluster segmentation) before and after LBP feature extraction
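In the main program at the end of the article, the lbp function (also listed there) is applied to each clustered image with its default 3×3, 8-neighbor settings and returns a 256-bin histogram that becomes one row of the feature matrix; a minimal usage sketch (the random stand-in image is an assumption) is:

% Sketch: extract a 256-bin LBP histogram from one clustered image and store
% it as a row of the feature matrix (mirrors the loops in the main program).
clu_pic = rand(128, 128);          % stand-in for a clustered image with values in (0,1]
feat = lbp(clu_pic);               % default 3x3, 8-neighbor LBP -> 1 x 256 histogram
data1 = zeros(1, 256);
data1(1, :) = feat;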
In this paper, the PCA algorithm reduces the dimensionality of the features extracted by HOG or LBP so that the data can be visualized. PCA retains most of the information of the original features; the proportion of information kept by the first k eigenvalues is the ratio of their sum to the sum of all eigenvalues: retained proportion = (λ1 + λ2 + … + λk) / (λ1 + λ2 + … + λn).
The features extracted by the LBP algorithm are reduced in dimension, and the first two components are taken for model training. The first two dimensions retain 98.75% of the information, as shown in figure 2-8 below.
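A hedged sketch of this reduction step, using the same princomp call as the article's main program (an older function; newer MATLAB releases use pca instead), with a random stand-in for the LBP feature matrix:

% Sketch: reduce the 256-dimensional LBP features to 2-D and report how much
% information the first two components retain (princomp as in the main program).
data1 = rand(300, 256);                            % stand-in for the LBP feature matrix
[COEFF, SCORE, latent] = princomp(data1);          % principal components of the training features
pcaData1 = SCORE(:, 1:2);                          % keep the first two components for the SVM
retained = 100 * sum(latent(1:2)) / sum(latent);   % percentage of information retained (98.75% in the article)
fprintf('first two components retain %.2f%% of the information\n', retained);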
2.4 implementation of SVM algorithm
After the above steps of image preprocessing, pixel clustering, LBP feature extraction, and PCA reduction to two dimensions, the two-dimensional feature vectors are used as input to train the SVM model, and the classification accuracy of the model on the training set is obtained.
Using the k-means + LBP + PCA + SVM pipeline, the model is trained many times and the results are averaged: the classification accuracy is 79% on the training set and 78% on the test set. The following picture shows the classification effect of the model on the training set.
3. Result analysis
After implementing the algorithm described in chapter 2, we obtain a complete binary classification model, which is used with the SVM to predict the pos and neg samples. Before prediction, the test-set images are preprocessed; pixel clustering with k-means (k = 3) then produces the final segmentation map; LBP features are extracted from the clustered images; and finally PCA reduces the dimensionality of the extracted features. The resulting two-dimensional feature vectors are fed to the model for classification and prediction. With LBP feature extraction, the accuracy on the training set and test set is 79% and 78%, respectively; the comparison shows that the generalization performance of the model is good.
Finally, the author should mention that the above method was adopted for smoke recognition because the task required that clustering, classification, and dimensionality reduction all be included. The author has also tried using LBP + SVM directly for smoke recognition, and the accuracy on the test set can reach 93%.
These are two different ways to solve the problem. The approach in this article can be called a pipeline, while directly using LBP + SVM can be called end-to-end (end2end); each has its own advantages and disadvantages. A pipeline decomposes a problem into several sub-problems, solves each one, and then chains them together. This is easy to implement, flexible, and explainable, but the multiple subtasks accumulate errors. End2end treats the problem as a whole and generally achieves higher performance than a pipeline, but the whole acts like a black box and is hard to explain. The latest research trend in deep learning is the end2end approach.
% Main program code based on LBP feature extraction
clc; clear;
k = 2; acc1 = 0; acc2 = 0; acc = 0;

%% Make the labels
ReadList1 = textread('pos_list.txt','%s','delimiter','\n');         % load positive sample list
sz1 = size(ReadList1);
label1 = ones(sz1(1),1);                                            % positive sample labels
ReadList2 = textread('neg_list.txt','%s','delimiter','\n');         % load negative sample list
sz2 = size(ReadList2);
label2 = zeros(sz2(1),1);                                           % negative sample labels
label_train = [label1',label2'];                                    % training set labels
ReadList_pos = textread('pos_test_list.txt','%s','delimiter','\n'); % load test positive sample list
sz_pos = size(ReadList_pos);
label_pos = ones(sz_pos(1),1);                                      % positive sample labels
ReadList_neg = textread('neg_test_list.txt','%s','delimiter','\n'); % load test negative sample list
sz_neg = size(ReadList_neg);
label_neg = zeros(sz_neg(1),1);                                     % negative sample labels
label_test = [label_pos',label_neg'];                               % test set labels
total_trainnum = length(label_train);
total_testnum  = length(label_test);
data1 = zeros(total_trainnum,256);
data2 = zeros(total_testnum,256);

%% Extract features
% Read the positive samples of the training set and compute the LBP features
for i = 1:sz1(1)
    name = char(ReadList1(i,1));
    image1 = imread(strcat('F:\pattern recognition matlab program\pattern recognition task\yanwujiance\pos\', name));
    I = double(image1)/255;
    clu_kmeans = imkmeans(I,3);
    clu_pic = clu_kmeans/3;
    lbps = lbp(clu_pic);
    data1(i,:) = lbps;
end
% Read the negative samples of the training set and compute the LBP features
for j = 1:sz2(1)
    name = char(ReadList2(j,1));
    image2 = imread(strcat('F:\pattern recognition matlab program\pattern recognition task\yanwujiance\neg\', name));
    I = double(image2)/255;
    clu_kmeans = imkmeans(I,3);
    clu_pic = clu_kmeans/3;
    lbps = lbp(clu_pic);
    data1(sz1(1)+j,:) = lbps;
end
% Read the positive samples of the test set and compute the LBP features
for m = 1:sz_pos(1)
    test_name = char(ReadList_pos(m,1));
    image3 = imread(strcat('F:\pattern recognition matlab program\pattern recognition task\yanwujiance\test\pos_test\', test_name));
    I = double(image3)/255;
    clu_kmeans = imkmeans(I,3);
    clu_pic = clu_kmeans/3;
    lbpst = lbp(clu_pic);
    data2(m,:) = lbpst;
end
% Read the negative samples of the test set and compute the LBP features
for n = 1:sz_neg(1)
    test_name = char(ReadList_neg(n,1));
    image4 = imread(strcat('F:\pattern recognition matlab program\pattern recognition task\yanwujiance\test\neg_test\', test_name));
    I = double(image4)/255;
    clu_kmeans = imkmeans(I,3);
    clu_pic = clu_kmeans/3;
    lbpst = lbp(clu_pic);
    data2(sz_pos(1)+n,:) = lbpst;
end

load data1
load data2
load svmStruct3

%% Data dimensionality reduction
[COEFF, SCORE, latent] = princomp(data1(:,:));   % reduce the training set features
pcaData1 = SCORE(:,1:k);
latent = 100*latent/sum(latent);
for i = 1:8
    latent(i+1) = latent(i) + latent(i+1);       % cumulative information of the leading eigenvalues
end
plot(latent(1:8))                                % plot the information contained in the first 8 eigenvalues
x0 = bsxfun(@minus, data2, mean(data2,1));
pcaData2_sw = x0*COEFF(:,:);
pcaData2 = pcaData2_sw(:,1:k);

%% Evaluation method: cross-validation
[train, test] = crossvalind('holdOut', label_train);                % randomly split training and hold-out parts
cp = classperf(label_train);                                        % evaluate classifier performance
svmStruct3hog = svmtrain(pcaData1(train,1:k), label_train(train));  % train the SVM classifier
% svmtrain returns the trained structure svmStruct3hog; use "save svmStruct3hog" to save it
cros = svmclassify(svmStruct3hog, pcaData1(test,1:k));
classperf(cp, cros, test);
cp.CorrectRate

%% Test
load svmStruct3hog
for i = 1:sz_pos(1)
    classes = svmclassify(svmStruct3, pcaData2(i,:));
    if classes == 1
        acc1 = acc1+1;         % count correctly classified positive samples
    end
end
for j = sz_pos(1)+1:1383       % upper bound hard-coded in the original code
    classes = svmclassify(svmStruct3, pcaData2(j,:));
    if classes ~= 1
        acc2 = acc2+1;         % count correctly classified negative samples
    end
end
acc = acc1+acc2;
fprintf('accuracy is: %5.2f%%\n', acc/(sz_neg(1)+sz_pos(1))*100);   % prediction accuracy

% LBP feature extraction code
function result = lbp(varargin)   % image, (radius, neighbors, mapping, mode)
% Check number of input arguments.
error(nargchk(1,5,nargin));

image = varargin{1};
d_image = double(image);

if nargin == 1
    spoints = [-1 -1; -1 0; -1 1; 0 -1; 0 1; 1 -1; 1 0; 1 1];
    neighbors = 8;
    mapping = 0;
    mode = 'h';
end

if (nargin == 2) && (length(varargin{2}) == 1)
    error('Input arguments');
end

if (nargin > 2) && (length(varargin{2}) == 1)
    radius = varargin{2};
    neighbors = varargin{3};
    spoints = zeros(neighbors,2);
    % Angle step.
    a = 2*pi/neighbors;
    for i = 1:neighbors
        spoints(i,1) = -radius*sin((i-1)*a);
        spoints(i,2) =  radius*cos((i-1)*a);
    end
    if (nargin >= 4)
        mapping = varargin{4};
        if (isstruct(mapping) && mapping.samples ~= neighbors)
            error('Incompatible mapping');
        end
    else
        mapping = 0;
    end
    if (nargin >= 5)
        mode = varargin{5};
    else
        mode = 'h';
    end
end

if (nargin > 1) && (length(varargin{2}) > 1)
    spoints = varargin{2};
    neighbors = size(spoints,1);
    if (nargin >= 3)
        mapping = varargin{3};
        if (isstruct(mapping) && mapping.samples ~= neighbors)
            error('Incompatible mapping');
        end
    else
        mapping = 0;
    end
    if (nargin >= 4)
        mode = varargin{4};
    else
        mode = 'h';
    end
end

% Determine the dimensions of the input image.
[ysize, xsize] = size(image);

miny = min(spoints(:,1));
maxy = max(spoints(:,1));
minx = min(spoints(:,2));
maxx = max(spoints(:,2));

% Block size: each LBP code is computed within a block of size bsizey*bsizex
bsizey = ceil(max(maxy,0)) - floor(min(miny,0)) + 1;
bsizex = ceil(max(maxx,0)) - floor(min(minx,0)) + 1;

% Coordinates of origin (0,0) in the block
origy = 1 - floor(min(miny,0));
origx = 1 - floor(min(minx,0));

% Minimum allowed size for the input image depends
% on the radius of the used LBP operator.
if (xsize < bsizex || ysize < bsizey)
    error('Too small input image. Should be at least (2*radius+1) x (2*radius+1)');
end

% Calculate dx and dy
dx = xsize - bsizex;
dy = ysize - bsizey;

% Fill the center pixel matrix C.
C = image(origy:origy+dy, origx:origx+dx);
d_C = double(C);

bins = 2^neighbors;

% Initialize the result matrix with zeros.
result = zeros(dy+1, dx+1);

% Compute the LBP code image
for i = 1:neighbors
    y = spoints(i,1) + origy;
    x = spoints(i,2) + origx;
    % Calculate floors, ceils and rounds for the x and y.
    fy = floor(y); cy = ceil(y); ry = round(y);
    fx = floor(x); cx = ceil(x); rx = round(x);
    % Check if interpolation is needed.
    if (abs(x - rx) < 1e-6) && (abs(y - ry) < 1e-6)
        % Interpolation is not needed, use original datatypes
        N = image(ry:ry+dy, rx:rx+dx);
        D = N >= C;
    else
        % Interpolation needed, use double type images
        ty = y - fy;
        tx = x - fx;
        % Calculate the interpolation weights.
        w1 = (1-tx)*(1-ty);
        w2 = tx*(1-ty);
        w3 = (1-tx)*ty;
        w4 = tx*ty;
        % Compute interpolated pixel values
        N = w1*d_image(fy:fy+dy,fx:fx+dx) + w2*d_image(fy:fy+dy,cx:cx+dx) + ...
            w3*d_image(cy:cy+dy,fx:fx+dx) + w4*d_image(cy:cy+dy,cx:cx+dx);
        D = N >= d_C;
    end
    % Update the result matrix.
    v = 2^(i-1);
    result = result + v*D;
end

% Apply mapping if it is defined
if isstruct(mapping)
    bins = mapping.num;
    for i = 1:size(result,1)
        for j = 1:size(result,2)
            result(i,j) = mapping.table(result(i,j)+1);
        end
    end
end

if (strcmp(mode,'h') || strcmp(mode,'hist') || strcmp(mode,'nh'))
    % Return with LBP histogram if mode equals 'hist'.
    result = hist(result(:), 0:(bins-1));
    if (strcmp(mode,'nh'))
        result = result/sum(result);
    end
else
    % Otherwise return a matrix of unsigned integers
    if ((bins-1) <= intmax('uint8'))
        result = uint8(result);
    elseif ((bins-1) <= intmax('uint16'))
        result = uint16(result);
    else
        result = uint32(result);
    end
end