2025-03-29 Update From: SLTechnology News&Howtos
Shulou (Shulou.com), 05/31 Report --
This article explains the principles behind the graph convolutional network (GCN) in Python machine learning. Many readers have questions about how GCN works, so this piece walks through the underlying graph signal processing theory, the mathematical definition of graph convolution, and a PyTorch implementation. Follow along to learn!
1. Graph Signal Processing Knowledge
Graph convolutional neural networks are derived from, and build on, graph signal processing. Understanding graph signal processing is therefore the foundation for understanding graph convolutional networks.
1.1 Laplacian Matrix of Graphs
The Laplacian matrix is an important matrix that reflects the structure of a graph, and it is a core component of graph convolutional networks.
1.1.1 Definition and Examples of Laplacian Matrices
Example: given a graph's adjacency matrix A and degree matrix D, the Laplacian matrix follows directly from the definition above.
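The definition and worked example were images in the original and did not survive extraction; the standard definition, illustrated with a small path-graph example of my own, is:

```latex
L = D - A, \qquad D_{ii} = \sum_{j} A_{ij}
% Example: the path graph 1 - 2 - 3
A = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix}, \quad
D = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \quad
L = D - A = \begin{pmatrix} 1 & -1 & 0 \\ -1 & 2 & -1 \\ 0 & -1 & 1 \end{pmatrix}
```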
1.1.2 Regularized Laplacian Matrix
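The formula here was also an image in the original; the standard definition of the regularized (symmetric normalized) Laplacian is:

```latex
L_{\mathrm{sym}} = D^{-1/2} L D^{-1/2} = I - D^{-1/2} A D^{-1/2}
```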
1.1.3 Properties of Laplacian matrices
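The properties themselves were an image in the original; the standard properties of the Laplacian matrix, relied on in the rest of this section, are:

```latex
% 1. Real symmetric, hence orthogonally diagonalizable:
L = L^{\top}
% 2. Positive semi-definite (the quadratic form sums squared differences over edges):
x^{\top} L x = \sum_{(i,j)\in E} (x_i - x_j)^2 \ge 0
% 3. Eigenvalues are non-negative, and the smallest is 0 (eigenvector: all-ones):
0 = \lambda_1 \le \lambda_2 \le \dots \le \lambda_N, \qquad L\mathbf{1} = \mathbf{0}
% For L_sym, the eigenvalues additionally lie in [0, 2].
```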
1.2 Fourier Transform on Graphs
The Fourier transform is a signal-analysis method: it decomposes a signal into its frequency components and can also synthesize the signal from those components. It converts a signal from the time domain to the frequency domain, giving an alternative, frequency-domain view of signal processing. For a graph structure, a graph Fourier transform (GFT) can be defined analogously. For any signal x on a graph G, its Fourier transform is expressed as:
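The formula itself was an image in the original; using the eigendecomposition of the Laplacian, the standard GFT reads:

```latex
L = V \Lambda V^{\top}, \qquad
\hat{x}_k = v_k^{\top} x \;\;(k = 1,\dots,N), \qquad
\hat{x} = V^{\top} x, \qquad
x = V \hat{x} \;\;\text{(inverse GFT)}
```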
From the linear-algebra perspective, v1, ..., vN form a complete orthonormal basis of the N-dimensional feature space, so any graph signal on G can be expressed as a linear combination of these basis vectors; the coefficients of the graph signal on each Fourier basis vector are its Fourier coefficients.
Returning to the Laplacian matrix introduced earlier, the total variation describes the smoothness of a graph signal:
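The total-variation formula was an image in the original; combining the quadratic form of L with the GFT gives the standard identity:

```latex
\mathrm{TV}(x) = x^{\top} L x
              = \sum_{(i,j)\in E} (x_i - x_j)^2
              = \sum_{k=1}^{N} \lambda_k \hat{x}_k^2
```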
The total variation of a graph signal is thus a non-negative linear combination of the Laplacian's eigenvalues, weighted by the squared Fourier coefficients. It is minimized when the graph signal coincides with the eigenvector of the smallest eigenvalue. Since total variation measures the overall smoothness of the signal, the eigenvalues can be interpreted as frequencies: the smaller the eigenvalue, the lower the frequency, and the more slowly the corresponding Fourier basis vector varies, meaning the signal values of neighboring nodes tend to agree.
The set of all Fourier coefficients of a graph signal is called its spectrum. The frequency-domain view considers both the signal itself and the structural properties of the graph from a global perspective.
1.3 Graph Signal Filters
A graph filter enhances or attenuates the frequency components of a graph signal. The core of a graph filter operator is its frequency response matrix, which determines the filtering effect.
According to the filtering effect, filters are classified as low-pass, high-pass, or band-pass:
Low-pass filter: keeps the low-frequency components, focusing on the smooth part of the signal;
High-pass filter: keeps the high-frequency components, focusing on the rapidly varying part of the signal;
Band-pass filter: keeps a specific frequency band.
A polynomial in the Laplacian matrix forms a graph filter H = h0·I + h1·L + ... + hK·L^K.
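As an illustration (my own sketch, not code from the article), a quadratic polynomial filter H = h0·I + h1·L + h2·L² with hand-picked coefficients acts as a low-pass filter on a toy path graph: applying it to an alternating ("high-frequency") signal reduces the total variation xᵀLx.

```python
import numpy as np

# Toy path graph on 4 nodes (illustrative example, not from the article)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
D = np.diag(A.sum(axis=1))
L = D - A  # combinatorial Laplacian

# Polynomial graph filter H = h0*I + h1*L + h2*L^2; coefficients chosen so the
# frequency response h(lam) = (1 - 0.2*lam)^2 attenuates large eigenvalues
h = [1.0, -0.4, 0.04]
H = sum(c * np.linalg.matrix_power(L, k) for k, c in enumerate(h))

x = np.array([1.0, -1.0, 1.0, -1.0])  # alternating, high-frequency signal
y = H @ x                             # filtered (smoothed) signal

def tv(s):
    # Total variation x^T L x: measures how "rough" the signal is on the graph
    return float(s @ L @ s)

print(tv(x), tv(y))  # total variation drops after low-pass filtering
```

Each application of H mixes every node's value with its neighbors', which is exactly the smoothing effect the low-pass description above refers to.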
2. Graph Convolutional Neural Networks
2.1 Mathematical Definition
The mathematical definition of graph convolution is:
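The formula was an image in the original; the standard spectral definition of graph convolution, with a learnable frequency response diag(θ), is:

```latex
x *_G g_{\theta} = V \,\mathrm{diag}(\theta)\, V^{\top} x,
\qquad \theta \in \mathbb{R}^{N}
```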
The formula above has a major problem: there are N learnable parameters, one per node of the whole graph, so it easily overfits on large-scale data.
Further simplification: replace the trainable parameter matrix with the polynomial expansion of the Laplacian matrix introduced earlier, reducing the parameter count from N to the polynomial order K + 1.
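The simplification steps were shown as images in the original; the standard derivation (the Kipf-Welling formulation) is:

```latex
% Polynomial approximation of the frequency response:
\mathrm{diag}(\theta) \approx \sum_{k=0}^{K} \theta_k \Lambda^{k}
\quad\Longrightarrow\quad
V\,\mathrm{diag}(\theta)\,V^{\top} \approx \sum_{k=0}^{K} \theta_k L^{k}
% First-order (K = 1) simplification with renormalization gives the GCN layer:
X' = \sigma\!\left(\tilde{D}^{-1/2}\,\tilde{A}\,\tilde{D}^{-1/2}\, X W\right),
\qquad \tilde{A} = A + I, \quad \tilde{D}_{ii} = \textstyle\sum_{j} \tilde{A}_{ij}
```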
This structure defines the graph convolution layer (GCN layer); stacking graph convolution layers yields the graph convolutional network, GCN.
2.2 Understanding GCN and Its Time Complexity
The graph convolution layer is a maximal simplification of the frequency response matrix: the graph filter to be trained is fixed to the normalized Laplacian matrix, leaving only the weight matrix to be learned.
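The original does not state the complexity explicitly; for a layer computing L_sym·X·W with a sparse L_sym, the standard operation count is:

```latex
\underbrace{\mathcal{O}(N\, d\, d')}_{XW}
\;+\;
\underbrace{\mathcal{O}(|E|\, d')}_{L_{\mathrm{sym}}(XW)}
```

where N is the number of nodes, |E| the number of edges, and d, d' the input and output feature dimensions.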
2.3 Advantages and disadvantages of GCN
Advantages: GCN, the foundation of graph neural networks in recent years, is highly effective for processing graph data. It learns from the graph structure and the node attribute information simultaneously to obtain the final node feature representations, accounting for the structural correlation between nodes, which is essential in graph tasks.
Disadvantages: the over-smoothing problem. After many stacked layers, the representation vectors of nodes tend to become identical and nodes become hard to distinguish. This is because GCN acts as a low-pass filter (each layer aggregates and fuses neighboring node features), so after repeated iterations the features converge toward one another.
3. PyTorch Code Analysis
A PyTorch implementation of the GCN layer:
import math
import torch
import torch.nn as nn

class GraphConvolutionLayer(nn.Module):
    '''Graph convolution: X' = Lsym * X * W, where Lsym denotes the regularized
    graph Laplacian matrix, X the input features, W the weight matrix, and X'
    the output features; * denotes matrix multiplication.'''
    def __init__(self, input_dim, output_dim, use_bias=True):
        # input_dim --> input dimension, output_dim --> output dimension,
        # use_bias --> whether to use a bias term (boolean)
        super(GraphConvolutionLayer, self).__init__()
        self.input_dim = input_dim
        self.output_dim = output_dim
        self.use_bias = use_bias  # add bias, default is True
        # The weight matrix is a trainable parameter
        self.weight = nn.Parameter(torch.Tensor(input_dim, output_dim))
        if self.use_bias:
            self.bias = nn.Parameter(torch.Tensor(output_dim))
        else:
            # No bias term
            self.register_parameter('bias', None)
        self.reset_parameters()

    def reset_parameters(self):
        # Initialize parameters from the uniform distribution U(-stdv, stdv)
        stdv = 1. / math.sqrt(self.weight.size(1))
        self.weight.data.uniform_(-stdv, stdv)
        if self.bias is not None:
            self.bias.data.uniform_(-stdv, stdv)

    def forward(self, adj, input_feature):
        # adj --> adjacency matrix (here, the regularized Laplacian matrix),
        # input_feature --> input feature matrix
        temp = torch.mm(input_feature, self.weight)  # X * W
        # adj is sparse, so sparse matrix multiplication improves efficiency:
        # Lsym * temp = Lsym * X * W
        output_feature = torch.sparse.mm(adj, temp)
        if self.use_bias:
            output_feature += self.bias
        return output_feature
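As a quick sanity check (my own illustrative usage, not from the original article), the layer can be applied to a toy 3-node graph. A minimal copy of the layer is redefined here so the snippet is self-contained; the normalized-adjacency values are hand-computed approximations for a path graph with self-loops.

```python
import math
import torch
import torch.nn as nn

class GraphConvolutionLayer(nn.Module):
    # Minimal copy of the layer above, for a self-contained demo
    def __init__(self, input_dim, output_dim, use_bias=True):
        super().__init__()
        self.weight = nn.Parameter(torch.Tensor(input_dim, output_dim))
        self.bias = nn.Parameter(torch.Tensor(output_dim)) if use_bias else None
        stdv = 1. / math.sqrt(output_dim)
        self.weight.data.uniform_(-stdv, stdv)
        if self.bias is not None:
            self.bias.data.uniform_(-stdv, stdv)

    def forward(self, adj, x):
        out = torch.sparse.mm(adj, torch.mm(x, self.weight))
        return out + self.bias if self.bias is not None else out

# Toy path graph 0 - 1 - 2 with self-loops, symmetric-normalized, as a
# sparse COO tensor (values are approximate, for illustration only)
indices = torch.tensor([[0, 0, 1, 1, 1, 2, 2],
                        [0, 1, 0, 1, 2, 1, 2]])
values = torch.tensor([0.5, 0.408, 0.408, 1 / 3, 0.408, 0.408, 0.5])
adj = torch.sparse_coo_tensor(indices, values, (3, 3)).coalesce()

layer = GraphConvolutionLayer(input_dim=4, output_dim=2)
x = torch.randn(3, 4)  # 3 nodes, 4 features each
out = layer(adj, x)
print(out.shape)       # torch.Size([3, 2])
```

Each node's output row is a normalized average of its own and its neighbors' transformed features, which is the aggregation behavior discussed in section 2.3.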
Define a two-layer GCN network model:
import torch.nn.functional as F

class GCN(nn.Module):
    '''A two-layer GCN network model.'''
    def __init__(self, input_dim, hidden_dim, output_dim):
        # input_dim --> input dimension, hidden_dim --> hidden layer dimension,
        # output_dim --> output dimension
        super(GCN, self).__init__()
        # Two graph convolution layers
        self.gcn1 = GraphConvolutionLayer(input_dim, hidden_dim)
        self.gcn2 = GraphConvolutionLayer(hidden_dim, output_dim)

    def forward(self, adj, feature):
        # adj --> adjacency matrix, feature --> input features
        x = F.relu(self.gcn1(adj, feature))
        x = self.gcn2(adj, x)
        return F.log_softmax(x, dim=1)

This concludes the study of the GCN graph convolutional neural network in Python machine learning. Theory paired with practice makes the material stick, so try running the code yourself!