
How to use ANU-Net in TensorFlow


This article introduces how to use ANU-Net in TensorFlow. The content is quite detailed; interested readers can use it as a reference, and we hope you find it helpful.

1. Advantages of the ANU-Net network

Research on medical image segmentation generally falls into two categories: (1) manual and semi-automatic segmentation, and (2) fully automatic segmentation. Although many improved variants of FCN and UNet have been proposed, most of these methods split the segmentation task into two stages, localization and segmentation. The extra localization stage adds parameters to the model and costs additional computation time, and the segmentation accuracy then depends heavily on the accuracy of the first-stage localization. To segment results more accurately, the authors propose ANU-Net, a network built on a nested UNet structure with an attention mechanism. The main contributions of the paper are as follows:

(1) ANU-Net is applied to medical image segmentation.

(2) Experiments on public datasets show that the attention mechanism can focus on the target organs across the whole image while suppressing unrelated tissue.

(3) ANU-Net can increase the weight of the target region and suppress background regions that are irrelevant to the segmentation task.

(4) ANU-Net redesigns the nested UNet structure to integrate features from different levels, delivering higher performance than other UNet-based models on a variety of medical image segmentation tasks.

(5) Thanks to the deep supervision mechanism, ANU-Net has a flexible network structure that allows pruning at test time. A pruned ANU-Net therefore has far fewer parameters and runs faster, at the cost of a small drop in performance.

2. ANU-Net structure

2.1 Nested UNet structure

Compared with the original UNet, the nested UNet redesigns the dense skip connections at different depths and adopts nested convolution blocks. Each nested convolution block extracts semantic information through several convolution layers, the convolution layers are linked by dense skip connections, and a concatenation layer fuses semantic information from different levels. The nested structure has the following advantages: (1) it learns by itself how important the features at different depths are, so there is no need for a hand-crafted choice between deep and shallow features; (2) it shares one feature extractor, so only a single encoder needs to be trained instead of a series of UNets; (3) features at different levels are restored by independent decoders, so hierarchical decoded outputs can be obtained from every level.
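As a concrete illustration, here is a minimal sketch of one nested convolution block in TensorFlow/Keras. The layer count and kernel sizes are assumptions for illustration, not taken from the paper's reference implementation.

from tensorflow.keras import layers

def conv_block(x, filters, name):
    # Basic unit of each nested block: two 3x3 conv + BN + ReLU layers.
    # Two layers and 3x3 kernels are assumed, not the paper's exact spec.
    for i in range(2):
        x = layers.Conv2D(filters, 3, padding="same", name=f"{name}_conv{i}")(x)
        x = layers.BatchNormalization(name=f"{name}_bn{i}")(x)
        x = layers.Activation("relu", name=f"{name}_relu{i}")(x)
    return x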

2.2 Attention mechanism

ANU-Net adds attention gates to the nested UNet structure, as shown in the figure below. An attention gate has two inputs: the up-sampled feature g from the deeper layer, which serves as the gating signal, and the corresponding same-depth feature f arriving along the skip connection. The gating signal g is used to enhance the features learned in f; in other words, it selects the more useful features from the encoded features before they are passed on to the decoder. Each input goes through a convolution layer and a BN layer, the two results are added element-wise and passed through a ReLU, then through another convolution layer and BN layer, and finally through a sigmoid to produce the attention coefficients, which are multiplied element-wise with the encoder features to give the final output. The attention gate therefore acts as a good selector: it strengthens the learning of target regions relevant to the segmentation task while suppressing regions that have nothing to do with the task. This work integrates these attention gates into the proposed network's skip connections to propagate semantic information more effectively.
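The gating computation described above maps directly onto a few Keras layers. Below is a hedged sketch: the 1x1 kernel size and the intermediate channel count inter_filters are assumptions; the sequence conv + BN on each input, element-wise addition, ReLU, conv + BN, sigmoid, and a final multiplication follows the description in the text.

from tensorflow.keras import layers

def attention_gate(f, g, inter_filters, name):
    # f: feature map from the skip connection; g: gating signal (up-sampled deeper feature).
    # Both inputs are assumed to already have the same spatial size.
    theta_f = layers.BatchNormalization()(layers.Conv2D(inter_filters, 1)(f))
    phi_g = layers.BatchNormalization()(layers.Conv2D(inter_filters, 1)(g))
    act = layers.Activation("relu")(layers.Add()([theta_f, phi_g]))
    psi = layers.BatchNormalization()(layers.Conv2D(1, 1)(act))
    alpha = layers.Activation("sigmoid", name=f"{name}_alpha")(psi)  # attention coefficients in [0, 1]
    return layers.Multiply(name=name)([f, alpha])  # re-weight the skip-connection features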

2.3 Attention nested UNet: ANU-Net

ANU-Net is the network that integrates the attention mechanism with the nested UNet structure. It uses the nested UNet as its basic framework, with encoders and decoders arranged symmetrically on the two sides of the network. The context information extracted by the encoder is propagated to the decoder of the corresponding layer through dense skip connections, so that more effective hierarchical features can be extracted.

With dense skip connections, the input of each convolution block in the decoder contains two kinds of equal-scale feature maps: (1) intermediate feature maps, which are the outputs of the preceding attention gates along the skip connections at the same depth; and (2) the final feature map, which is the output of the deconvolution (up-sampling) of the deeper block. After receiving and concatenating all of these feature maps, the decoder restores the features in a bottom-up manner. All earlier feature maps accumulate at the current block because the dense skip connections let the block make full use of the feature maps produced by the previously nested convolution blocks in the same layer. As shown in the figure below, X^(0,4), for example, is composed of the up-sampled X^(1,3) together with all of the previous attention outputs at depth 0, and so on.
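To make the wiring concrete, the sketch below shows how one decoder node could assemble its inputs: every attention-gated output at the same depth plus the up-sampled feature of the deeper block, concatenated and passed through a convolution block. The transposed-convolution settings are assumptions, and conv_block is the sketch from section 2.1.

from tensorflow.keras import layers

def decoder_block(deeper_feature, same_depth_outputs, filters, name):
    # Up-sample the deeper block's output to this depth's resolution (settings assumed).
    up = layers.Conv2DTranspose(filters, 2, strides=2, padding="same", name=f"{name}_up")(deeper_feature)
    # Dense skips: concatenate all earlier attention-gated maps at this depth with the up-sampled feature.
    x = layers.Concatenate(name=f"{name}_concat")(same_depth_outputs + [up])
    return conv_block(x, filters, name)  # conv_block as sketched in section 2.1

For X^(0,4), same_depth_outputs would hold the attention-gated outputs of X^(0,0) through X^(0,3), and deeper_feature would be X^(1,3).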

The two main innovations of ANU-Net are, first, that the network carries features from the encoder to the decoder through dense skip connections, integrating them into hierarchical representations, and second, that attention gates are added between the nested convolution blocks so that the features extracted at different layers can be merged selectively along the decoder path. Together, these improve the accuracy of ANU-Net.

2.4 Deep supervision

Deep supervision can alleviate the vanishing-gradient problem and accelerate convergence; it also helps the loss function act as a regularizer. To introduce deep supervision, ANU-Net appends a 1x1 convolution layer with a sigmoid activation after each output block of the first (top) layer, and connects these outputs directly to the final output to compute the loss and back-propagate.
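A sketch of these supervision heads, assuming the outputs of the top-layer blocks X^(0,1) through X^(0,4) are collected in a list:

from tensorflow.keras import layers

def supervision_heads(top_level_features):
    # One 1x1 conv + sigmoid per top-layer block; each head emits a full-resolution mask.
    return [
        layers.Conv2D(1, 1, activation="sigmoid", name=f"ds_out_{j}")(x)
        for j, x in enumerate(top_level_features, start=1)
    ]

Because every head already produces a full-resolution prediction, the pruned networks described in section 2.6 can simply keep the first few heads and discard the deeper blocks at test time.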

2.5 Loss function

Thanks to the dense skip connections designed between the nested convolution blocks, ANU-Net obtains full-resolution feature maps at different semantic levels. To make full use of this semantic information, the authors design a hybrid loss function that combines Dice coefficient loss (DICE), focal loss (FOCAL), and binary cross-entropy loss (BCE).

2.6 Model pruning

As shown in the figure below, ANU-Net can be pruned into four networks of different depths: L1, L2, L3, and L4. The grey areas mark the modules and attention gates that are removed at prediction time.
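A hedged sketch of such a hybrid loss is shown below. The equal weighting of the three terms and the focal parameters gamma and alpha are assumptions; the paper only states that the three components are combined.

import tensorflow as tf

def dice_loss(y_true, y_pred, eps=1e-6):
    # 1 - Dice coefficient, computed over the whole batch.
    inter = tf.reduce_sum(y_true * y_pred)
    return 1.0 - (2.0 * inter + eps) / (tf.reduce_sum(y_true) + tf.reduce_sum(y_pred) + eps)

def focal_loss(y_true, y_pred, gamma=2.0, alpha=0.25, eps=1e-7):
    # Standard binary focal loss; gamma and alpha are assumed values.
    y_pred = tf.clip_by_value(y_pred, eps, 1.0 - eps)
    pt = tf.where(tf.equal(y_true, 1.0), y_pred, 1.0 - y_pred)
    w = tf.where(tf.equal(y_true, 1.0), alpha, 1.0 - alpha)
    return -tf.reduce_mean(w * tf.pow(1.0 - pt, gamma) * tf.math.log(pt))

def hybrid_loss(y_true, y_pred):
    # BCE + Dice + focal, equally weighted (the weighting is an assumption).
    bce = tf.reduce_mean(tf.keras.losses.binary_crossentropy(y_true, y_pred))
    return bce + dice_loss(y_true, y_pred) + focal_loss(y_true, y_pred)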

3. Experimental setup and comparison of results

3.1 Data setup and preprocessing

The LiTS and CHAOS datasets are used. All labeled data are split into training data and test data at a ratio of five to one, and all scans are clipped to the Hounsfield unit (HU) range [-200, 200] to remove irrelevant, useless details.

3.2 Evaluation metrics

The Dice similarity coefficient, IoU, precision, and recall are used to evaluate the segmentation results.

3.3 Comparison

The segmentation results of ANU-Net are compared with UNet, R2UNet, UNet++, Attention UNet, and Attention R2UNet.
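Minimal sketches of the HU clipping step and two of the evaluation metrics follow. The [-200, 200] window comes from the text; the rescaling to [0, 1] and the 0.5 binarization threshold are assumptions.

import numpy as np

def preprocess_ct(volume_hu):
    # Clip a CT volume to the [-200, 200] HU window (per the text) and rescale to [0, 1] (assumed).
    v = np.clip(volume_hu, -200.0, 200.0)
    return (v + 200.0) / 400.0

def dice_score(y_true, y_pred, eps=1e-6):
    # Dice similarity coefficient on binarized predictions.
    y_pred = (y_pred > 0.5).astype(np.float32)
    inter = (y_true * y_pred).sum()
    return (2.0 * inter + eps) / (y_true.sum() + y_pred.sum() + eps)

def iou_score(y_true, y_pred, eps=1e-6):
    # Intersection over union on binarized predictions.
    y_pred = (y_pred > 0.5).astype(np.float32)
    inter = (y_true * y_pred).sum()
    union = y_true.sum() + y_pred.sum() - inter
    return (inter + eps) / (union + eps)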

That covers how to use ANU-Net in TensorFlow. We hope the content above is of some help to you and that you were able to learn something from it. If you think the article is good, feel free to share it so more people can see it.
