Many newcomers are not very clear about how to analyze the AdaNet tool. To help solve this problem, the following article explains it in detail; those who need it can learn from it, and hopefully you will gain something.
AdaNet is a lightweight TensorFlow-based framework that automatically learns high-quality models with minimal expert intervention. AdaNet builds on Google's recent work on reinforcement-learning-based and evolution-based AutoML, enabling fast and flexible model building while providing learning guarantees. Moreover, AdaNet is a general framework: it can learn not only neural network architectures but also ensembles of models, yielding even better models.
AdaNet is easy to use and builds high-quality models, saving machine learning practitioners the time normally spent choosing the best neural network architecture. It implements an adaptive method for learning a neural architecture as an ensemble of subnetworks.
AdaNet can add subnetworks of different depths and widths to create a diverse ensemble, and makes it easy to trade off the number of parameters against model performance.
AdaNet adaptively grows an ensemble of neural networks. At each iteration, it measures the ensemble loss of each candidate and selects the best one to move on to the next iteration.
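The grow-and-select procedure described above can be illustrated with the toy Python sketch below. The candidates and the loss function are stand-ins (candidate_ensemble_loss is hypothetical and just returns a random number); the point is only to show the selection loop, not AdaNet's actual implementation.

import random

# Toy sketch of AdaNet's adaptive growth loop: at each iteration every
# candidate is scored by the loss of the ensemble that would include it,
# and only the best candidate is kept. The candidates and the loss are
# placeholders used purely to illustrate the selection logic.
def candidate_ensemble_loss(ensemble, candidate):
    # Stand-in for training the candidate alongside the current ensemble
    # and evaluating the combined ensemble loss.
    return random.random()

ensemble = []
for iteration in range(3):
    candidates = ["subnetwork_%d_%d" % (iteration, i) for i in range(4)]
    best = min(candidates, key=lambda c: candidate_ensemble_loss(ensemble, c))
    ensemble.append(best)
    print("iteration %d: added %s" % (iteration, best))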
Fast and easy to use
AdaNet implements the TensorFlow Estimator interface, which greatly simplifies machine learning programming by encapsulating training, evaluation, prediction, and model export. It integrates open-source tools such as TensorFlow Hub modules, TensorFlow Model Analysis, and Google Cloud's Hyperparameter Tuner. Support for distributed training significantly reduces training time, and training scales roughly linearly with the available CPUs and accelerators such as GPUs.
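Because adanet.Estimator follows the standard tf.estimator workflow, training and evaluation look the same as for any other Estimator. The minimal sketch below assumes TensorFlow 1.x; my_subnetwork_generator, train_input_fn, and eval_input_fn are placeholders assumed to be defined elsewhere, not part of the AdaNet API.

import adanet
import tensorflow as tf

# Minimal sketch of driving AdaNet through the tf.estimator interface
# (TensorFlow 1.x). `my_subnetwork_generator`, `train_input_fn`, and
# `eval_input_fn` are placeholders assumed to be defined elsewhere.
estimator = adanet.Estimator(
    # The head defines the loss and prediction type, here plain regression.
    head=tf.contrib.estimator.regression_head(),
    # Generator that proposes candidate subnetworks at each iteration.
    subnetwork_generator=my_subnetwork_generator,
    # Number of training steps per AdaNet iteration.
    max_iteration_steps=1000,
    config=tf.estimator.RunConfig(model_dir="/tmp/adanet_model"))

# Training, evaluation, and prediction use the usual Estimator methods.
estimator.train(input_fn=train_input_fn, max_steps=5000)
metrics = estimator.evaluate(input_fn=eval_input_fn)
print(metrics)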
AdaNet training results on CIFAR-100: the x-axis is the number of training steps and the y-axis is accuracy. The blue line is accuracy on the training set and the red line is performance on the test set. A new subnetwork is added every one million steps, ultimately improving overall performance. The gray and green lines show the accuracy of the ensemble before the new subnetwork is added.
AdaNet Learning Guarantees
There are several challenges in building an ensemble of neural networks: What are the best subnetwork architectures to consider? Should the same architecture be reused, or should model diversity be encouraged? While complex subnetworks with more parameters tend to perform better on the training set, their greater complexity may hurt generalization. These challenges come down to how model performance is evaluated. Performance could be evaluated on a held-out split of the training set, but doing so reduces the number of samples available for training the neural network.
AdaNet's approach (from the ICML 2017 paper "AdaNet: Adaptive Structural Learning of Artificial Neural Networks") is to optimize the trade-off between the ensemble's performance on the training set and its ability to generalize to unseen data. Intuitively, this means a new subnetwork is added only if it improves the ensemble's training loss more than it affects the ensemble's ability to generalize.
This learning guarantee means:
1) The generalization error of the ensemble is bounded by its training error plus a term that grows with the complexity of the model.
2) AdaNet minimizes this bound directly by optimizing this objective.
A practical benefit of optimizing this objective is that it removes the need to set aside part of the training set for evaluating which candidate subnetworks to add to the ensemble, so more training data can be used to train the subnetworks.
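For reference, the objective described in the paper has roughly the following form; this is a sketch of its structure (an empirical surrogate loss for the ensemble plus a complexity-weighted penalty on the subnetwork mixture weights), and the precise notation should be checked against the paper and the tutorial linked below.

F(w) = \frac{1}{m}\sum_{i=1}^{m}\Phi\big(1 - y_i f(x_i)\big) + \sum_{j}\big(\lambda\, r_j + \beta\big)\,\lvert w_j\rvert, \qquad f(x) = \sum_{j} w_j\, h_j(x)

Here \Phi is a convex surrogate loss, the h_j are the subnetworks, the w_j are their mixture weights, r_j is a complexity measure (Rademacher complexity) of the class containing h_j, and \lambda, \beta \ge 0 are hyperparameters. The first term is the ensemble's training error and the second is the complexity term from point 1) above.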
To learn more, see the tutorial on the AdaNet objective:
https://github.com/tensorflow/adanet/tree/v0.1.0/adanet/examples/tutorials/adanet_objective.ipynb
User-defined extensions
AdaNet not only provides common model architectures for researchers to use, but also lets users add networks they define themselves. Users can use adanet.subnetwork.Builder to add subnetwork architectures defined with TensorFlow APIs (such as tf.layers) and thereby define their own AdaNet.
Users who already build models with TensorFlow can easily convert their TensorFlow code into AdaNet subnetworks, use adanet.Estimator to improve model performance, and obtain learning guarantees. AdaNet will explore the search space of candidate subnetworks they define and learn to ensemble those subnetworks.
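A minimal builder sketch is shown below, loosely following the adanet.subnetwork.Builder interface used in the AdaNet tutorials for TensorFlow 1.x. The feature name "x", the layer sizes, and the complexity proxy are illustrative assumptions, and method signatures may differ between AdaNet versions.

import adanet
import tensorflow as tf

class SimpleDNNBuilder(adanet.subnetwork.Builder):
    """Illustrative one-hidden-layer DNN subnetwork builder.

    A sketch following the adanet.subnetwork.Builder interface from the
    AdaNet tutorials (TensorFlow 1.x); exact signatures may vary by version.
    """

    def __init__(self, hidden_units=64, learning_rate=0.001):
        self._hidden_units = hidden_units
        self._learning_rate = learning_rate

    def build_subnetwork(self, features, logits_dimension, training,
                         iteration_step, summary, previous_ensemble=None):
        # Assume a single dense feature named "x" for illustration.
        x = features["x"]
        hidden = tf.layers.dense(x, units=self._hidden_units,
                                 activation=tf.nn.relu)
        logits = tf.layers.dense(hidden, units=logits_dimension)
        # `complexity` is a scalar proxy for the subnetwork's capacity;
        # here simply the square root of the hidden width.
        return adanet.Subnetwork(
            last_layer=hidden,
            logits=logits,
            complexity=tf.constant(self._hidden_units ** 0.5),
            persisted_tensors={})

    def build_subnetwork_train_op(self, subnetwork, loss, var_list, labels,
                                  iteration_step, summary, previous_ensemble):
        # Train only this subnetwork's variables on the given loss.
        optimizer = tf.train.AdamOptimizer(self._learning_rate)
        return optimizer.minimize(loss, var_list=var_list)

    @property
    def name(self):
        return "simple_dnn_{}".format(self._hidden_units)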
In this example, an open-source implementation of the NASNet-A CIFAR architecture was used and converted into subnetworks; after eight AdaNet iterations, it improved on the state-of-the-art results on CIFAR-10. Not only does the new model perform better, it also uses fewer parameters.
Users can also use their own custom loss functions, via tf.contrib.estimator.Heads, as part of the AdaNet objective in order to train regression, classification, and multi-task learning problems.
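For example, swapping in a canned multi-class head makes AdaNet optimize a classification loss without changing the rest of the setup. A small sketch, again assuming TensorFlow 1.x and a placeholder my_subnetwork_generator defined elsewhere:

import adanet
import tensorflow as tf

# Sketch: a canned multi-class head so AdaNet optimizes a classification
# loss; `my_subnetwork_generator` is a placeholder defined elsewhere.
head = tf.contrib.estimator.multi_class_head(n_classes=10)
estimator = adanet.Estimator(
    head=head,
    subnetwork_generator=my_subnetwork_generator,
    max_iteration_steps=1000)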
Users can also define the search space of candidate subnetworks to explore by extending the adanet.subnetwork.Generator class, growing or shrinking the search space depending on the available hardware. The search space can be as simple as replicating the same subnetwork configuration with different random seeds, or as elaborate as training dozens of subnetworks with different combinations of hyperparameters and letting AdaNet select which subnetworks to include in the final ensemble model.
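A minimal generator sketch, reusing the illustrative SimpleDNNBuilder from above and loosely following the adanet.subnetwork.Generator interface shown in the AdaNet tutorials; as before, exact signatures may vary between versions.

import adanet

class SimpleDNNGenerator(adanet.subnetwork.Generator):
    """Illustrative generator proposing two DNN candidates per iteration:
    one at the current width and one twice as wide, so AdaNet can trade
    off parameters against performance. A sketch; signatures may vary."""

    def __init__(self, initial_hidden_units=64):
        self._initial_hidden_units = initial_hidden_units

    def generate_candidates(self, previous_ensemble, iteration_number,
                            previous_ensemble_reports, all_reports):
        # Grow the candidate widths with the iteration number so later
        # iterations can explore larger subnetworks.
        units = self._initial_hidden_units * (2 ** iteration_number)
        return [
            SimpleDNNBuilder(hidden_units=units),
            SimpleDNNBuilder(hidden_units=units * 2),
        ]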
Did reading the above help you? If you would like to learn more about related topics or read more related articles, please follow the industry information channel. Thank you for your support.