This article offers an analysis of TensorFlow's newly released update, TensorFlow 2.4.0-rc4. The content is detailed and examined from a professional point of view. We hope you get something out of it after reading.
TensorFlow recently released its new update, TensorFlow 2.4.0-rc4. TensorFlow Profiler now supports profiling MultiWorkerMirroredStrategy, which is now a stable API, and can trace multiple worker processes using the sampling-mode API. This strategy enables synchronous distributed training across multiple workers, each of which may have multiple GPUs. Other notable improvements include better handling of peer failures and many bug fixes for Keras multi-worker training. A major refactoring of the Keras Functional API internals has been completed, improving the reliability, stability, and performance of functional models. The update also adds support for TensorFloat-32 on Ampere-based GPUs. TensorFloat-32 (TF32) is a math mode for NVIDIA Ampere-based GPUs and is enabled by default.
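As a rough illustration of the sampling-mode workflow, the sketch below starts a profiler server on each worker and then captures a trace of all workers at once; the port, worker addresses, and log directory are placeholders, not values from the release notes.

import tensorflow as tf

# On each worker, start a profiler server so it can be sampled remotely.
tf.profiler.experimental.server.start(6009)

# ... launch the MultiWorkerMirroredStrategy training job as usual ...

# From any machine, capture a profile of all workers at once (sampling mode).
tf.profiler.experimental.client.trace(
    service_addr="grpc://worker-0:6009,worker-1:6009",  # placeholder hosts
    logdir="/tmp/profile-logs",                         # placeholder log directory
    duration_ms=2000)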
Major changes
TF Core:
Because of TensorFloat-32, certain float32 operations, including matmul and convolutions, run with reduced precision on Ampere-based GPUs: the inputs to such operations are rounded from 23 bits of precision to 10 bits. In some cases TensorFloat-32 is also used for complex64 ops. TensorFloat-32 can therefore now be disabled when full precision is required.
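If full float32 precision is needed, TF32 can be turned off through tf.config; a minimal sketch:

import tensorflow as tf

# TensorFloat-32 is enabled by default on Ampere GPUs in TF 2.4.
# Disable it to force full float32 precision for matmuls and convolutions.
tf.config.experimental.enable_tensor_float_32_execution(False)

# Check the current setting.
print(tf.config.experimental.tensor_float_32_execution_enabled())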
A number of obsolete API functions have been removed, such as the C-API functions for string access and modification. Modules that are not part of the TensorFlow public API are now hidden.
tf.keras:
The steps_per_execution argument of compile() is now stable. It runs multiple batches inside a single tf.function call, which improves performance on TPUs or for small models with a large Python overhead. The internal structure of the Keras Functional API has been significantly refactored; this refactoring may affect code that depends on internal details.
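A minimal sketch of the stable argument (the model and the value 50 are illustrative, not from the release notes):

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(1),
])

# steps_per_execution makes each tf.function call run 50 batches at a time,
# reducing per-step Python overhead (useful on TPUs or for small models).
model.compile(optimizer="adam", loss="mse", steps_per_execution=50)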
tf.data:
tf.data.experimental.service.DispatchServer and tf.data.experimental.service.WorkerServer now take configuration tuples instead of individual parameters: tf.data.experimental.service.DispatchServer(dispatcher_config) and tf.data.experimental.service.WorkerServer(worker_config), respectively. This makes it easier to pass several parameters at once.
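A small sketch of the new calling convention, assuming the DispatcherConfig and WorkerConfig helpers and an in-process setup (the port is a placeholder):

import tensorflow as tf

# Dispatcher and worker are configured through config objects rather than
# individual keyword arguments.
dispatcher_config = tf.data.experimental.service.DispatcherConfig(port=5000)
dispatcher = tf.data.experimental.service.DispatchServer(dispatcher_config)

worker_config = tf.data.experimental.service.WorkerConfig(
    dispatcher_address=dispatcher.target.split("://")[1])
worker = tf.data.experimental.service.WorkerServer(worker_config)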
tf.distribute:
In the latest update, various built-in APIs have been renamed along with the new features; notably, MultiWorkerMirroredStrategy is now exposed as a stable API (see the sketch below).
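A minimal sketch of using the stable strategy with Keras (the model is illustrative, and a real multi-worker run also needs a TF_CONFIG cluster configuration):

import tensorflow as tf

# MultiWorkerMirroredStrategy is now available outside the experimental
# namespace as tf.distribute.MultiWorkerMirroredStrategy.
strategy = tf.distribute.MultiWorkerMirroredStrategy()

with strategy.scope():
    # Variables created in this scope are mirrored across all workers.
    model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(10,))])
    model.compile(optimizer="sgd", loss="mse")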
Bug fixes and other changes
Calling ops with Python constants or NumPy values is now consistent with tf.convert_to_tensor behavior. This avoids operations such as tf.reshape truncating inputs from int64 to int32.
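A small illustrative example of the behavior this change targets (the shapes are arbitrary):

import numpy as np
import tensorflow as tf

x = tf.zeros([4, 6])

# Passing a NumPy int64 array now follows tf.convert_to_tensor semantics,
# so the shape values are not silently truncated to int32.
new_shape = np.array([2, 12], dtype=np.int64)
y = tf.reshape(x, new_shape)
print(y.shape)  # (2, 12)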
Added support for fault tolerance in the tf.data service dispatcher (scheduler).
Added support for sharing dataset graphs through a shared file system rather than over RPC. This reduces the load on the dispatcher, improving the performance of distributed datasets. A configuration sketch follows below.
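A sketch of a dispatcher configured for these two features, assuming the work_dir and fault_tolerant_mode fields of DispatcherConfig (the port and path are placeholders):

import tensorflow as tf

# fault_tolerant_mode requires a work_dir on storage that survives dispatcher
# restarts; the same directory lets workers read dataset graphs from the file
# system instead of fetching them over RPC.
dispatcher_config = tf.data.experimental.service.DispatcherConfig(
    port=5000,
    work_dir="/shared/tf_data_service",   # placeholder shared path
    fault_tolerant_mode=True)
dispatcher = tf.data.experimental.service.DispatchServer(dispatcher_config)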
Improvements from the Functional API refactoring:
Building functional models no longer requires maintaining a global workspace graph, which removes memory leaks, especially when building many models or very large models.
Functional model construction should be about 8-10% faster on average.
Functional models can now contain non-symbolic values in the call inputs of their first positional argument.
Several TF ops that could not be reliably converted to Keras layers during Functional API construction, such as tf.image.ssim_multiscale, should now work (see the sketch after this list).
Error messages when Functional API construction goes wrong (and when ops cannot be converted to Keras layers automatically) should be clearer and easier to understand.
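As a rough illustration of such an op inside a functional model (the input shape and max_val are arbitrary assumptions, not values from the release notes):

import tensorflow as tf

# tf.image.ssim_multiscale applied to symbolic Keras inputs is wrapped into a
# layer automatically during Functional API construction.
img_a = tf.keras.Input(shape=(256, 256, 3))
img_b = tf.keras.Input(shape=(256, 256, 3))
score = tf.image.ssim_multiscale(img_a, img_b, max_val=1.0)
model = tf.keras.Model(inputs=[img_a, img_b], outputs=score)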
Overall, the new functionality in TensorFlow 2.4.0-rc4 is worthwhile: it adds elements that improve performance and removes elements that are no longer relevant. The improvements introduced should help in developing more reliable and better-performing ML models.
The above is the analysis of TensorFlow's newly updated TensorFlow 2.4.0-rc4. If you have similar questions, you can refer to the analysis above. If you want to learn more, you are welcome to follow the industry information channel.