What is the difference between the tf.name_scope and tf.variable_scope functions of Python?

2025-03-26 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)05/31 Report--

This article focuses on the difference between Python's tf.name_scope and tf.variable_scope functions. The method introduced here is simple, fast, and practical; interested readers may wish to follow along.

The difference between the two

tf.name_scope() and tf.variable_scope() are two scoping mechanisms, typically used together with the two functions that create or retrieve variables: tf.Variable() and tf.get_variable().

Why use two different scopes? The main reason is related to variable sharing.
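To make the contrast concrete, here is a minimal sketch written against the TF 1.x API (the compat shim for TF 2.x is my own addition, not part of the original article): tf.name_scope prefixes the names of ops and of variables created with tf.Variable, but is ignored by tf.get_variable, while tf.variable_scope prefixes both.

```python
import tensorflow as tf
if not hasattr(tf, "get_variable"):   # running under TF 2.x: fall back to the v1 API
    tf = tf.compat.v1
    tf.disable_eager_execution()

with tf.name_scope("ns"):
    a = tf.Variable([1.0], name="a")  # name_scope prefixes tf.Variable...
    b = tf.get_variable("b", [1])     # ...but is ignored by tf.get_variable

with tf.variable_scope("vs"):
    c = tf.Variable([1.0], name="c")  # variable_scope prefixes both
    d = tf.get_variable("d", [1])

print(a.name)  # ns/a:0
print(b.name)  # b:0
print(c.name)  # vs/c:0
print(d.name)  # vs/d:0
```

This is why variable sharing is built on tf.variable_scope: only it influences the names that tf.get_variable looks up.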

Variable sharing mainly involves two functions: tf.Variable () and tf.get_variable ()

The tf.get_variable() function needs to be used inside a tf.variable_scope, because tf.get_variable() has a checking mechanism: it detects whether a variable with the requested name already exists. If it exists and the scope is marked for sharing, the existing variable is returned without error; if it exists but sharing is not enabled, an error is raised.

tf.Variable() creates a new variable every time it is called. But most of the time we want to reuse some variables, so we use tf.get_variable(), which looks a variable up by name and returns the existing one instead of creating a new one.
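A short sketch of that behavior (TF 1.x API; the compat shim for TF 2.x is my own addition): calling tf.Variable twice with the same name silently creates two distinct variables, and the graph uniquifies the second name rather than reusing the first variable.

```python
import tensorflow as tf
if not hasattr(tf, "get_variable"):   # running under TF 2.x: fall back to the v1 API
    tf = tf.compat.v1
    tf.disable_eager_execution()

v_a = tf.Variable([1.0], name="v")
v_b = tf.Variable([1.0], name="v")  # same requested name, but a brand-new variable

print(v_a.name)    # v:0
print(v_b.name)    # v_1:0  (the duplicate name was uniquified)
print(v_a is v_b)  # False
```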

Sharing is controlled by the reuse flag: variables can be shared when reuse=True, but not when it is False.
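Besides passing reuse=True when re-entering a scope, TF 1.x also lets you flip the flag mid-scope with scope.reuse_variables(); a minimal sketch (the compat shim for TF 2.x is my own addition):

```python
import tensorflow as tf
if not hasattr(tf, "get_variable"):   # running under TF 2.x: fall back to the v1 API
    tf = tf.compat.v1
    tf.disable_eager_execution()

with tf.variable_scope("layer") as scope:
    w = tf.get_variable("w", [2, 2])        # created: reuse is still off
    scope.reuse_variables()                 # switch this scope into reuse mode
    w_again = tf.get_variable("w", [2, 2])  # returns the existing variable

print(w is w_again)  # True
```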

The tf.variable_scope function

tf.variable_scope(name_or_scope, default_name=None, values=None, initializer=None, regularizer=None, caching_device=None, partitioner=None, custom_getter=None, reuse=None, dtype=None, use_resource=None, constraint=None, auxiliary_name_scope=True)

Where:

1. name_or_scope: the name of the scope.

2. default_name: used (and made unique) if name_or_scope is None; ignored if name_or_scope is provided, so it is optional and may be None.

3. values: a list of Tensor arguments passed to the op function.

4. initializer: the default initializer for variables in this scope.

5. regularizer: the default regularizer for variables in this scope.

6. caching_device: the default caching device for variables in this scope.

7. partitioner: the default partitioner for variables in this scope.

8. custom_getter: the default custom getter for variables in this scope.

9. reuse: can be True, None, or tf.AUTO_REUSE. If True, we enter the scope in reuse mode and share existing variables; if tf.AUTO_REUSE, variables are created if they do not exist and returned otherwise (useful for the first, creating call); if None, the reuse flag of the parent scope is inherited.

10. dtype: the default type of variables created in this scope.

Test code

1. Sharing variables with reuse=True

```python
import tensorflow as tf

# initialize v1 in the first scope
with tf.variable_scope("scope1"):
    v1 = tf.get_variable("v1", [3, 3], tf.float32, initializer=tf.constant_initializer(1))
    print(v1.name)

# a different scope
with tf.variable_scope("scope2"):
    v1 = tf.get_variable("v1", [3, 3], tf.float32, initializer=tf.constant_initializer(1))
    print(v1.name)

# start sharing
with tf.variable_scope("scope1", reuse=True):
    v1_share = tf.get_variable("v1", [3, 3], tf.float32, initializer=tf.constant_initializer(1))
    print(v1_share.name)
```

The running result is:

scope1/v1:0

scope2/v1:0

scope1/v1:0

If we append the following at the end:

```python
with tf.variable_scope("scope2"):
    v1_share = tf.get_variable("v1", [3, 3], tf.float32, initializer=tf.constant_initializer(1))
    print(v1_share.name)
```

Here reuse is not set, so the variable cannot be shared and the program raises an error.
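That failure can be reproduced in isolation; a sketch (TF 1.x API; the compat shim for TF 2.x is my own addition) where re-entering the scope without reuse makes tf.get_variable raise a ValueError:

```python
import tensorflow as tf
if not hasattr(tf, "get_variable"):   # running under TF 2.x: fall back to the v1 API
    tf = tf.compat.v1
    tf.disable_eager_execution()

with tf.variable_scope("scope2"):
    tf.get_variable("v1", [3, 3])     # first call creates scope2/v1

caught = None
try:
    with tf.variable_scope("scope2"):  # reuse not set
        tf.get_variable("v1", [3, 3])  # same name again -> error
except ValueError as err:
    caught = err
    print("ValueError raised")
```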

2. Sharing variables with tf.AUTO_REUSE

```python
import tensorflow as tf

# AUTO_REUSE creates the variable on the first call (when it does not yet exist)
# and reuses it afterwards; with reuse=True, the first (creating) call would fail.
def demo():
    with tf.variable_scope("demo", reuse=tf.AUTO_REUSE):
        v = tf.get_variable("v", [1])
    return v

v1 = demo()
v2 = demo()
print(v1.name)
print(v2.name)
```

The running result is:

demo/v:0

demo/v:0

At this point, I believe you have a deeper understanding of the difference between Python's tf.name_scope and tf.variable_scope functions; you might as well try it in practice. Follow us to continue learning!
