2025-02-24 Update From: SLTechnology News&Howtos > Development
Shulou (Shulou.com) 06/03 Report --
This article explains how to use a TensorFlow neural network in Python to fit linear and nonlinear functions. The method is simple, fast, and practical; let's walk through it step by step.
Catalogue
I. Fitting a linear function
- Generate random coordinates
- Neural network fitting
- Code
II. Fitting a nonlinear function
- Generate quadratic random points
- Neural network fitting
- Code
I. Fitting a linear function
With a learning rate of 0.03, trained for 1000 iterations.
With a learning rate of 0.05, trained for 1000 iterations.
With a learning rate of 0.1, trained for 1000 iterations.
Comparing the results, the training effect is best when the learning rate is 0.05.
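The effect of the learning rate can be sketched without Keras by running plain-NumPy batch gradient descent on the same y = 0.2x + 0.3 data (a minimal illustration, not the article's code; the exact ranking depends on the data, the noise, and the number of steps):

```python
import numpy as np

def fit_linear(lr, steps=1000, seed=0):
    """Fit y = k*x + b by batch gradient descent on MSE; return the final loss."""
    rng = np.random.default_rng(seed)
    x = rng.random(100)
    y = 0.2 * x + 0.3 + rng.normal(0, 0.01, x.shape)
    k, b = 0.0, 0.0
    for _ in range(steps):
        err = k * x + b - y
        # Gradients of mean squared error with respect to k and b
        k -= lr * 2 * np.mean(err * x)
        b -= lr * 2 * np.mean(err)
    return np.mean((k * x + b - y) ** 2)

for lr in (0.03, 0.05, 0.1):
    print(lr, fit_linear(lr))
```

With the same data, a larger (still stable) learning rate converges closer to the noise floor in 1000 steps, which is why 0.03 lags behind here.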
Generate random coordinates
1. Generate x coordinates
2. Generate random interference
3. Calculate the y coordinate
4. Draw dots
# Generate random points
def Produce_Random_Data():
    global x_data, y_data
    # Generate x coordinates
    x_data = np.random.rand(100)
    # Generate random noise
    noise = np.random.normal(0, 0.01, x_data.shape)  # mean, standard deviation, output shape
    # Calculate y coordinates
    y_data = 0.2 * x_data + 0.3 + noise
    # Plot the points
    plt.scatter(x_data, y_data)

Neural network fitting
1. Create a neural network
2. Set optimizer and loss function
3. Training (according to available data)
4. Prediction (given x coordinates, predict y coordinates)
# Create neural network (training and prediction)
def Neural_Network():
    # 1. Create the neural network
    model = tf.keras.Sequential()
    # Add a layer to the network
    model.add(tf.keras.layers.Dense(units=1, input_dim=1))  # units: number of neurons; input_dim: input dimension
    # 2. Set optimizer and loss function
    model.compile(optimizer=SGD(0.05), loss='mse')  # learning rate 0.05
    # SGD: stochastic gradient descent
    # mse: mean squared error
    # 3. Training
    for i in range(1000):
        # Train on the data once and return the loss
        loss = model.train_on_batch(x_data, y_data)
        # print(loss)
    # 4. Prediction
    y_pred = model.predict(x_data)
    # 5. Display the prediction result (fitted line)
    plt.plot(x_data, y_pred, 'r-', lw=3)  # lw: line width

Code

# Fit a linear function
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.keras.optimizers import SGD

# Generate random points
def Produce_Random_Data():
    global x_data, y_data
    x_data = np.random.rand(100)                     # generate x coordinates
    noise = np.random.normal(0, 0.01, x_data.shape)  # random noise: mean, std, shape
    y_data = 0.2 * x_data + 0.3 + noise              # calculate y coordinates
    plt.scatter(x_data, y_data)                      # plot the points

# Create neural network (training and prediction)
def Neural_Network():
    model = tf.keras.Sequential()
    model.add(tf.keras.layers.Dense(units=1, input_dim=1))
    model.compile(optimizer=SGD(0.05), loss='mse')
    for i in range(1000):
        loss = model.train_on_batch(x_data, y_data)
    y_pred = model.predict(x_data)
    plt.plot(x_data, y_pred, 'r-', lw=3)

# 1. Generate random points
Produce_Random_Data()
# 2. Neural network training and prediction
Neural_Network()
plt.show()

II. Fitting a nonlinear function
With 10 neurons in the first layer:
With 5 neurons in the first layer:
The training effect with 5 neurons in the first layer looks better than with 10.
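One rough way to compare the two architectures is by parameter count: fewer hidden neurons means fewer trainable weights, which can reduce overfitting on 200 noisy points. A small sketch (plain Python, counting the weights and biases of a stack of fully connected layers):

```python
def dense_params(layer_sizes):
    """Trainable parameters (weights + biases) of a stack of Dense layers."""
    return sum(n_in * n_out + n_out
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

# 1 input -> 5 hidden -> 1 output, versus 1 -> 10 -> 1
print(dense_params([1, 5, 1]))   # (1*5 + 5) + (5*1 + 1) = 16
print(dense_params([1, 10, 1]))  # (1*10 + 10) + (10*1 + 1) = 31
```

Parameter count is only a crude proxy for model capacity, but it matches the observation that the smaller network is easier to train well on this problem.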
Generate quadratic random points
Steps:
1. Generate x coordinates
2. Generate random interference
3. Calculate y coordinates
4. Draw scatter plot
# Generate random points
def Produce_Random_Data():
    global x_data, y_data
    # Generate x coordinates
    x_data = np.linspace(-0.5, 0.5, 200)[:, np.newaxis]  # add a dimension
    # Generate noise
    noise = np.random.normal(0, 0.02, x_data.shape)  # mean, standard deviation, output shape
    # Calculate y coordinates
    y_data = np.square(x_data) + noise
    # Scatter plot
    plt.scatter(x_data, y_data)

Neural network fitting
Steps:
1. Create a neural network
2. Set optimizer and loss function
3. Training (according to available data)
4. Prediction (given x coordinates, predict y coordinates)
5. Drawing
# Neural network fitting (training and prediction)
def Neural_Network():
    # 1. Create the neural network
    model = tf.keras.Sequential()
    # Add layers
    # Note: input_dim (number of input neurons) only needs to be set on the input layer;
    # subsequent layers infer their input size automatically
    model.add(tf.keras.layers.Dense(units=5, input_dim=1, activation='tanh'))  # hidden neurons, input size, activation
    model.add(tf.keras.layers.Dense(units=1, activation='tanh'))  # number of output neurons
    # 2. Set optimizer and loss function
    model.compile(optimizer=SGD(0.3), loss='mse')  # learning rate 0.3, mean squared error
    # 3. Training
    for i in range(3000):
        # Train on the data once and return the loss
        loss = model.train_on_batch(x_data, y_data)
    # 4. Prediction
    y_pred = model.predict(x_data)
    # 5. Plot
    plt.plot(x_data, y_pred, 'r-', lw=5)  # lw: line width

Code

# Fit a nonlinear function
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.keras.optimizers import SGD

# Generate random points
def Produce_Random_Data():
    global x_data, y_data
    x_data = np.linspace(-0.5, 0.5, 200)[:, np.newaxis]  # add a dimension
    noise = np.random.normal(0, 0.02, x_data.shape)      # mean, std, shape
    y_data = np.square(x_data) + noise                   # calculate y coordinates
    plt.scatter(x_data, y_data)                          # scatter plot

# Neural network fitting (training and prediction)
def Neural_Network():
    model = tf.keras.Sequential()
    model.add(tf.keras.layers.Dense(units=5, input_dim=1, activation='tanh'))
    model.add(tf.keras.layers.Dense(units=1, activation='tanh'))
    model.compile(optimizer=SGD(0.3), loss='mse')
    for i in range(3000):
        loss = model.train_on_batch(x_data, y_data)
    y_pred = model.predict(x_data)
    plt.plot(x_data, y_pred, 'r-', lw=5)

# 1. Generate random points
Produce_Random_Data()
# 2. Neural network training and prediction
Neural_Network()
plt.show()

At this point, I believe you have a deeper understanding of how to use a TensorFlow neural network in Python to fit linear and nonlinear functions. Try it out in practice yourself, and follow us to keep learning!