How to Implement a Multiple Linear Regression Classifier in TensorFlow

This article shows how to implement a multiple linear regression classifier (in effect, a softmax classifier) for the iris dataset in TensorFlow 1.x. The walkthrough is concise and easy to follow, and each step of the code is explained as it appears.
# -*- coding: utf-8 -*-
# Import the required modules.
import tensorflow as tf
import numpy as np
import pandas as pd
from sklearn import datasets
import os
# This function loads the iris data via sklearn and saves it locally for later use.
def make_iris():
    iris = datasets.load_iris()
    x = pd.DataFrame(iris.data)
    y = pd.DataFrame(iris.target).values
    # One-hot encode the integer labels (0/1/2) into three columns.
    y_onehot = tf.one_hot(y, 3)
    sess = tf.InteractiveSession()
    # tf.one_hot on a (150, 1) array yields shape (150, 1, 3); flatten it.
    y_onehot_value = sess.run(y_onehot).reshape(150, 3)
    sess.close()
    y_onehot_value = pd.DataFrame(y_onehot_value)
    x.to_csv("iris_x.csv", sep=',', header=None, index=None)
    y_onehot_value.to_csv("iris_y.csv", sep=',', header=None, index=None)
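After make_iris() has been run once, the saved files can be sanity-checked like this (a small sketch of my own, not part of the original article):

print(pd.read_csv("iris_x.csv", header=None).shape)  # (150, 4)
print(pd.read_csv("iris_y.csv", header=None).shape)  # (150, 3)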
To define the model, we first need to be clear about what in_size and out_size stand for. The iris dataset has four independent variables (features) and one dependent variable (the label), but after one-hot encoding the label becomes three-dimensional. So in_size is the dimension of the training data, i.e. the number of features (4), and out_size is the dimension of the output, i.e. the dimension of the encoded label (3).
In general, a multivariate linear model can be written in matrix form as Y = XW + b, where X is 150x4, W is 4x3, and b is 1x3 (broadcast over the 150 rows), so Y = (150x4)(4x3) + b has shape 150x3. The final output of the model goes through the softmax multi-class function, so each row of Y becomes the probabilities of that sample belonging to each of the three classes.
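As a quick check of that shape algebra, here is a minimal numpy sketch (my own addition, reusing the np imported above; it is not part of the original article):

x = np.random.rand(150, 4)   # 150 samples, 4 features
w = np.random.rand(4, 3)     # in_size x out_size
b = np.zeros((1, 3))         # biases, broadcast over all 150 rows
y = x.dot(w) + b
print(y.shape)               # (150, 3): one score per class per sample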
def model(inputs, in_size, out_size):
    weights = tf.Variable(tf.random_normal([in_size, out_size]))
    biases = tf.Variable(tf.zeros([1, out_size]))
    outputs = tf.nn.softmax(tf.matmul(inputs, weights) + biases)
    return outputs
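Because the outputs pass through tf.nn.softmax, every row of the result sums to 1. A tiny standalone check (my own example, using the TF 1.x API as in the rest of the article):

logits = tf.constant([[2.0, 1.0, 0.1]])
with tf.Session() as s:
    probs = s.run(tf.nn.softmax(logits))
print(probs)        # roughly [[0.659, 0.242, 0.099]]
print(probs.sum())  # 1.0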
# Define the model training function.
def train():
    # First read back the data saved by make_iris(). pandas returns a
    # DataFrame object, which is converted to numpy.ndarray via .values.
    x_data = pd.read_csv("iris_x.csv", header=None).values
    y_data = pd.read_csv("iris_y.csv", header=None).values
    # Next, split the data into a training set and a test set. Note that
    # load_iris() returns the samples ordered by class, so this contiguous
    # split leaves only a single class in the test set; shuffling first
    # would give a fairer evaluation.
    train_x = x_data[0:120, :]
    train_y = y_data[0:120, :]
    test_x = x_data[120:150, :]
    test_y = y_data[120:150, :]
    print(train_x.shape)  # (120, 4)
    print(test_x.shape)   # (30, 4)
    print(train_y.shape)  # (120, 3)
    print(test_y.shape)   # (30, 3)
    # Define placeholders. They are not strictly required, but without them
    # we could not feed the data explicitly via feed_dict when running the
    # optimization target. Keep in mind what each placeholder dimension
    # represents, and do not confuse them.
    x_data_holder = tf.placeholder(tf.float32, [None, 4])
    y_data_holder = tf.placeholder(tf.float32, [None, 3])
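    # Illustration (my own, not from the original article): a placeholder
    # only receives a value at run time through feed_dict, e.g.
    #   p = tf.placeholder(tf.float32, [None, 2])
    #   sess.run(tf.reduce_sum(p), feed_dict={p: [[1.0, 2.0]]})  # -> 3.0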
    # Call the model to get the predicted class probabilities.
    y_prediction = model(x_data_holder, 4, 3)
    # Define the cross-entropy loss function.
    cross_entropy = tf.reduce_mean(
        -tf.reduce_sum(y_data_holder * tf.log(y_prediction),
                       reduction_indices=[1]))
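    # Worked example (my addition): for a one-hot label [0, 1, 0] and a
    # predicted distribution [0.2, 0.7, 0.1], the per-sample loss is
    # -sum(y * log(p)) = -log(0.7), roughly 0.357; cross_entropy then
    # averages this quantity over all samples in the batch.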
    # Use gradient descent to minimize the loss function.
    train_step = tf.train.GradientDescentOptimizer(0.1) \
        .minimize(cross_entropy)
    # Start the session.
    with tf.Session() as sess:
        init = tf.global_variables_initializer()
        sess.run(init)
        epoch = 2000
        for e in range(epoch):
            sess.run(train_step,
                     feed_dict={x_data_holder: train_x,
                                y_data_holder: train_y})
            # Report the loss every 50 steps. Note that this is the loss on
            # the training data at the current step, not the final loss over
            # all of the data.
            if e % 50 == 0:
                train_loss = sess.run(cross_entropy,
                                      feed_dict={x_data_holder: train_x,
                                                 y_data_holder: train_y})
                y_pre = sess.run(y_prediction,
                                 feed_dict={x_data_holder: test_x})
                correct_prediction = tf.equal(tf.argmax(y_pre, 1),
                                              tf.argmax(test_y, 1))
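                # For instance (my own illustration): tf.argmax on a y_pre
                # row of [0.1, 0.7, 0.2] gives class 1, and comparing it
                # with the argmax of the one-hot test label marks that
                # prediction as correct or not.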
                # eval() turns a tensor into a concrete value without an
                # explicit sess.run call.
                # print(correct_prediction.eval(session=sess))
                accuracy = tf.reduce_mean(tf.cast(correct_prediction,
                                                  tf.float32))
                # Finally, compute the prediction accuracy on the test data.
                test_acc = sess.run(accuracy,
                                    feed_dict={x_data_holder: test_x,
                                               y_data_holder: test_y})
                print("acc: {}; loss: {}".format(test_acc, train_loss))
        # To get the loss over all of the training data, run the loss once
        # more at the end.
        training_cost = sess.run(cross_entropy,
                                 feed_dict={x_data_holder: train_x,
                                            y_data_holder: train_y})
        print("Training cost = {}".format(training_cost))
if __name__ == "__main__":
    make_iris()  # generate iris_x.csv / iris_y.csv before training
    train()

The above is how to implement a multiple linear regression classifier in TensorFlow. I hope the detailed walkthrough has given you something you can use.