

How to Implement Cat and Dog Recognition with TensorFlow in Python


This article mainly introduces how to implement cat and dog recognition with TensorFlow in Python. It has some reference value, and interested readers can use it as a guide; I hope you learn a lot from it. Now let the editor walk you through it.

I. Dataset size

train: cats 1000, dogs 1000
test: cats 500, dogs 500
validation: cats 500, dogs 500

II. Data import

import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_dir = 'Data/train'
test_dir = 'Data/test'
validation_dir = 'Data/validation'

train_datagen = ImageDataGenerator(
    rescale=1/255,
    rotation_range=10,        # rotation range of the picture, in degrees
    width_shift_range=0.2,    # horizontal offset range
    height_shift_range=0.2,   # vertical offset range
    shear_range=0.2,          # shear strength
    zoom_range=0.2,           # random zoom amplitude
    horizontal_flip=True,     # whether to perform random horizontal flipping
    fill_mode='nearest')

train_generator = train_datagen.flow_from_directory(
    train_dir, target_size=(224, 224), batch_size=1, class_mode='binary', shuffle=False)

test_datagen = ImageDataGenerator(rescale=1/255)
test_generator = test_datagen.flow_from_directory(
    test_dir, target_size=(224, 224), batch_size=1, class_mode='binary', shuffle=True)

validation_datagen = ImageDataGenerator(rescale=1/255)
validation_generator = validation_datagen.flow_from_directory(
    validation_dir, target_size=(224, 224), batch_size=1, class_mode='binary')

print(train_datagen)
print(test_datagen)
print(validation_datagen)

III. Dataset construction
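Since class_mode='binary' produces 0/1 labels, it is worth checking which folder received which label. Keras assigns class indices in alphabetical order of the subfolder names, so cats should map to 0 and dogs to 1; a quick check using the generator defined above:

print(train_generator.class_indices)  # expected: {'cats': 0, 'dogs': 1}
print(train_generator.samples)        # total number of training images found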

Here I pull the data out of the ImageDataGenerator iterators, store the images and labels in two lists, and then convert them to np.array; alternatively, you can train straight from the generators with model.fit_generator (sketched below). I keep all the data in memory so that the model can read it faster when tuning hyperparameters later, instead of re-reading every image from disk on each epoch (at least, that is how I understand it).
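For comparison, here is a minimal sketch of that generator-based alternative (in TF2, model.fit accepts generators directly and fit_generator is deprecated). It assumes the generators above and a compiled model like the one built in section IV; the step counts follow from batch_size=1:

history = model1.fit(
    train_generator,
    steps_per_epoch=2000,              # 2000 training images / batch size of 1
    validation_data=validation_generator,
    validation_steps=1000,             # 1000 validation images / batch size of 1
    epochs=10)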

Note that once the dataset is built, all three splits are held in memory; my computer's 16 GB of RAM is enough to hold them.
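As a back-of-the-envelope check (assuming float32 pixels, which is what ImageDataGenerator yields), the footprint of the three splits can be estimated like this:

# 2000 train + 1000 test + 1000 validation images, each 224 x 224 x 3 float32 (4 bytes)
n_images = 2000 + 1000 + 1000
bytes_total = n_images * 224 * 224 * 3 * 4
print(bytes_total / 1024**3, "GiB")   # roughly 2.2 GiB, comfortably inside 16 GB of RAM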

train_data = []
train_labels = []
a = 0
for data_train, labels_train in train_generator:
    train_data.append(data_train)
    train_labels.append(labels_train)
    a = a + 1
    if a > 1999:
        break
x_train = np.array(train_data)
y_train = np.array(train_labels)
x_train = x_train.reshape(-1, 224, 224, 3)   # collapse the batch-of-1 dimension

test_data = []
test_labels = []
a = 0
for data_test, labels_test in test_generator:
    test_data.append(data_test)
    test_labels.append(labels_test)
    a = a + 1
    if a > 999:
        break
x_test = np.array(test_data)
y_test = np.array(test_labels)
x_test = x_test.reshape(-1, 224, 224, 3)

validation_data = []
validation_labels = []
a = 0
for data_validation, labels_validation in validation_generator:
    validation_data.append(data_validation)
    validation_labels.append(labels_validation)
    a = a + 1
    if a > 999:
        break
x_validation = np.array(validation_data)
y_validation = np.array(validation_labels)
x_validation = x_validation.reshape(-1, 224, 224, 3)

IV. Model building

model1 = tf.keras.models.Sequential([
    # first convolution layer: 16 kernels of 3x3, input is 224 x 224 x 3
    tf.keras.layers.Conv2D(16, (3, 3), activation='relu', padding='same',
                           input_shape=(224, 224, 3)),
    tf.keras.layers.MaxPooling2D((2, 2)),
    # second convolution layer: 32 kernels of 3x3
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu', padding='same'),
    tf.keras.layers.MaxPooling2D((2, 2)),
    # third convolution layer: 64 kernels of 3x3
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu', padding='same'),
    tf.keras.layers.MaxPooling2D((2, 2)),
    # flatten the feature maps
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dropout(0.5),
    # sigmoid output for binary classification (cat vs dog)
    tf.keras.layers.Dense(1, activation='sigmoid')])
print(model1.summary())

Model summary:
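The screenshot of the summary from the original post is not reproduced here, but the shape progression can be worked out by hand from the architecture above, since each 2x2 max-pooling halves the spatial dimensions:

# Expected shape progression (derived from the architecture, not from the screenshot):
# Conv2D(16) -> (224, 224, 16)   MaxPool -> (112, 112, 16)
# Conv2D(32) -> (112, 112, 32)   MaxPool -> (56, 56, 32)
# Conv2D(64) -> (56, 56, 64)     MaxPool -> (28, 28, 64)
# Flatten    -> 28 * 28 * 64 = 50176 features feeding Dense(64)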

V. Model training

model1.compile(optimizer=tf.keras.optimizers.SGD(0.00001),
               loss=tf.keras.losses.binary_crossentropy,
               metrics=['acc'])
history1 = model1.fit(x_train, y_train,
                      # validation_split=0.1,  # takes a fraction of the training data
                      # as a validation set, but is overridden by validation_data
                      validation_data=(x_validation, y_validation),
                      batch_size=10,
                      shuffle=True,
                      epochs=10)
model1.save('cats_and_dogs_plain1.h5')
print(history1)
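For later sessions, the saved file can be loaded back with tf.keras.models.load_model; a minimal sketch, assuming the cats_and_dogs_plain1.h5 file written above:

# Reload the trained model from disk for later evaluation or inference.
restored = tf.keras.models.load_model('cats_and_dogs_plain1.h5')
restored.summary()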

plt.plot(history1.epoch, history1.history.get('acc'), label='acc')
plt.plot(history1.epoch, history1.history.get('val_acc'), label='val_acc')
plt.title('accuracy')
plt.legend()
plt.show()
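The loss curves can be drawn the same way (a small companion sketch; 'loss' and 'val_loss' are the keys Keras records in history1.history alongside the accuracy metrics):

plt.plot(history1.epoch, history1.history.get('loss'), label='loss')
plt.plot(history1.epoch, history1.history.get('val_loss'), label='val_loss')
plt.title('loss')
plt.legend()
plt.show()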

We can see that the model's generalization is still somewhat poor: the training accuracy (acc) climbs above 0.85, while the validation accuracy (val_acc) hovers between 0.65 and 0.70.

VI. Model testing

model1.evaluate(x_validation, y_validation)
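To try the model on a single picture, something like the following sketch works; the image path is a hypothetical placeholder, and the 0.5 threshold follows from the sigmoid output, with cats as class 0 and dogs as class 1:

from tensorflow.keras.preprocessing import image

# Hypothetical example path; replace with a real image file.
img = image.load_img('Data/test/cats/cat.1500.jpg', target_size=(224, 224))
x = image.img_to_array(img) / 255.0          # same rescaling as the generators
x = np.expand_dims(x, axis=0)                # add the batch dimension
prob = model1.predict(x)[0][0]
print('dog' if prob > 0.5 else 'cat', prob)  # sigmoid output: >0.5 means class 1 (dogs)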

In the end, our model's accuracy on the test set is 0.67, which is not good enough; the model is somewhat overfitted. The training data may simply be insufficient, so a next step would be to augment the data or move part of the validation and test sets into the training set.

Thank you for reading this article carefully. I hope the article "How to Implement Cat and Dog Recognition with TensorFlow in Python" shared by the editor is helpful to everyone. At the same time, I also hope you will support us and follow the industry information channel. More related knowledge is waiting for you to learn!
