2025-01-14 Update From: SLTechnology News&Howtos
This article explains the difference between fit() and fit_generator() in Keras and walks through the parameters each one takes.
1. The difference between fit and fit_generator
First of all, the x_train and y_train passed to Keras's fit() function are loaded into memory in full. This is convenient, but if the dataset is large it may be impossible to hold all of it in memory at once, and we will simply run out of RAM. In that case we can train with the fit_generator() function instead.
The following is an example of calling fit():

history = model.fit(x_train, y_train, epochs=10, batch_size=32, validation_split=0.2)
Here you need to give epochs, the number of full passes over the dataset, and batch_size, the number of samples processed in each batch. Finally, validation_split sets the fraction of the training set held out for validation; 0.2 means 20%.
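The arithmetic behind these arguments can be checked without Keras at all. The numbers below are invented for illustration (25,000 happens to be the IMDB training-set size used later in this article):

```python
# Toy illustration of how fit()'s arguments partition the data.
n_samples = 25000          # illustrative dataset size
batch_size = 32
validation_split = 0.2

n_val = int(n_samples * validation_split)   # samples held out for validation
n_train = n_samples - n_val                 # samples actually trained on
batches_per_epoch = n_train // batch_size   # batches the model sees per epoch

print(n_val, n_train, batches_per_epoch)    # 5000 20000 625
```

So with validation_split=0.2, each epoch trains on 20,000 samples in 625 batches of 32, and evaluates on the remaining 5,000.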
fit_generator() must be passed a generator, and the training data is produced by that generator. Here is a simple generator function:
batch_size = 128

def generator():
    while 1:
        row = np.random.randint(0, len(x_train), size=batch_size)
        x = x_train[row]
        y = y_train[row]
        yield x, y
The generator here yields batches of batch_size = 128 samples; it is just a demo. If I did not fix batch_size inside the generator, i.e. it yielded one sample at a time, then the steps_per_epoch argument passed to fit_generator() would have to change accordingly.
This point tripped me up for quite a while, even though it is a minor one.
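The generator pattern above can be demonstrated on its own with small random arrays instead of real training data (the array contents here are invented; no Keras is needed):

```python
import numpy as np

# Standalone version of the batch generator described above.
x_train = np.arange(20, dtype=float).reshape(10, 2)  # 10 samples, 2 features
y_train = np.arange(10, dtype=float)                 # 10 labels
batch_size = 4

def generator():
    while True:  # fit_generator expects the generator to loop forever
        rows = np.random.randint(0, len(x_train), size=batch_size)
        yield x_train[rows], y_train[rows]

g = generator()
x, y = next(g)           # one call produces exactly one batch
print(x.shape, y.shape)  # (4, 2) (4,)
```

Each next() call materializes only one batch, which is exactly why this approach avoids loading the whole dataset into memory.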
Here is how the arguments are passed to fit_generator():

history = model.fit_generator(generator(), epochs=epochs, steps_per_epoch=len(x_train) // batch_size)

2. batch_size and steps_per_epoch
First of all, steps_per_epoch = dataset size // batch_size. If we fix the batch_size inside the generator function, then when calling fit_generator we should pass steps_per_epoch = len(x_train) // batch_size, so that one epoch covers the whole dataset.
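The relationship can be checked with plain integer arithmetic (toy numbers, no Keras needed; 25,000 is the IMDB training-set size used in the demo below):

```python
# With a generator that yields batch_size samples per call, one full pass
# over the data corresponds to len(x_train) // batch_size generator calls.
len_x_train = 25000   # illustrative dataset size
batch_size = 32

steps_per_epoch = len_x_train // batch_size
print(steps_per_epoch)  # 781
```

Note that the floor division drops the last partial batch; a generator that samples randomly, as above, does not care, but a generator that walks the data in order would need to handle the remainder.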
Here is the complete demo code:
from keras.datasets import imdb
from keras.preprocessing.sequence import pad_sequences
from keras.models import Sequential
from keras import layers
import numpy as np

max_features = 10000
maxlen = 500
batch_size = 32

(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
x_train = pad_sequences(x_train, maxlen=maxlen)
x_test = pad_sequences(x_test, maxlen=maxlen)

def generator():
    while 1:
        row = np.random.randint(0, len(x_train), size=batch_size)
        x = x_train[row]
        y = y_train[row]
        yield x, y

model = Sequential()
model.add(layers.Embedding(max_features, 32, input_length=maxlen))
model.add(layers.GRU(64, return_sequences=True))  # return a sequence so the next GRU can consume it
model.add(layers.GRU(32))
model.add(layers.Dense(1, activation='sigmoid'))  # sigmoid output to match binary_crossentropy
model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['acc'])
print(model.summary())

# history = model.fit(x_train, y_train, epochs=1, batch_size=32, validation_split=0.2)
history = model.fit_generator(generator(), epochs=1, steps_per_epoch=len(x_train) // batch_size)
print(model.evaluate(x_test, y_test))
Addendum: model.fit_generator() in detail
Start as follows:

from keras import models
model = models.Sequential()
Using Keras, we build a sequential model (the specific construction steps are omitted). Once the model is built, we need to feed data into it for training. There are several ways to do this, and model.fit_generator() is one of them.
Specifically, model.fit_generator() feeds data to the model in batches produced by a generator, which keeps memory consumption low because only one batch is held in memory at a time.
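The batch-consumption behavior just described can be sketched in pure Python, without Keras. The function name fake_fit_generator is of course invented; this only mimics how fit_generator pulls batches from its generator:

```python
# A pure-Python sketch of fit_generator's consumption pattern: each epoch,
# it pulls steps_per_epoch batches from the generator and trains on each.
def fake_fit_generator(gen, steps_per_epoch, epochs):
    seen = []
    for epoch in range(epochs):
        for step in range(steps_per_epoch):
            batch = next(gen)   # only one batch exists in memory at a time
            seen.append(batch)  # a real fit would run a train step here
    return seen

def counting_gen():
    # Stand-in for a data generator: yields 0, 1, 2, ... forever.
    i = 0
    while True:
        yield i
        i += 1

batches = fake_fit_generator(counting_gen(), steps_per_epoch=5, epochs=2)
print(len(batches))  # 10 = 5 steps * 2 epochs
```

Note the generator is never restarted between epochs; it simply keeps yielding, which is why it must loop forever.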
The function signature is as follows:

fit_generator(self, generator, steps_per_epoch, epochs=1, verbose=1, callbacks=None, validation_data=None, validation_steps=None, class_weight=None, max_q_size=10, workers=1, pickle_safe=False, initial_epoch=0)

Parameter explanations:
generator: generally a generator function that yields batches of data
steps_per_epoch: the number of batches drawn from the generator in each epoch. For example, with steps_per_epoch=100 the generator is called 100 times per epoch.
epochs: the number of passes over the data during training
verbose: the log display mode during training; the default is 1. 1 means "progress bar mode", 2 means "one line per epoch", and 0 means "silent mode".
validation_data and validation_steps specify the validation set; they are used in the same way as generator and steps_per_epoch.
model.fit_generator() returns a History object, whose history attribute records the training loss and metric values for each successive epoch, along with the validation-set loss and metric values. They can be retrieved as follows:
acc = history.history["acc"]
val_acc = history.history["val_acc"]
loss = history.history["loss"]
val_loss = history.history["val_loss"]

This concludes the article on the differences and parameters of fit() and fit_generator() in Keras. To really master these points, you still need to practice with them yourself.