
How to create custom InceptionV3 and CNN architectures for indoor and outdoor fire detection


This article introduces how to create custom InceptionV3 and CNN architectures for indoor and outdoor fire detection. It should be a useful reference for interested readers, and I hope you learn a lot from it.


Recent advances in embedded processing have enabled vision-based systems to detect fire during surveillance using convolutional neural networks (CNNs). In this article, two custom CNN models are implemented as cost-effective fire detection architectures for surveillance video. The first is a customized basic CNN architecture inspired by AlexNet; we will implement it, examine its output and limitations, and then create a custom InceptionV3 model. To balance efficiency and accuracy, the models are fine-tuned with the target problem and the nature of fire data in mind. We will use three different datasets to train our models.

Create a custom CNN architecture

We will use the TensorFlow Keras API to build the model. First, we create an ImageDataGenerator to label the data. Datasets [1] and [2] are used here for training. In total, we have 980 images for training and 239 images for validation. We will also use data augmentation.
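For reference, flow_from_directory, used below, infers the labels from the folder structure: one subfolder per class. A hypothetical layout follows; the actual folder names depend on how you arranged datasets [1] and [2]:

Train/
    Fire/
    Neutral/
Validation/
    Fire/
    Neutral/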

import tensorflow as tf
import keras_preprocessing
from keras_preprocessing import image
from keras_preprocessing.image import ImageDataGenerator

TRAINING_DIR = "Train"
training_datagen = ImageDataGenerator(rescale=1./255,
                                      horizontal_flip=True,
                                      rotation_range=30,
                                      height_shift_range=0.2,
                                      fill_mode='nearest')

VALIDATION_DIR = "Validation"
validation_datagen = ImageDataGenerator(rescale=1./255)

train_generator = training_datagen.flow_from_directory(TRAINING_DIR,
                                                       target_size=(224, 224),
                                                       class_mode='categorical',
                                                       batch_size=64)

validation_generator = validation_datagen.flow_from_directory(VALIDATION_DIR,
                                                              target_size=(224, 224),
                                                              class_mode='categorical',
                                                              batch_size=16)

Three data augmentation techniques are applied in the code above: horizontal flipping, rotation, and height shifting. Now we will create our CNN model. The model consists of three pairs of Conv2D-MaxPooling2D layers followed by three dense layers. To counter overfitting, we also add dropout layers. The last layer is a softmax layer, which gives us the probability distribution over fire and non-fire. Alternatively, by changing the number of classes to 1, you can use a 'sigmoid' activation in the last layer.

from tensorflow.keras.optimizers import Adam

model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(96, (11, 11), strides=(4, 4), activation='relu',
                           input_shape=(224, 224, 3)),
    tf.keras.layers.MaxPooling2D(pool_size=(3, 3), strides=(2, 2)),
    tf.keras.layers.Conv2D(256, (5, 5), activation='relu'),
    tf.keras.layers.MaxPooling2D(pool_size=(3, 3), strides=(2, 2)),
    tf.keras.layers.Conv2D(384, (5, 5), activation='relu'),
    tf.keras.layers.MaxPooling2D(pool_size=(3, 3), strides=(2, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(2048, activation='relu'),
    tf.keras.layers.Dropout(0.25),
    tf.keras.layers.Dense(1024, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(2, activation='softmax')
])

model.compile(loss='categorical_crossentropy',
              optimizer=Adam(lr=0.0001),
              metrics=['acc'])

history = model.fit(train_generator,
                    steps_per_epoch=15,
                    epochs=50,
                    validation_data=validation_generator,
                    validation_steps=15)

We use Adam as the optimizer with a learning rate of 0.0001. After training for 50 epochs, we obtain a training accuracy of 96.83% and a validation accuracy of 94.98%. The training and validation losses are 0.09 and 0.13, respectively.
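To see how training and validation accuracy evolve over the 50 epochs, one can plot the history object returned by fit. A minimal sketch, assuming matplotlib is installed:

import matplotlib.pyplot as plt

# The keys match the metric name passed to model.compile above.
acc = history.history['acc']
val_acc = history.history['val_acc']
epochs = range(len(acc))

plt.plot(epochs, acc, label='training accuracy')
plt.plot(epochs, val_acc, label='validation accuracy')
plt.xlabel('epoch')
plt.legend()
plt.show()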

With the model trained, we can test images against it to see whether its guesses are correct. For testing, we selected three images: one with fire, one without fire, and a photo containing fire-like colors and shadows. On that last image the model made a mistake: it was 52% sure that there was a flame in it. This is because the training dataset contains very few images showing the model what an indoor fire looks like. The model only knows outdoor fires, so it errs on a shadowy image of an indoor scene. Another reason is that our model is not complex enough to learn the subtle features of fire.
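For completeness, a single test image can be run through the model along these lines. A minimal sketch; the filename is hypothetical:

import numpy as np
from keras.preprocessing import image

# Load and preprocess one image exactly as the training generator did.
img = image.load_img('indoor_fire.jpg', target_size=(224, 224))  # hypothetical file
img_array = np.expand_dims(image.img_to_array(img), axis=0) / 255.

probabilities = model.predict(img_array)[0]
# Class indices follow the alphabetical folder order used by flow_from_directory.
print(train_generator.class_indices, probabilities)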

Next, we will take the standard InceptionV3 model and customize it. A more complex model can learn more complex features from images.

Create a custom InceptionV3 model

This time we will use a different dataset [3], which contains images of both outdoor and indoor fires. We trained our previous CNN model on this dataset, and the results showed that it overfits, because it cannot handle this relatively large dataset or learn the complex features from the images.

Let's start by creating an ImageDataGenerator for the custom InceptionV3 model. The dataset contains three classes, but for this article we will use only two of them. It provides 1,800 images for training and 200 images for validation. In addition, I added eight living-room images to add some noise to the dataset.
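Since flow_from_directory treats every subfolder as a class, one way to restrict training to two of the dataset's three classes is the generator's classes argument. A minimal sketch, using the training_datagen and TRAINING_DIR defined below; the folder names Fire and Neutral are assumptions about how [3] is organized:

train_generator = training_datagen.flow_from_directory(TRAINING_DIR,
                                                       classes=['Fire', 'Neutral'],  # assumed folder names
                                                       target_size=(224, 224),
                                                       class_mode='categorical',
                                                       batch_size=128)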

import tensorflow as tf
import keras_preprocessing
from keras_preprocessing import image
from keras_preprocessing.image import ImageDataGenerator

TRAINING_DIR = "Train"
training_datagen = ImageDataGenerator(rescale=1./255,
                                      zoom_range=0.15,
                                      horizontal_flip=True,
                                      fill_mode='nearest')

VALIDATION_DIR = "/content/FIRE-SMOKE-DATASET/Test"
validation_datagen = ImageDataGenerator(rescale=1./255)

train_generator = training_datagen.flow_from_directory(TRAINING_DIR,
                                                       target_size=(224, 224),
                                                       shuffle=True,
                                                       class_mode='categorical',
                                                       batch_size=128)

validation_generator = validation_datagen.flow_from_directory(VALIDATION_DIR,
                                                              target_size=(224, 224),
                                                              class_mode='categorical',
                                                              shuffle=True,
                                                              batch_size=14)

To make training more accurate, we can use data augmentation; two techniques are applied in the code above: horizontal flipping and zooming. Now let's import the InceptionV3 model from the Keras API. We will add layers on top of the InceptionV3 model, as shown below: a global spatial average pooling layer, followed by two dense layers and two dropout layers to ensure that our model does not overfit. Finally, we add a dense softmax layer for the two categories. To begin with, we train only the layers we added, which are randomly initialized, using RMSprop as the optimizer.

from tensorflow.keras.applications.inception_v3 import InceptionV3
from tensorflow.keras.preprocessing import image
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D, Input, Dropout

input_tensor = Input(shape=(224, 224, 3))
base_model = InceptionV3(input_tensor=input_tensor, weights='imagenet', include_top=False)

x = base_model.output
x = GlobalAveragePooling2D()(x)
x = Dense(2048, activation='relu')(x)
x = Dropout(0.25)(x)
x = Dense(1024, activation='relu')(x)
x = Dropout(0.2)(x)
predictions = Dense(2, activation='softmax')(x)

model = Model(inputs=base_model.input, outputs=predictions)

# Train only the newly added top layers; keep the pre-trained base frozen.
for layer in base_model.layers:
    layer.trainable = False

model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['acc'])

history = model.fit(train_generator,
                    steps_per_epoch=14,
                    epochs=20,
                    validation_data=validation_generator,
                    validation_steps=14)

After training the top layers for 20 epochs, we freeze the first 249 layers of the model and train the rest (that is, the top two inception blocks). Here we use SGD as the optimizer with a learning rate of 0.0001.

# To train the top 2 inception blocks, freeze the first 249 layers and unfreeze the rest.
for layer in model.layers[:249]:
    layer.trainable = False
for layer in model.layers[249:]:
    layer.trainable = True

# Recompile the model for these modifications to take effect.
from tensorflow.keras.optimizers import SGD
model.compile(optimizer=SGD(lr=0.0001, momentum=0.9),
              loss='categorical_crossentropy',
              metrics=['acc'])

history = model.fit(train_generator,
                    steps_per_epoch=14,
                    epochs=10,
                    validation_data=validation_generator,
                    validation_steps=14)

After training for another 10 epochs, we obtain a training accuracy of 98.04% and a validation accuracy of 96.43%. The training and validation losses are 0.063 and 0.118, respectively.
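To check where the cut at index 249 falls in this particular network, one can enumerate the layers. A minimal sketch:

# Print the index and name of every layer to locate the top two inception blocks.
for i, layer in enumerate(model.layers):
    print(i, layer.name)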

After these 10 epochs of training, we tested our model on the same images to see whether it guesses correctly. This time our model gets all three predictions right; it is 96% confident that there is no fire in the tricky shadow image. The other two images I used for testing are as follows:

Non-fire images from the dataset referenced below
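Before moving on, the fine-tuned model has to be saved to disk so that the real-time script below can load it. A one-line sketch, assuming the InceptionV3.h5 filename used below:

# Save the fine-tuned model in HDF5 format for later inference.
model.save('InceptionV3.h5')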

Real-time testing

Our model is now ready to be tested in a real-world scenario. Below is sample code that uses OpenCV to access our webcam and predicts whether each frame contains fire. If a frame contains fire, we want to convert that frame to grayscale.

import cv2
import numpy as np
from PIL import Image
import tensorflow as tf
from keras.preprocessing import image

# Load the saved model
model = tf.keras.models.load_model('InceptionV3.h5')
video = cv2.VideoCapture(0)

while True:
    _, frame = video.read()
    # OpenCV captures frames in BGR order; convert to RGB before feeding the model.
    im = Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    # Resize to 224x224 because we trained the model with this image size.
    im = im.resize((224, 224))
    img_array = image.img_to_array(im)
    img_array = np.expand_dims(img_array, axis=0) / 255
    # Call predict on the model to get the class probabilities for the frame.
    probabilities = model.predict(img_array)[0]
    prediction = np.argmax(probabilities)
    # Prediction 0 means there is fire in the frame.
    if prediction == 0:
        frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        print(probabilities[prediction])
    cv2.imshow("Capturing", frame)
    key = cv2.waitKey(1)
    if key == ord('q'):
        break

video.release()
cv2.destroyAllWindows()

The GitHub link for this project is here: https://github.com/DeepQuestAI/Fire-Smoke-Dataset. You can find the dataset and all of the code above there.
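As a final note, the same loop works on a recorded clip by pointing VideoCapture at a file path instead of the webcam index; the filename here is purely hypothetical:

# Hypothetical recording; passing 0 instead selects the default webcam.
video = cv2.VideoCapture('surveillance_clip.mp4')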
