This article explains how to apply machine learning to the Internet of Things using Android Things and TensorFlow. The walkthrough is detailed and should serve as a useful reference for interested readers.
This project explores how machine learning can be applied to the Internet of Things. Specifically, for the IoT platform we will use Android Things, while for the machine learning engine we will use Google TensorFlow.
Android Things is now available in a stable release, Android Things 1.0, which is ready for production use. As you may already know, the Raspberry Pi is one of the platforms that supports Android Things 1.0 for development and prototyping. This tutorial uses Android Things 1.0 on a Raspberry Pi, but you can switch to another supported platform without changing the code.
Machine learning on the Internet of Things is one of the hottest topics. The simplest definition of machine learning is probably the Wikipedia definition:
Machine learning is a field of computer science that gives computers the ability to "learn" (i.e., progressively improve performance on a specific task) from data, without being explicitly programmed.
In other words, after training, a system can predict outcomes it was never specifically programmed for. On the other hand, we all know the concept of IoT and connected devices. One of the most promising areas is applying machine learning to the Internet of Things to build expert systems: systems that can "learn," and use that knowledge to control and manage physical objects. Before diving into the details, you should install Android Things on your device. If this is your first time using Android Things, you can read this tutorial on how to install it.
Here are a few areas where machine learning and IoT applications are generating significant value, just to mention a few interesting ones:
Predictive Maintenance in the Industrial Internet of Things (IIoT)
In the consumer Internet of Things, machine learning can make devices smarter by adapting them to our habits
In this tutorial, we explore how to apply machine learning to the Internet of Things using Android Things and TensorFlow. The basic idea behind this Android Things IoT project is to build a driverless car that can recognize basic shapes (such as arrows) on the road ahead and control its direction accordingly. We've already covered how to build a driverless car with Android Things, so we recommend reading that tutorial before starting this project.
This machine learning and IoT project encompasses the following themes:
How to configure TensorFlow environments using Docker
How to train the TensorFlow system
How to integrate TensorFlow with Android Things
How to use TensorFlow to control driverless cars
This project originated from the Android Things TensorFlow image classifier.
Let's get started!
How to use TensorFlow image recognition
Before starting, you need to install and configure the TensorFlow environment. I'm not an expert in machine learning, so I needed to find something fast and usable so we could build TensorFlow image recognizers. To do this, we use Docker to run a TensorFlow image. Here are the steps:
1. Clone TensorFlow repository:
git clone https://github.com/tensorflow/tensorflow.git
cd /tensorflow
git checkout v1.5.0
2. Create a directory (/tf-data) that will hold all the files used in this project.
3. Run Docker:
docker run -it \
  --volume /tf-data:/tf-data \
  --volume /tensorflow:/tensorflow \
  --workdir /tensorflow \
  tensorflow/tensorflow:1.5.0 bash
This command starts an interactive TensorFlow container and mounts the directories we will use throughout the project.
How to train TensorFlow to recognize images
Before Android Things can recognize images, we need to train the TensorFlow engine to build its models. To do this, we need to collect some images. As mentioned earlier, we need to use arrows to control Android Things driverless cars, so we need to collect at least four types of arrows:
up arrow
down arrow
left arrow
right arrow
To train the system, we need to build a "knowledge base" from these four types of images. Create a directory called images under /tf-data, then create four subdirectories under it with the following names:
up-arrow
down-arrow
left-arrow
right-arrow
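The directory layout above can be created with a short script. Here is a minimal Python sketch; it uses a temporary directory as the base so it is safe to run anywhere, whereas in the tutorial the base would be /tf-data/images:

```python
import os
import tempfile

# In the tutorial the base is /tf-data/images; a temporary
# directory is used here so the sketch is safe to run anywhere.
base = os.path.join(tempfile.mkdtemp(), "images")
categories = ["up-arrow", "down-arrow", "left-arrow", "right-arrow"]

for name in categories:
    # exist_ok lets the script be re-run without errors
    os.makedirs(os.path.join(base, name), exist_ok=True)

created = sorted(os.listdir(base))
print(created)  # the four category subdirectories
```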
Now let's find the pictures. I used Google Image Search, but you can use other sources. To simplify the download process, you can install a Chrome extension that downloads a selected image with a single click. The more images you download, the better the training results will be, although model creation time will increase accordingly.
Open your browser and start looking for pictures of four arrows:
[Image: TensorFlow image classifier]
I downloaded 80 images per category. Don't worry about the image file extensions.
Run the following command once, inside the Docker container, to train on all categories of images:
python /tensorflow/examples/image_retraining/retrain.py \
  --bottleneck_dir=tf_files/bottlenecks \
  --how_many_training_steps=4000 \
  --output_graph=/tf-data/retrained_graph.pb \
  --output_labels=/tf-data/retrained_labels.txt \
  --image_dir=/tf-data/images
This process takes a long time, so be patient. When it finishes, you will find the following two files in the /tf-data directory:
retrained_graph.pb
retrained_labels.txt
The first file contains the model produced by TensorFlow training, while the second contains the labels for our four image categories.
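To see how these two files work together: the labels file has one class name per line, and the model's output vector gives one score per label, in the same order. A minimal pure-Python sketch (the scores below are made-up example values, not real model output):

```python
# retrained_labels.txt has one label per line; the model's output
# vector gives one score per label, in the same order.
labels_text = "up-arrow\ndown-arrow\nleft-arrow\nright-arrow\n"
labels = labels_text.strip().split("\n")

# Hypothetical scores from the retrained model (not real output)
scores = [0.05, 0.02, 0.88, 0.05]

# The predicted class is the label with the highest score
best = labels[scores.index(max(scores))]
print(best)  # left-arrow
```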
How to test TensorFlow models
If you want to test the model to see if it works as expected, you can use the following command:
python /tensorflow/examples/label_image/label_image.py \
  --graph=/tf-data/retrained_graph.pb \
  --labels=/tf-data/retrained_labels.txt \
  --image=/tf-data/images/[category]/[image_name.jpg]

How to optimize the model
Before using our TensorFlow model in Android Things, we need to optimize it:
python /tensorflow/python/tools/optimize_for_inference.py \
  --input=/tf-data/retrained_graph.pb \
  --output=/tf-data/opt_graph.pb \
  --input_names="Mul" \
  --output_names="final_result"
That's our model ready. We'll use it to integrate TensorFlow with Android Things and apply machine learning to our IoT project. The goal is for the Android Things app to recognize arrow images and steer the driverless car accordingly.
If you want to learn more about TensorFlow and how to generate models, check out the official documentation and this tutorial.
How to apply machine learning to the Internet of Things using Android Things and TensorFlow
Once TensorFlow's data model is ready, we move on to the next step: how to integrate Android Things with TensorFlow. To this end, we divide this task into two steps:
Hardware: connect the motors and other components to the Android Things board
Software: implement the application
[Image: Android Things diagram]
Before delving into how to connect peripherals, here is a list of the components used in this Android Things project:
Android Things development board (Raspberry Pi 3)
Raspberry Pi camera
one LED lamp
L298N dual H-bridge motor driver module (connected to control the motors)
A driverless car chassis with two wheels
I won't repeat how to control motors with Android Things, as I've covered it in previous articles.
Below is a schematic diagram:
[Image: schematic diagram of the Android Things wiring]
The camera is not shown in the picture above. The final result is shown below:
[Image: the assembled Android Things driverless car]
Android Things app with TensorFlow
The last step is to implement the Android Things app. To do this, we can reuse the sample code on GitHub called TensorFlow Image Classifier Example. Before you start, clone the GitHub repository so you can modify the source code.
This Android Things app is different from the original app because:
It doesn't use buttons to turn on camera image capture
It uses different models.
It uses a flashing LED to indicate that the camera will take pictures after the LED stops flashing.
When TensorFlow detects an arrow image, it controls the motors accordingly. In addition, it turns the motors on for 5 seconds before the cycle starts again from step 3.
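The control step above boils down to mapping the recognized label to a motor command. Here is a hedged Python sketch of that decision logic; the label names match the training directories, but the command names and confidence threshold are illustrative assumptions, not taken from the original app:

```python
# Map each recognized arrow label to a motor command.
# Command names and the threshold are illustrative assumptions.
COMMANDS = {
    "up-arrow": "forward",
    "down-arrow": "backward",
    "left-arrow": "turn-left",
    "right-arrow": "turn-right",
}

def decide(label, confidence, threshold=0.6):
    """Return a motor command, or None if the model is not confident."""
    if confidence < threshold:
        return None
    return COMMANDS.get(label)

print(decide("left-arrow", 0.88))  # turn-left
print(decide("up-arrow", 0.30))    # None (below threshold)
```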
To make the LED blink, use the following code:
private Handler blinkingHandler = new Handler();
private Runnable blinkingLED = new Runnable() {
    @Override
    public void run() {
        try {
            // If the motor is running, the app does not start the camera
            if (mc.getStatus())
                return;
            Log.d(TAG, "Blinking..");
            // Toggle the LED and schedule the next toggle
            boolean currentValue = mReadyLED.getValue();
            mReadyLED.setValue(!currentValue);
            blinkingHandler.postDelayed(blinkingLED, 1000);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
};