Simple image classification on a Raspberry Pi from the Pi Camera (in real time) using the pre-trained MobileNet v1 model and TensorFlow Lite (output to the terminal)
Short summary:
In this article, I will explain how to create simple image classification on a Raspberry Pi from the Pi Camera (in real time) using the pre-trained MobileNet v1 model and TensorFlow Lite, with the results printed to the terminal. All code is available here and here.
Note before you start:
So, let’s start :)
Hardware preparation:
Software preparation:
1. Preparing VNC (if you use it)
Go to VNC Server Options (right-click the VNC status icon in the top-right corner) and, on the Troubleshooting page, check Enable direct capture mode. This will let you see the camera output via VNC, as in the screenshots:
2. Preparing the Raspberry Pi
For that, run the following commands:
# install tflite_runtime 2.5.0 on your Raspberry Pi
pip3 install https://github.com/google-coral/pycoral/releases/download/release-frogfish/tflite_runtime-2.5.0-cp37-cp37m-linux_armv7l.whl

# clone the TensorFlow examples repo
git clone https://github.com/tensorflow/examples --depth 1
cd examples/lite/examples/image_classification/raspberry_pi

# The script takes an argument specifying where you want to save the model files
bash download.sh /tmp

# remove the stock script; we will download a modified version in the next step
rm -rf classify_picamera.py
3. Download my Python script
I slightly modified this script from here:
sudo wget https://raw.githubusercontent.com/oleksandr-g-rock/image_classification_out_to_terminal/main/classify_picamera.py
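The labels file downloaded by download.sh maps each output index of the model to a human-readable class name, one name per line. A minimal loader for that format might look like this (a sketch of the idea; classify_picamera.py has its own equivalent):

```python
def load_labels(path):
    """Read a labels file with one class name per line, skipping blanks."""
    with open(path, "r", encoding="utf-8") as f:
        return [line.strip() for line in f if line.strip()]

# Usage (path as downloaded by download.sh):
# labels = load_labels("/tmp/labels_mobilenet_quant_v1_224.txt")
```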
4. Run the script
python3 classify_picamera.py --model /tmp/mobilenet_v1_1.0_224_quant.tflite --labels /tmp/labels_mobilenet_quant_v1_224.txt
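Under the hood, the script grabs 224×224 RGB frames from the Pi Camera, runs them through the TFLite interpreter, and prints the best-matching labels. A rough sketch of those two steps (function and variable names are my own, not the script's; the quantized model emits uint8 scores, so dividing by 255 approximates a probability):

```python
import numpy as np

def classify_frame(interpreter, frame):
    """Run one 224x224x3 uint8 RGB frame through a TFLite interpreter."""
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]
    # The model expects a batch dimension: shape (1, 224, 224, 3)
    interpreter.set_tensor(inp["index"], np.expand_dims(frame, axis=0))
    interpreter.invoke()
    return interpreter.get_tensor(out["index"])[0]

def top_k(scores, labels, k=3):
    """Pair the k highest uint8 scores with their label names."""
    order = np.argsort(scores)[::-1][:k]
    return [(labels[i], scores[i] / 255.0) for i in order]
```

In the real script, the frames come from the picamera capture stream in a loop, and each frame's top_k-style results are printed to the terminal.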
5. You should see something like this:
Result:
In this article, we created simple image classification on a Raspberry Pi from the Pi Camera (in real time) using the pre-trained MobileNet v1 model and TensorFlow Lite, with the results printed to the terminal. All code is located here and here.