
Myriad X NCS2


I am running Ubuntu 16.04 with the Neural Compute Stick 2 (NCS2). I created another post that documents my failed attempts.

Download and Install OpenVINO

The following steps are from the OpenVINO Install Guide.


Download the OpenVINO toolkit with a browser.
cd ~/Downloads
tar xvf l_openvino_toolkit_.tgz
cd l_openvino_toolkit_
./install_cv_sdk_dependencies.sh
./install_GUI.sh

Install External Dependencies


cd /opt/intel/openvino/install_dependencies
sudo -E ./install_openvino_dependencies.sh

Define Environment Variables

Add the following to .bashrc.


source /opt/intel/openvino/bin/setupvars.sh
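After .bashrc sources the script, new shells export $INTEL_OPENVINO_DIR, which several commands below rely on. A quick sanity check (my sketch; the variable name is what setupvars.sh defines in this release):

```shell
# Sanity check: setupvars.sh exports INTEL_OPENVINO_DIR, which later
# commands in this guide rely on.
if [ -z "${INTEL_OPENVINO_DIR:-}" ]; then
  echo "OpenVINO environment not set; run: source /opt/intel/openvino/bin/setupvars.sh"
else
  echo "OpenVINO found at $INTEL_OPENVINO_DIR"
fi
```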

Configure Model Optimizer


cd /opt/intel/openvino/deployment_tools/model_optimizer/install_prerequisites
sudo ./install_prerequisites.sh

Verify OpenVINO Install

This verifies OpenVINO was installed correctly. It does not use the NCS2 USB stick.


cd /opt/intel/openvino/deployment_tools/demo
./demo_squeezenet_download_convert_run.sh

This verifies the inference pipeline is working correctly. It does not use the NCS2 USB stick.


cd /opt/intel/openvino/deployment_tools/demo
./demo_security_barrier_camera.sh

Setup USB Rules for NCS2

Load the USB Rules for the NCS2 Myriad X.


sudo usermod -a -G users "$(whoami)"
sudo cp /opt/intel/openvino/inference_engine/external/97-myriad-usbboot.rules /etc/udev/rules.d/
sudo udevadm control --reload-rules
sudo udevadm trigger
sudo ldconfig
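After loading the rules, unplug and replug the stick; it should then enumerate on the USB bus. The grep pattern below is an assumption worth verifying against your own lsusb output (03e7 is the Intel Movidius vendor ID):

```shell
# The NCS2 should now appear on the USB bus. 03e7 is the Movidius
# vendor ID; if nothing matches, replug the stick and check dmesg.
lsusb | grep -i "03e7" || echo "NCS2 not detected - try replugging the stick"
```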

Compile the C++ Examples

Run CMake to generate the Makefiles, then run make to build the executables. The executables land in ~/ncs2/build/intel64/Release.

mkdir -p ~/ncs2/build
cd ~/ncs2/build
cmake -DCMAKE_BUILD_TYPE=Release $INTEL_OPENVINO_DIR/deployment_tools/inference_engine/samples
make

Download Sample Images and Videos


cd ~/ncs2
git clone https://github.com/intel-iot-devkit/sample-videos.git videos

Download Pre-Trained Models


$INTEL_OPENVINO_DIR/deployment_tools/tools/model_downloader/downloader.py -h
$INTEL_OPENVINO_DIR/deployment_tools/tools/model_downloader/downloader.py --all

IMPORTANT UNDOCUMENTED STEP

I needed to copy the Myriad X firmware file (MvNCAPI-ma2480.mvcmd) to /usr/local/lib/mvnc to resolve a runtime error.


sudo cp /opt/intel/openvino_2019.1.094/deployment_tools/inference_engine/lib/intel64/MvNCAPI-ma2480.mvcmd /usr/local/lib/mvnc/MvNCAPI-ma2480.mvcmd
Compile models with FP16

The Model Optimizer, mo.py, defaults to FP32. FP32 is the only format supported by the CPU, but the NCS2 only supports FP16. Let's re-run the Model Optimizer with --data_type FP16, putting the output in ~/ncs2/FP16.


mkdir -p ~/ncs2/FP16
cd ~/ncs2/FP16
export PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=cpp
python3 $INTEL_OPENVINO_DIR/deployment_tools/model_optimizer/mo.py --framework caffe --input_model ~/ncs2/classification/alexnet/caffe/alexnet.caffemodel --data_type FP16
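mo.py writes an IR pair into the directory it is run from: a topology file (.xml) and a weights file (.bin). A quick check that the FP16 conversion produced both (paths assume the folder layout above):

```shell
# mo.py emits alexnet.xml (topology) and alexnet.bin (FP16 weights)
# into the directory it was run from (~/ncs2/FP16 here).
for f in alexnet.xml alexnet.bin; do
  if [ -f ~/ncs2/FP16/"$f" ]; then echo "found $f"; else echo "missing $f"; fi
done
```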

Run the Classification_sample Example


cd ~/ncs2/build/intel64/Release
./classification_sample -i ~/ncs2/images/cat.jpg -m ~/ncs2/FP16/alexnet.xml -nt 5 -d MYRIAD

SUCCESS - The NCS2 is Alive!

The NCS2 is now running with the following results.


jwrr@jwrr:~/ncs2/builds/intel64/Release$ ./classification_sample -i ~/ncs2/sample-images/cat.jpg -m alexnet.xml -nt 5 -d MYRIAD
[ INFO ] InferenceEngine:
	API version ............ 1.6
	Build .................. custom_releases/2019/R1_c9b66a26e4d65bb986bb740e73f58c6e9e84c7c2
[ INFO ] Parsing input parameters
[ INFO ] Files were added: 1
[ INFO ]     /home/jwrr/ncs2/sample-images/cat.jpg
[ INFO ] Loading plugin

	API version ............ 1.6
	Build .................. 22443
	Description ....... myriadPlugin
[ INFO ] Loading network files:
	alexnet.xml
	alexnet.bin
[ INFO ] Preparing input blobs
[ WARNING ] Image is resized from (1000, 667) to (227, 227)
[ INFO ] Batch size is 1
[ INFO ] Preparing output blobs
[ INFO ] Loading model to the plugin
[ INFO ] Starting inference (1 iterations)
[ INFO ] Processing output blobs

Top 5 results:

Image /home/jwrr/ncs2/sample-images/cat.jpg

classid probability
------- -----------
281     0.6025391
285     0.1987305
282     0.1488037
287     0.0465088
289     0.0012226

total inference time: 25.2723787
Average running time of one iteration: 25.2723787 ms
Throughput: 39.5688911 FPS
[ INFO ] Execution successful

Run with Labels

The Python examples have a command-line option to display class names instead of just classids. The C++ examples do not have this option. I wrote a simple program that appends a label. It's on GitHub at github.com/jwrr/ncs2-labeller

The SqueezeNet labels file also works for AlexNet, since both are 1000-class ImageNet classifiers.


cp ~/openvino_models/ir/FP32/classification/squeezenet/1.1/caffe/squeezenet1.1.labels ~/ncs2
./classification_sample -i ~/ncs2/images/cat.jpg -m FP16/alexnet.xml -nt 5 -d MYRIAD | labeller ~/ncs2/squeezenet1.1.labels
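The core idea of the labeller can be sketched in a few lines of awk (this is my sketch, not the actual ncs2-labeller code): line N+1 of the labels file names classid N, so a filter can append names to the classid rows. The two-line labels file and the classids here are made up for the demo:

```shell
# Demo labels file: line N+1 names classid N (here only classids 0 and 1).
printf 'tabby cat\ntiger cat\n' > /tmp/demo.labels

# Append the label to any line whose first field is a known classid.
printf '0 0.60\n1 0.20\n' |
awk 'NR==FNR { label[FNR-1] = $0; next }
     ($1 in label) { print $0 "  " label[$1]; next }
     { print }' /tmp/demo.labels -
```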