
Installation Guide for Intel® Distribution of OpenVINO™ toolkit with Support for FPGA

21 Dec 2018
The Intel® Distribution of OpenVINO™ toolkit quickly deploys applications and solutions that emulate human vision.

This article is in the Product Showcase section for our sponsors at CodeProject. These articles are intended to provide you with information on products and services that we consider useful and of value to developers.


Introduction

The Intel® Distribution of OpenVINO™ toolkit quickly deploys applications and solutions that emulate human vision. Based on Convolutional Neural Networks (CNN), the toolkit extends computer vision (CV) workloads across Intel® hardware, maximizing performance. The Intel Distribution of OpenVINO toolkit includes the Intel® Deep Learning Deployment Toolkit (Intel® DLDT).

The Intel Distribution of OpenVINO toolkit for Linux* with FPGA Support:

  • Enables CNN-based deep learning inference on the edge
  • Supports heterogeneous execution across Intel® CPU, Intel® Integrated Graphics, Intel® FPGA, Intel® Movidius™ Neural Compute Stick and Intel® Neural Compute Stick 2
  • Speeds time-to-market via an easy-to-use library of computer vision functions and pre-optimized kernels
  • Includes optimized calls for computer vision standards including OpenCV*, OpenCL™, and OpenVX*
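
Once the sample applications are built (later in this guide), the same sample binary can target any of these devices through its -d flag. A hedged illustration, with model.xml and car.png as placeholder file names:

    # Hypothetical device-selection examples; model.xml and car.png are placeholders.
    ./classification_sample -i car.png -m model.xml -d CPU              # Intel CPU
    ./classification_sample -i car.png -m model.xml -d GPU              # Intel Integrated Graphics
    ./classification_sample -i car.png -m model.xml -d MYRIAD           # Intel Movidius Neural Compute Stick
    ./classification_sample -i car.png -m model.xml -d HETERO:FPGA,CPU  # FPGA with CPU fallback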

The following components are included in the installation and installed by default:

  • Model Optimizer — Imports, converts, and optimizes models trained in popular frameworks (Caffe*, TensorFlow*, and MXNet*) into a format usable by Intel tools, especially the Inference Engine.
  • Inference Engine — The engine that runs the deep learning model. It includes a set of libraries for easy integration of inference into your applications.
  • Drivers and runtimes for OpenCL™ version 2.1 — Enable OpenCL on the GPU/CPU for Intel® processors.
  • Intel® Media SDK — Offers access to hardware-accelerated video codecs and frame processing.
  • Pre-compiled FPGA bitstream samples — Pre-compiled bitstreams for the Intel® Arria® 10 GX FPGA Development Kit, the Intel® Programmable Acceleration Card with Intel® Arria® 10 GX FPGA, and the Intel® Vision Accelerator Design with an Intel® Arria® 10 FPGA (preview).
  • Intel® FPGA SDK for OpenCL™ software technology — The Intel® FPGA RTE for OpenCL™ provides utilities, host runtime libraries, drivers, and RTE-specific libraries and files.
  • OpenCV version 3.4.2 — OpenCV* community version compiled for Intel® hardware. Includes PVL libraries for computer vision.
  • OpenVX* version 1.1 — Intel's implementation of OpenVX* 1.1, optimized for running on Intel® hardware (CPU, GPU, IPU).
  • Pre-trained models — A set of Intel pre-trained models for learning and demo purposes or for developing deep learning software.
  • Sample Applications — A set of simple console applications demonstrating how to use Intel's Deep Learning Inference Engine in your applications. Additional information about building and running the samples can be found in the Inference Engine Developer Guide.

Development and Target Platform

The development and target platforms have the same requirements, but you can select different components during the installation, based on your intended use.

Hardware

  • 6th-8th Generation Intel® Core™
  • Intel® Xeon® v5 family
  • Intel® Xeon® v6 family
  • Intel® Pentium® processor N4200/5, N3350/5, N3450/5 with Intel® HD Graphics
  • Intel® Movidius™ Neural Compute Stick
  • Intel® Neural Compute Stick 2
  • Intel® Arria® 10 GX FPGA Development Kit or the Intel® Programmable Acceleration Card with Intel® Arria® 10 GX FPGA
  • Intel® Vision Accelerator Design with an Intel® Arria 10 FPGA (preview)

Processor Notes:

  • Processor graphics are not included in all processors. See Product Specifications for information about your processor.
  • A chipset that supports processor graphics is required for Intel® Xeon® processors.

Operating Systems:

  • Ubuntu* 16.04.x long-term support (LTS), 64-bit
  • CentOS* 7.4, 64-bit
  • Yocto Project* Poky Jethro* v2.0.3, 64-bit (for target only)

Overview

This guide provides step-by-step instructions for installing the Intel Distribution of OpenVINO toolkit with FPGA Support, including FPGA initialization and configuration. The following steps are covered:

  1. Configure the Intel® Arria® 10 GX FPGA Development Kit
  2. Program the Intel® Arria® 10 GX FPGA Development Kit
  3. Install the OpenVINO™ Toolkit
  4. Configure the Model Optimizer
  5. Complete the Intel® Arria® 10 GX FPGA Development Kit Setup
  6. Run the Demos to Verify Installation and Compile Samples
  7. Program a Bitstream
  8. Run a Sample
  9. Use the Face Detection Tutorial

Installation Notes:

  • For a first-time installation, use all steps.
  • Use Steps 1 and 2 only after receiving a new FPGA card.
  • Repeat Steps 3-6 when installing a new version of Intel Distribution of OpenVINO toolkit.
  • Use Step 7 when a Neural Network topology used by an OpenVINO™ app changes.

Configure Intel® Arria® 10 GX FPGA Development Kit

To configure the Intel® Arria® 10 GX FPGA Development Kit, follow the guide Configuring the Intel Arria 10 GX FPGA Development Kit for the Intel FPGA SDK for OpenCL. Complete only the specified section, not the rest of that document, and then return to this guide.

Program the Intel® Arria® 10 GX FPGA Development Kit

NOTE: You need to do this only once, after you set up the FPGA board.

  1. Download the Intel® Quartus® software, either the Pro or the Lite edition, depending on the version you want.
  2. Go to the Downloads directory or the directory to which you downloaded the Intel® Quartus® package. This document assumes the software is in Downloads:
    cd ~/Downloads
  3. Use the command for the package you downloaded:
    • Option 1: Intel® Quartus® Pro:
      chmod +x QuartusProProgrammerSetup-17.1.0.240-linux.run
    • Option 2: Intel® Quartus® Lite:
      chmod +x QuartusProgrammerSetup-17.1.0.590-linux.run
  4. Run the Intel® Quartus® Installer
    sudo ./Quartus.<version>.run
  5. Click through the installer to the end. Remove the checkmarks from all boxes at the end of the installation.
    By default, the software is installed under /home/user. We suggest changing this directory to /opt/altera during the installation. A subdirectory is created under this directory, with the name dependent on your version of Intel® Quartus®:
    • Intel® Quartus® Pro: /opt/altera/intelFPGA_pro/17.1
    • Intel® Quartus® Lite: /opt/altera/intelFPGA/17.1
  6. Download fpga_support_files.tgz from the Intel Resource Center. The files in this .tgz are required to ensure your FPGA card and OpenVINO™ work correctly.
  7. Go to the directory where you downloaded fpga_support_files.tgz
  8. Unpack the .tgz file.
    tar -xvzf fpga_support_files.tgz
    A directory named fpga_support_files is created.
  9. Go to the fpga_support_files directory:
    cd fpga_support_files
  10. Copy setup_env.sh to your home directory and source it (an optional sketch for persisting these variables across shells appears after this list):
    cp config/setup_env.sh /home/<user>
    source /home/<user>/setup_env.sh
  11. Configure the FPGA Driver Blacklist:
    sudo mv config/blacklist-altera-cvp.conf /etc/modprobe.d
  12. Copy the USB rules:
    sudo cp config/51-usbblaster.rules /etc/udev/rules.d/
  13. Load the USB rules:
    sudo udevadm control --reload-rules && sudo udevadm trigger
  14. Unplug and replug the Micro-USB (JTAG) cable on the Intel Arria 10 GX board.
  15. (OPTIONAL) Validate that the cable is connected:
    lsusb | grep Altera

    You should see a message similar to:

    Bus 001 Device 005: ID 09fb:6010 Altera
  16. Run jtagconfig:
    jtagconfig

    Your output is similar to:

    USB-BlasterII [1-14]
    02E660DD 10AX115H1(.|E2|ES)/10AX115H2/..
    020A40DD 5M(1270ZF324|2210Z)/EPM2210
  17. Use jtagconfig to slow the clock:
    jtagconfig --setparam 1 JtagClock 6M
  18. (OPTIONAL) Confirm the clock is set to 6M:
    jtagconfig --getparam 1 JtagClock

    You should expect to see the following:

    6M
  19. Go to the config directory:
    cd config
  20. Use Intel® Quartus® software to program top.sof and max5_150.pof, both from fpga_support_files.tgz. The @2 suffix targets the second device in the JTAG chain reported by jtagconfig (the EPM2210 system controller):
    quartus_pgm -c 1 -m JTAG -o "p;max5_150.pof@2"
    quartus_pgm -c 1 -m JTAG -o "p;top.sof"
  21. Restart your computer:
    reboot
  22. Verify that you successfully programmed top.sof:
    sudo lspci | grep Alt

    If successful, you see a response similar to:

    01:00.0 Processing accelerators: Altera Corporation Device 2494 (rev 01)

NOTE: You will finish setting up the card after you install Intel Distribution of OpenVINO toolkit.
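
As mentioned in step 10, the variables exported by setup_env.sh apply only to the current shell session. A minimal optional convenience, assuming setup_env.sh was copied to your home directory as in step 10, is to source it from your shell profile:

    # Optional: re-export the FPGA environment variables in every new shell
    echo "source $HOME/setup_env.sh" >> ~/.bashrc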

Install the Intel Distribution of OpenVINO toolkit for Linux* with FPGA Support Core Components

NOTE: An Internet connection is required to complete these steps.

If you do not have a copy of the Intel Distribution of OpenVINO toolkit package file, download it from Intel® Distribution of OpenVINO™ toolkit for Linux* with FPGA Support.

NOTE: Select the Intel Distribution of OpenVINO toolkit for Linux* with FPGA Support package version from the drop-down menu.

If you have a previous version of the Intel Distribution of OpenVINO toolkit installed, rename or delete these two directories:

  • /home/<user>/inference_engine_samples
  • /home/<user>/openvino_models

Then download and unpack the package:

  1. Open Terminal*, or your preferred console application.
  2. Go to the location where you downloaded the Intel Distribution of OpenVINO toolkit for Linux* with FPGA Support package file.
    If you downloaded the package file to the current user's Downloads directory:
    cd ~/Downloads/
    By default, the file is saved as l_openvino_toolkit_fpga_p_<version>.tgz
  3. Unpack the .tgz file:
    tar -xvzf l_openvino_toolkit_fpga_p_<version>.tgz
    The files are unpacked to l_openvino_toolkit_fpga_p_<version>
  4. Go to the l_openvino_toolkit_fpga_p_<version> directory:
    cd l_openvino_toolkit_fpga_p_<version>
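
Before running an installer, you can sanity-check the unpacked directory for the scripts this guide uses below:

    ls
    # Expect to see, among other files:
    #   install.sh                      - command-line installer
    #   install_GUI.sh                  - GUI installation wizard
    #   install_cv_sdk_dependencies.sh  - external dependencies script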

Installation Notes:

  • Choose an installation option and run the related script with root or regular user privileges. The default installation directory depends on the privileges you choose for the installation.
  • You can use either a GUI installation wizard or command-line instructions. Both install the same software; the command-line version simply asks for input through text prompts instead of clickable options.
  1. Choose your installation option:
    • Option 1: GUI Installation Wizard:
      sudo ./install_GUI.sh
    • Option 2: Command-line Instructions:
      sudo ./install.sh
  2. Follow the instructions on your screen, and watch for informational messages indicating that you must complete additional steps.

  3. If needed, change the components you want to install or the installation directory. Pay attention to the installation directory; you will need this information later.

    • If you used root privileges to run the installer, it installs the Intel Distribution of OpenVINO in this directory: /opt/intel/computer_vision_sdk_fpga_<version>/

      For simplicity, a symbolic link to the latest installation is also created: /opt/intel/computer_vision_sdk/

    • If you used regular user privileges to run the installer, it installs the Intel Distribution of OpenVINO in this directory: /home/<user>/intel/computer_vision_sdk_fpga_<version>/

      For simplicity, a symbolic link to the latest installation is also created: /home/<user>/intel/computer_vision_sdk/


  4. A Complete screen indicates that the first part of the installation is done. Write down the software version number, which begins with the year.

The core components are now installed. Continue to the next section to install additional dependencies.
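
If you want to confirm where the core components landed (assuming a root install to the default location), the convenience symbolic link should point at the versioned directory:

    ls -l /opt/intel/ | grep computer_vision_sdk
    # computer_vision_sdk -> computer_vision_sdk_fpga_<version>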

Install External Software Dependencies

  1. Go to the directory with the unpacked installation package (this guide assumes Downloads):
    cd ~/Downloads/l_openvino_toolkit_fpga_p_<version>
  2. Run a script to download and install the external software dependencies:
    sudo -E ./install_cv_sdk_dependencies.sh
    These dependencies are required for:
    • Intel-optimized OpenCV 3.4
    • Deep Learning Inference Engine
    • Deep Learning Model Optimizer tools

Configure the Model Optimizer

The Model Optimizer is a Python*-based command line tool for importing trained models from popular deep learning frameworks such as Caffe*, TensorFlow*, and Apache MXNet*.

The Model Optimizer is a key component of the Intel Distribution of OpenVINO toolkit. You cannot do inference on your trained model without first running it through the Model Optimizer. When you run a pre-trained model through the Model Optimizer, the output is an Intermediate Representation (IR) of the network. The Intermediate Representation is a pair of files that describe the whole model:

  • .xml: Describes the network topology
  • .bin: Contains the weights and biases binary data
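
As a concrete sketch, converting a Caffe model produces the pair side by side; my_model.caffemodel is a hypothetical input, and the flags match the conversion command used later in this guide:

    # Convert a (hypothetical) Caffe model to IR; requires the Model Optimizer
    # to be configured as described below.
    python3 /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/mo.py \
        --input_model my_model.caffemodel --output_dir ./ir
    ls ./ir
    # Expect: my_model.xml (topology) and my_model.bin (weights and biases)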

More information is available in the Model Optimizer documentation.

Model Optimizer Configuration Steps

You can choose to either configure all supported frameworks at once, or configure one framework at a time. Choose the option that best suits your needs. If you see error messages, make sure you installed all dependencies.

NOTE: If you did not install the Intel Distribution of OpenVINO toolkit to the default install directory, replace /opt/intel/ with the directory in which you installed the software.

NOTE: Configuring the Model Optimizer requires an internet connection.

Option 1: Configure all supported frameworks at the same time

  1. Go to the Model Optimizer prerequisites directory:
    cd /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/install_prerequisites
  2. Run the script to configure the Model Optimizer for Caffe, TensorFlow, MXNet, Kaldi, and ONNX:
    sudo ./install_prerequisites.sh

Option 2: Configure each framework separately

  1. Go to the Model Optimizer prerequisites directory:
    cd /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/install_prerequisites
  2. Run the script for your model framework. You can run more than one script:
    • Caffe*
      sudo ./install_prerequisites_caffe.sh
    • Kaldi*
      sudo ./install_prerequisites_kaldi.sh
    • MXNet*
      sudo ./install_prerequisites_mxnet.sh
    • ONNX*
      sudo ./install_prerequisites_onnx.sh
    • TensorFlow*
      sudo ./install_prerequisites_tf.sh

    The Model Optimizer is configured for one or more frameworks.
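
To confirm the Model Optimizer and its dependencies are usable after configuration, one quick, non-destructive check is to print its help text (the path assumes the default install directory):

    python3 /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/mo.py -h
    # The full option list prints only if the prerequisites installed correctly.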

Complete Intel® Arria® 10 GX FPGA Development Kit Setup

  1. Switch to superuser:
    sudo su
  2. Use the setup_env.sh script from fpga_support_files.tgz to set your environment variables.
    source /home/<user>/Downloads/fpga_support_files/setup_env.sh
  3. Change directory to Downloads/fpga_support_files/
    cd /home/<user>/Downloads/fpga_support_files/
  4. Run the FPGA dependencies script, which allows OpenCL to support Ubuntu* and recent kernels:
    ./install_openvino_fpga_dependencies.sh 

    NOTE: If you installed the 4.14 kernel, you will need to reboot the machine and select the new kernel in the Ubuntu (grub) boot menu. You will also need to redo steps 1 & 2 to set up your environmental variables again.

  5. Install OpenCL devices. Enter Y when prompted to install:
    aocl install
  6. Run aocl diagnose (a scripted variant of this check appears after this list):
    aocl diagnose
    Your screen displays "DIAGNOSTIC_PASSED".
  7. Exit superuser:
    exit
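
As referenced in step 6, here is a small scripted variant of the diagnostic, assuming aocl is on your PATH via setup_env.sh:

    # Fail loudly if the board diagnostic does not pass
    aocl diagnose | grep -q DIAGNOSTIC_PASSED \
        && echo "FPGA board OK" \
        || echo "FPGA diagnostic failed - recheck the setup steps" >&2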

You completed the FPGA installation and configuration.

NOTE: If the system is shut down, you must reconfigure, reprogram and re-set up the Intel® Arria® 10 GX FPGA Development Kit. To avoid redoing these steps, use the AOCL Flash instructions.

You are ready to verify the installation by running the demo scripts.

Run the Demos to Verify the Installation and Compile the Samples

To check the installation, run the demo applications provided with the product on the CPU using the following instructions.

  1. Go to the Inference Engine demo directory:
    cd /opt/intel/computer_vision_sdk/deployment_tools/demo
  2. Run the Image Classification demo script.

    This demo uses the Model Optimizer to convert a SqueezeNet model to the .bin and .xml Intermediate Representation (IR) files. The Inference Engine requires this model conversion so it can use the IR as input and achieve optimum performance on Intel hardware.

    This demo also builds the sample applications included in the package.

    ./demo_squeezenet_download_convert_run.sh

    This demo uses the car.png image in the demo directory. When the demo completes, you will have the label and confidence for the top-10 categories.


  3. Run the Inference Pipeline demo script.

    This demo uses the car.png image in the demo directory to show an inference pipeline using three of the pre-trained models. The demo uses vehicle recognition in which vehicle attributes build on each other to narrow in on a specific attribute.

    First, an object is identified as a vehicle. This identification is used as input to the next model, which identifies specific vehicle attributes, including the license plate. Finally, the attributes identified as the license plate are used as input to the third model, which recognizes specific characters in the license plate.

    This demo also builds the sample applications included in the package.

    ./demo_security_barrier_camera.sh

    When the demo completes, you see the resulting frame displayed with detections rendered as bounding boxes and text labels.

  4. Close the image viewer window to complete the demo.

To learn more about the demo applications, see the README.txt file in /opt/intel/computer_vision_sdk/deployment_tools/demo.

For a description of the OpenVINO™ pre-trained object detection and object recognition models provided with the package, go to /opt/intel/computer_vision_sdk/deployment_tools/intel_models/ and open index.html.
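
On a desktop Ubuntu session, you can open that index directly from the shell (xdg-open is standard on Ubuntu; the path assumes the default install directory):

    xdg-open /opt/intel/computer_vision_sdk/deployment_tools/intel_models/index.html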

In this section, you saw a preview of the OpenVINO™ capabilities.

You have completed all required installation, configuration, and build steps in this guide to use your CPU to work with your trained models.

NOTE: If you are migrating from the Intel® Computer Vision SDK 2017 R3 Beta version to the Intel Distribution of OpenVINO, read this information about porting your applications.

 

Program a Bitstream

The bitstream you program should correspond to the topology you want to deploy. In this section, you program a SqueezeNet bitstream and deploy the classification sample with the SqueezeNet model that you converted with the Model Optimizer in the demo above.

IMPORTANT: Only use bitstreams from the installed version of the OpenVINO™ toolkit. Bitstreams from older versions of the OpenVINO™ toolkit are incompatible with later versions of the OpenVINO™ toolkit. For example, you cannot use the 1-0-1_A10DK_FP16_Generic bitstream, when the OpenVINO™ toolkit supports the 2-0-1_A10DK_FP16_Generic bitstream.

The OpenVINO package includes separate bitstream folders for each FPGA card type. This demo uses a low-precision SqueezeNet bitstream for the classification sample.

For the Intel® Arria® 10 GX FPGA Development Kit, the pre-compiled bitstreams are in /opt/intel/computer_vision_sdk/a10_devkit_bitstreams.

For the Intel® Vision Accelerator Design with Intel® Arria® 10 FPGA, the pre-compiled bitstreams are in /opt/intel/computer_vision_sdk_2018.4.420/bitstreams/a10_vision_design_bitstreams.
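
To see exactly which pre-compiled bitstreams shipped with your installation, and therefore which topologies they cover, list the folder for your card. A quick check, shown here for the Development Kit under the default install path:

    ls /opt/intel/computer_vision_sdk/a10_devkit_bitstreams/
    # Look for a file matching your topology, such as 2-0-1_A10DK_FP11_SqueezeNet.aocx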

To avoid having to reprogram the board after a power down, a bitstream will be programmed to permanent memory on the Intel® Arria® 10 GX FPGA Development Kit. This will take about 20 minutes.

NOTE: The following steps 4-7 need to be done only once for a new Intel Arria 10 Development Kit.

  1. Rerun the environment setup script:
    source /home/<user>/Downloads/fpga_support_files/setup_env.sh
  2. Change to your home directory:
    cd /home/<user>
  3. Program the bitstream for your card:
    • For the Intel® Arria® 10 GX FPGA Development Kit:
      aocl program acl0 /opt/intel/computer_vision_sdk/a10_devkit_bitstreams/2-0-1_A10DK_FP11_SqueezeNet.aocx
    • For the Intel® Vision Accelerator Design with Intel® Arria® 10 FPGA:
      aocl program acl0 /opt/intel/computer_vision_sdk_2018.4.420/bitstreams/a10_vision_design_bitstreams/4-0_A10DK_FP11_SqueezeNet.aocx
  4. Plug the Micro-USB cable into the card and your host system.
  5. Run jtagconfig to ensure that the cable is properly inserted:
    jtagconfig

    Your output is similar to:

    USB-BlasterII [1-14]
    02E660DD 10AX115H1(.|E2|ES)/10AX115H2/..
    020A40DD 5M(1270ZF324|2210Z)/EPM2210
  6. Use jtagconfig to slow the clock:
    jtagconfig --setparam 1 JtagClock 6M
  7. Store the bitstream long term on the board:
    aocl flash acl0 /opt/intel/computer_vision_sdk/a10_devkit_bitstreams/2-0-1_A10DK_FP11_SqueezeNet.aocx

Setup a Neural Network Model for FPGA

In this section, you create an FP16 model suitable for hardware accelerators. For more information, see the FPGA plugin information in the Inference Engine Developer Guide.

  1. Make a directory for the FP16 SqueezeNet Model:
    mkdir /home/<user>/squeezenet1.1_FP16
  2. Go to /home/<user>/squeezenet1.1_FP16:
    cd /home/<user>/squeezenet1.1_FP16
  3. Use the Model Optimizer to convert an FP16 Squeezenet Caffe model into an optimized Intermediate Representation (IR):
    python3 /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/mo.py --input_model /home/<user>/openvino_models/classification/squeezenet/1.1/caffe/squeezenet1.1.caffemodel --data_type FP16 --output_dir .
  4. The squeezenet1.1.labels file contains the classes that ImageNet uses. This file is included so that the inference results show text instead of classification numbers. Copy squeezenet1.1.labels to your optimized model location:
    cp /home/<user>/openvino_models/ir/squeezenet1.1/squeezenet1.1.labels  .
  5. Copy a sample image to the release directory. You will use this with your optimized model:
    sudo cp /opt/intel/computer_vision_sdk/deployment_tools/demo/car.png  ~/inference_engine_samples/intel64/Release
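
Before running the sample, a quick sanity check that the FP16 IR pair and the labels file are where the next section expects them (paths as used above):

    ls -lh /home/<user>/squeezenet1.1_FP16
    # Expect: squeezenet1.1.xml, squeezenet1.1.bin, squeezenet1.1.labels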

Continue to the next section to run a sample application.

Run a Sample Application

1. Go to the samples directory

cd /home/<user>/inference_engine_samples/intel64/Release

2. Use an Inference Engine sample to run a sample application on the CPU:

./classification_sample -i car.png -m ~/openvino_models/ir/squeezenet1.1/squeezenet1.1.xml

Note the CPU throughput in Frames Per Second (FPS). This tells you how quickly the inference is done on the hardware. Now run the inference using the FPGA.

3. Add the -d option to target the FPGA:

./classification_sample -i car.png -m ~/squeezenet1.1_FP16/squeezenet1.1.xml -d HETERO:FPGA,CPU

The throughput on the FPGA is listed and may show a lower FPS. This is due to initialization time. To account for that, the next step increases the number of iterations to get a better sense of the speed at which the FPGA can run inference.

4. Use -ni to increase the number of iterations. This option reduces the impact of initialization:

./classification_sample -i car.png -m ~/squeezenet1.1_FP16/squeezenet1.1.xml -d HETERO:FPGA,CPU -ni 100
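
To see how throughput scales as the initialization cost is amortized, you can sweep -ni values. A small sketch, assuming the sample prints a throughput line containing "fps" (adjust the grep pattern to your build's actual output):

    for N in 1 10 100; do
        echo "--- ni=$N ---"
        ./classification_sample -i car.png -m ~/squeezenet1.1_FP16/squeezenet1.1.xml \
            -d HETERO:FPGA,CPU -ni $N | grep -i fps
    done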

You are done with the OpenVINO installation for FPGA. Continue to the next step to use the Hello World tutorial.

Hello World Face Detection Tutorial

Use the OpenVINO with FPGA Hello World Face Detection Exercise to learn more about how the software and hardware work together.

Additional Resources

Intel® Distribution of OpenVINO™ home page: https://software.intel.com/en-us/openvino-toolkit

Intel® Distribution of OpenVINO™ toolkit documentation: https://software.intel.com/en-us/openvino-toolkit/documentation/featured

Inference Engine FPGA plugin documentation: https://software.intel.com/en-us/articles/OpenVINO-InferEngine#fpga-plugin

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)

