How to install the OpenVINO™ toolkit for Linux* from a Docker image

2025-02-25 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)05/31 Report--

This article explains how to install the OpenVINO™ toolkit for Linux* from a Docker image.

The Intel® Distribution of OpenVINO™ toolkit enables rapid deployment of applications and solutions that emulate human vision. The toolkit is based on convolutional neural networks (CNNs) and extends computer vision (CV) workloads across Intel® hardware to maximize performance. The Intel® Distribution of OpenVINO™ toolkit includes the Intel® Deep Learning Deployment Toolkit.

This guide provides steps for building a Docker* image with the Intel® Distribution of OpenVINO™ toolkit for Linux* and for the subsequent installation.

System requirements

Target operating system

Ubuntu* 18.04 Long-Term Support (LTS), 64-bit

Host operating system

Linux* with a kernel supported by the GPU driver, and the GPU driver installed
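To check a machine against the Ubuntu* 18.04 requirement, you can read /etc/os-release, which is present on most modern Linux distributions; a quick sketch:

```shell
# Print the distribution name and version to compare against the
# Ubuntu* 18.04 LTS target; falls back to "unknown" if /etc/os-release
# is missing or does not define these fields.
. /etc/os-release 2>/dev/null || true
echo "${NAME:-unknown} ${VERSION_ID:-unknown}"
```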

Using Docker* Images for CPU

The kernel reports the same information, such as CPU and memory information, to containers as it does to native applications.

All instruction sets available to host processes, including AVX2, AVX-512, and so on, are available to processes in the container. There are no restrictions.

Docker* does not use virtualization or emulation. A process in Docker* is an ordinary Linux process, isolated from the outside world at the kernel level, so the performance penalty is very small.
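You can see this in practice by inspecting the instruction-set flags the kernel reports; a minimal sketch reading /proc/cpuinfo (Linux-only):

```shell
# List the AVX-family flags the kernel reports. Inside a container the
# same flags appear, because /proc/cpuinfo comes from the same kernel.
flags=$(grep -m1 '^flags' /proc/cpuinfo 2>/dev/null | grep -o 'avx[0-9a-z_]*' | sort -u)
echo "${flags:-no AVX flags reported}"
```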

Build a Docker* image for CPU

To build a Docker image, create a Dockerfile that contains the definition variables and commands necessary to create an installation image for the OpenVINO toolkit.

Use the following sample as a template to create your Dockerfile:


Note: replace the direct link to the Intel® Distribution of OpenVINO™ toolkit package in the package_url argument with the link to the latest version. You can copy the link from the Intel® Distribution of OpenVINO™ toolkit download page after you have completed your registration: on the download page for the Linux version, right-click the offline installer button in your browser and select Copy Link Address.

You can choose which OpenVINO components will be installed by modifying the COMPONENTS parameter in the silent.cfg file. For example, if you are only going to install the CPU runtime for the inference engine, set COMPONENTS=intel-openvino-ie-rt-cpu__x86_64 in silent.cfg.
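As a sketch of that edit (the silent.cfg contents here are a stand-in; the real file ships inside the extracted package):

```shell
# Create a stand-in silent.cfg and switch COMPONENTS to the CPU-only
# inference engine runtime, as described above.
cfg=silent.cfg
printf 'ACCEPT_EULA=accept\nCOMPONENTS=DEFAULTS\n' > "$cfg"
sed -i 's/^COMPONENTS=.*/COMPONENTS=intel-openvino-ie-rt-cpu__x86_64/' "$cfg"
grep '^COMPONENTS=' "$cfg"
# prints COMPONENTS=intel-openvino-ie-rt-cpu__x86_64
```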

For a complete list of installable components, run the ./install.sh --list_components command from the extracted OpenVINO™ toolkit package.

To build a Docker* image for CPU, run the following command:

docker build . -t <image_name> \
--build-arg HTTP_PROXY=<http_proxy> \
--build-arg HTTPS_PROXY=<https_proxy>

Run the Docker* image for CPU

Run the image using the following command:

docker run -it <image_name>

Build a Docker* image for GPU

Prerequisites:

GPU is not available in the container by default; you must attach it to the container first.

The kernel driver must be installed on the host.

The Intel® OpenCL™ runtime package must be included in the container.

In the container, the user must be in the video group.
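The last prerequisite can be checked from inside the container; a small sketch:

```shell
# Report whether the current user is in the "video" group, which is
# required for GPU access in the container.
if id -nG | tr ' ' '\n' | grep -qx video; then
  echo "user is in the video group"
else
  echo "user is NOT in the video group (fix: usermod -aG video <user>)"
fi
```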

Before building the Docker* image for GPU, add the following commands to the CPU Dockerfile example above:

WORKDIR /tmp/opencl
RUN usermod -aG video openvino
RUN apt-get update && \
    apt-get install -y --no-install-recommends ocl-icd-libopencl1 && \
    rm -rf /var/lib/apt/lists/* && \
    curl -L "https://github.com/intel/compute-runtime/releases/download/19.41.14441/intel-gmmlib_19.3.2_amd64.deb" --output "intel-gmmlib_19.3.2_amd64.deb" && \
    curl -L "https://github.com/intel/compute-runtime/releases/download/19.41.14441/intel-igc-core_1.0.2597_amd64.deb" --output "intel-igc-core_1.0.2597_amd64.deb" && \
    curl -L "https://github.com/intel/compute-runtime/releases/download/19.41.14441/intel-igc-opencl_1.0.2597_amd64.deb" --output "intel-igc-opencl_1.0.2597_amd64.deb" && \
    curl -L "https://github.com/intel/compute-runtime/releases/download/19.41.14441/intel-opencl_19.41.14441_amd64.deb" --output "intel-opencl_19.41.14441_amd64.deb" && \
    curl -L "https://github.com/intel/compute-runtime/releases/download/19.41.14441/intel-ocloc_19.04.12237_amd64.deb" --output "intel-ocloc_19.04.12237_amd64.deb" && \
    dpkg -i /tmp/opencl/*.deb && \
    ldconfig && \
    rm -rf /tmp/opencl

To build a Docker* image for GPU, run the following command:

docker build . -t <image_name> \
--build-arg HTTP_PROXY=<http_proxy> \
--build-arg HTTPS_PROXY=<https_proxy>

Run the Docker* image for GPU

To make the GPU available in the container, attach the GPU to the container using the --device /dev/dri option, then run the container:

docker run -it --device /dev/dri <image_name>
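Before running, you can confirm that the render nodes passed through by --device /dev/dri actually exist on the host; a quick sketch:

```shell
# List the DRI device nodes (e.g. card0, renderD128); if none exist,
# the GPU driver is probably not loaded on the host.
dri=$(ls /dev/dri 2>/dev/null)
echo "${dri:-/dev/dri not present: GPU driver may not be loaded}"
```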

Build a Docker* image for Intel® Movidius™ Neural Compute Stick and Intel® Neural Compute Stick 2

Build the Docker* image using the same steps as for CPU.

Run the Docker* image for Intel® Movidius™ Neural Compute Stick and Intel® Neural Compute Stick 2

Known limitations:

The Intel® Movidius™ Neural Compute Stick device changes its VendorID and DeviceID during execution, and each time it appears to the host system as a brand-new device. This means it cannot be mounted as usual.

UDEV events are not forwarded to the container by default, so the container is not aware of the device reconnection.

Only one device per host is supported.

Use one of the following solutions to run the Intel® Movidius™ Neural Compute Stick in a container:

Solution #1:

Remove the UDEV dependency by rebuilding libusb without UDEV support in the Docker* image (add the following commands to the CPU Dockerfile example above):

RUN usermod -aG users openvino
WORKDIR /opt
RUN curl -L https://github.com/libusb/libusb/archive/v1.0.22.zip --output v1.0.22.zip && \
    unzip v1.0.22.zip
WORKDIR /opt/libusb-1.0.22
RUN ./bootstrap.sh && \
    ./configure --disable-udev --enable-shared && \
    make -j4
RUN apt-get update && \
    apt-get install -y --no-install-recommends libusb-1.0-0-dev=2:1.0.21-2 && \
    rm -rf /var/lib/apt/lists/*
WORKDIR /opt/libusb-1.0.22/libusb
RUN /bin/mkdir -p '/usr/local/lib' && \
    /bin/bash ../libtool --mode=install /usr/bin/install -c libusb-1.0.la '/usr/local/lib' && \
    /bin/mkdir -p '/usr/local/include/libusb-1.0' && \
    /usr/bin/install -c -m 644 libusb.h '/usr/local/include/libusb-1.0' && \
    /bin/mkdir -p '/usr/local/lib/pkgconfig'
WORKDIR /opt/libusb-1.0.22/
RUN /usr/bin/install -c -m 644 libusb-1.0.pc '/usr/local/lib/pkgconfig' && \
    ldconfig

Run the Docker* image:

docker run --device-cgroup-rule='c 189:* rmw' -v /dev/bus/usb:/dev/bus/usb <image_name>

Solution #2: run the container in privileged mode, enable the Docker network configuration as host, and mount all devices into the container:

docker run --privileged -v /dev:/dev --network=host <image_name>

Note:

It is not secure.

It conflicts with Kubernetes* and other tools that use orchestration and private networks.
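For background on Solution #1's run command: on Linux, USB character devices use major number 189, which is what the cgroup rule c 189:* rmw permits, and the device nodes live under /dev/bus/usb. A quick check:

```shell
# Show the USB bus directories that the -v /dev/bus/usb mount exposes;
# falls back to a message when no USB device nodes are visible.
usb=$(ls /dev/bus/usb 2>/dev/null)
echo "${usb:-no USB device nodes visible}"
```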

Build a Docker* image for Intel® Vision Accelerator Design with Intel® Movidius™ VPUs

To use the Docker container for inference on Intel® Vision Accelerator Design with Intel® Movidius™ VPUs:

Set up the environment on the host that will be used to run Docker*. The hddldaemon, which is responsible for communication between the HDDL plugin and the board, must be running. To learn how to set up the environment (the OpenVINO package must be pre-installed), see the configuration guide for Intel® Vision Accelerator Design with Intel® Movidius™ VPUs.

Prepare the Docker* image. As a base image, you can use the image from the Build a Docker* image for CPU section. To use it for inference on Intel® Vision Accelerator Design with Intel® Movidius™ VPUs, rebuild the image with the following dependencies added:

RUN apt-get update && \
    apt-get install -y --no-install-recommends \
        libboost-filesystem1.65-dev=1.65.1+dfsg-0ubuntu5 \
        libboost-thread1.65-dev=1.65.1+dfsg-0ubuntu5 \
        libjson-c3=0.12.1-1.3 libxxf86vm-dev=1:1.1.4-1 && \
    rm -rf /var/lib/apt/lists/*

Run hddldaemon on the host in a separate terminal session using the following command:

$HDDL_INSTALL_DIR/hddldaemon

Run the Docker* image for Intel® Vision Accelerator Design with Intel® Movidius™ VPUs

To run the built Docker* image for Intel® Vision Accelerator Design with Intel® Movidius™ VPUs, use the following command:

docker run --device=/dev/ion:/dev/ion -v /var/tmp:/var/tmp -ti <image_name>

Note:

The device /dev/ion must be shared so that ion buffers can be used by the plugin, hddldaemon, and the kernel.

Since independent inference tasks share the same HDDL service communication interface (the service creates mutexes and socket files in /var/tmp), /var/tmp must be mounted and shared among them.

In some cases, the ion driver is not enabled (for example, due to a newer kernel version or an iommu incompatibility); the output of lsmod | grep myd_ion is then empty. To resolve this issue, use the following command:

docker run --rm --net=host -v /var/tmp:/var/tmp --ipc=host -ti <image_name>
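The driver check described above can be scripted; a minimal sketch:

```shell
# If `lsmod | grep myd_ion` prints nothing, the ion driver is not
# loaded and the --net=host --ipc=host fallback above is needed.
if lsmod 2>/dev/null | grep -q myd_ion; then
  echo "myd_ion driver loaded: use the /dev/ion run command"
else
  echo "myd_ion driver missing: use the --net=host --ipc=host fallback"
fi
```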

Note:

When building the Docker image, create a user in the Dockerfile whose UID and GID match those of the user running hddldaemon on the host.

Run the application in Docker as that user.

Alternatively, you can start hddldaemon as root on the host, but this approach is not recommended.
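One way to follow the UID/GID note above is to read the host user's IDs and pass them into the build as build args (the USER_UID/USER_GID argument names here are an assumption, not part of the OpenVINO documentation):

```shell
# Capture the UID/GID of the user who runs hddldaemon on the host so
# the Dockerfile can create a matching user (arg names are hypothetical).
HOST_UID=$(id -u)
HOST_GID=$(id -g)
echo "docker build . --build-arg USER_UID=$HOST_UID --build-arg USER_GID=$HOST_GID"
```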

Build a Docker* image for FPGA

The FPGA card is not available in the container by default, but it can be mounted into the container under the following prerequisites:

The FPGA device is ready to run inference.

The FPGA bitstream has been pushed to the device through PCIe.

To build a Docker* image for FPGA:

Set additional environment variables in the Dockerfile:

ENV CL_CONTEXT_COMPILER_MODE_INTELFPGA=3

ENV DLA_AOCX=/opt/intel/openvino/a10_devkit_bitstreams/2-0-1_RC_FP11_Generic.aocx

ENV PATH=/opt/altera/aocl-pro-rte/aclrte-linux64/bin:$PATH

Install the following UDEV rules:

cat
