
How to limit GPU and CPU usage when running a TensorFlow Python program?


This article mainly explains how to limit GPU and CPU usage when running a TensorFlow Python program. The methods introduced here are simple, fast, and practical, so interested readers may want to take a look.

By default, a running TensorFlow program occupies all of the GPUs it can see, which leaves other users or programs with no GPU available, so it is necessary to limit the program's GPU usage. Moreover, our programs usually do not actually use all of the GPU resources they grab; most of the forcibly occupied memory goes unused and does not make the program run any faster.

Use nvidia-smi to view the GPU usage of the machine. On this machine the GPU model is the K80: there are two K80 cards, and since each K80 contains two GPU chips, four GPUs are available.

1. If you only need to use one or a few specific GPUs, you can run the program with the following command: CUDA_VISIBLE_DEVICES=0,1 python test.py

This makes only GPUs 0 and 1 visible to the program, thereby restricting the program to those two GPUs.

Alternatively, you can specify this in your code:

import os
# Set this before TensorFlow initializes the GPU (ideally before importing tensorflow)
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"
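To confirm which devices the program actually sees, you can list them from inside TensorFlow. Below is a minimal sketch for TensorFlow 1.x; the device_lib module ships with standard TensorFlow 1.x installations, and the "0,1" value is simply the example from above:

import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"  # set before TensorFlow initializes the GPU

from tensorflow.python.client import device_lib

# List every device TensorFlow can see; only GPU 0 and 1 (plus the CPU) should appear.
# Note that this call initializes the visible devices.
for device in device_lib.list_local_devices():
    print(device.name, device.device_type)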

If you want to run the program on the CPU only, without any GPU, you can use the following command, which makes all GPUs invisible:

CUDA_VISIBLE_DEVICES='' python test.py

or

CUDA_VISIBLE_DEVICES="-1" python test.py

2. Let TensorFlow allocate GPU memory only on demand, as shown in the following code:

# Only allocate GPU memory as it is actually needed, instead of grabbing it all up front
gpu_config = tf.ConfigProto()
gpu_config.gpu_options.allow_growth = True
with tf.Session(config=gpu_config) as sess:
    pass  # build and run the graph here
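If you would rather cap the program at a fixed share of GPU memory instead of letting it grow on demand, TensorFlow 1.x also provides per_process_gpu_memory_fraction; the 0.4 below is only an illustrative value:

import tensorflow as tf

# Allow this process to use at most about 40% of each visible GPU's memory
gpu_config = tf.ConfigProto()
gpu_config.gpu_options.per_process_gpu_memory_fraction = 0.4
with tf.Session(config=gpu_config) as sess:
    pass  # build and run the graph here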

That covers limiting GPU usage. But what if we do not use the GPU at all and only use the CPU, and also want to limit how much CPU the program takes?

As mentioned earlier, running CUDA_VISIBLE_DEVICES="" python test.py makes the program use only the CPU. If you also want to use only part of the CPU, you can restrict it with the following code:

# Limit TensorFlow to 8 threads within each operator, 8 threads across independent
# operators, and at most 8 CPU devices
cpu_config = tf.ConfigProto(intra_op_parallelism_threads=8,
                            inter_op_parallelism_threads=8,
                            device_count={'CPU': 8})
with tf.Session(config=cpu_config) as sess:
    pass  # build and run the graph here
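Roughly speaking, intra_op_parallelism_threads controls how many threads a single operator (for example one large matmul) may use, while inter_op_parallelism_threads controls how many independent operators run concurrently. A minimal usage sketch, with thread counts chosen purely for illustration:

import tensorflow as tf

# Assumed limits: 4 threads inside each op, 2 ops in parallel, a single CPU device
cpu_config = tf.ConfigProto(intra_op_parallelism_threads=4,
                            inter_op_parallelism_threads=2,
                            device_count={'CPU': 1})

a = tf.random_normal([1000, 1000])
b = tf.random_normal([1000, 1000])
product = tf.matmul(a, b)  # the intra-op threads parallelize this single matmul

with tf.Session(config=cpu_config) as sess:
    sess.run(product)

Note that these options only limit TensorFlow's own thread pools; pinning the process to specific CPU cores would additionally require an OS-level tool such as taskset.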

At this point, you should have a clearer understanding of how to limit the GPU and CPU usage of a TensorFlow Python program. Now go ahead and try it out in practice!
