This article shows "how to use ulimit under Linux to improve system performance". The content is straightforward and clearly organized, and hopefully it will help resolve your doubts. Let's study "how to use ulimit to improve system performance under Linux" together.
For each user, Linux limits the maximum number of processes. To improve performance, we can set the maximum number of processes for each Linux user according to the resources of the machine, and we can use ulimit to display the current user's process limits. ulimit is a shell built-in command that controls the resources available to programs executed from the shell.
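For example, the current limit on user processes can be displayed directly (a minimal sketch; the number shown is only an assumption and will differ on your machine):
$ ulimit -u       # maximum number of processes available to the current user
31202
$ ulimit -a       # list every limit currently in force for this shell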
How to use ulimit
ulimit manages different kinds of system resources through a set of options. In this section, we explain how these options are used.
The format of the ulimit command is:
$ ulimit [options] [limit]

The options are as follows:
-H: sets a hard resource limit; once set, it cannot be increased. For example, ulimit -Hs 64 limits the hard value of the thread stack size to 64 KB.
-S: sets a soft resource limit; it can be changed after being set, but cannot exceed the hard limit. For example, ulimit -Sn 32 limits the soft value of open file descriptors to 32.
-a: displays all current limit information. For example, ulimit -a lists every current limit.
-c: the maximum size of core files, in blocks. For example, ulimit -c unlimited places no limit on the size of generated core files.
-d: the maximum size of a process's data segment, in KB. For example, ulimit -d unlimited places no limit on the data segment size.
-f: the maximum size of files a process can create, in blocks. For example, ulimit -f 2048 limits created files to at most 2048 blocks.
-l: the maximum size of memory that can be locked, in KB. For example, ulimit -l 32 limits lockable memory to 32 KB.
-m: the maximum resident memory size, in KB. For example, ulimit -m unlimited places no limit on resident memory.
-n: the maximum number of file descriptors that can be opened. For example, ulimit -n 128 allows at most 128 open file descriptors.
-p: the size of the pipe buffer, in KB. For example, ulimit -p 512 limits the pipe buffer to 512 KB.
-s: the thread stack size, in KB. For example, ulimit -s 512 limits the thread stack to 512 KB.
-t: the maximum CPU time, in seconds. For example, ulimit -t unlimited places no limit on CPU time.
-u: the maximum number of processes available to the user. For example, ulimit -u 64 limits the user to at most 64 processes.
-v: the maximum virtual memory available to the process, in KB. For example, ulimit -v 200000 limits available virtual memory to 200000 KB.
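To make the difference between hard and soft limits concrete, here is a minimal sketch (the starting values are assumptions, and the exact wording of the error depends on your shell): a soft limit can be raised again as long as it stays at or below the hard limit, while a non-root user cannot raise the hard limit once it has been lowered.
$ ulimit -Hn          # current hard limit on open file descriptors
4096
$ ulimit -Sn 2048     # lower the soft limit; it can still be raised again, up to the hard limit
$ ulimit -Sn
2048
$ ulimit -n 8192      # asking for more than the hard limit fails for a non-root user
bash: ulimit: open files: cannot modify limit: Operation not permitted
$ ulimit -Hn 2048     # the hard limit itself can be lowered, but not raised back without root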
The scope of ulimit
As a mechanism for restricting resource use, ulimit has a limited scope of effect. Does it apply to a single user, a single process, or the whole system? In fact, ulimit restricts the current shell process and the child processes it spawns. For example, if a user runs two shell terminals at the same time and executes ulimit -f 100 in only one of them, the size of files created in that shell (and its children) is limited accordingly, while the other shell terminal, including the programs running in it, is not affected.
Shell 1
$ ll -h newfile
-rw-r--r--. 1 root root 223K April 23 09:16 newfile
$ ulimit -f 100
$ cat newfile > shell1
File size limit exceeded (core dumped)
$ ll -h shell1
-rw-r--r--. 1 root root 100K April 23 09:20 shell1
Shell 2
$ cat newfile > shell2
$ ll -d shell2
-rw-r--r--. 1 root root 227690 April 23 09:23 shell2
$ ll -h shell2
-rw-r--r--. 1 root root 223K April 23 09:23 shell2
So, is there a way to limit the resources of a specific user? The answer is yes. One approach takes effect only temporarily, for example removing the file size limit set above:
$ ulimit -f unlimited
A more permanent approach is to modify the system configuration file /etc/security/limits.conf. This file can limit resource use not only for a specified user but also for a specified group. Each line of the file describes one limit, in the format <domain> <type> <item> <value>:
domain is the name of a user or group, and * can be used as a wildcard. type takes one of two values, soft or hard. item indicates the resource to be limited; there are many candidates, such as stack, cpu and nofile, which stand for the maximum stack size, CPU time consumed, and the number of open files respectively. Adding a line in this format creates the corresponding limit. For example:
* hard nofile 100
This line limits the maximum number of files that any user can open to 100. We can now restrict resources per process and per user separately, which may seem sufficient, but it is not: many applications need an overall limit on resource use across the whole system, and for that we modify the configuration files under /proc. The /proc directory exposes many parameters describing the current state of the system, such as /proc/sys/kernel/pid_max and /proc/sys/net/ipv4/ip_local_port_range, and you can roughly guess from the file names which resources they limit. Because there are so many files in this directory, they are not introduced one by one here; interested readers can open the relevant files and consult their documentation.
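As a brief sketch of how these system-wide parameters are read and changed (the values shown are assumptions, and a value written this way is lost on reboot unless it is also recorded in /etc/sysctl.conf):
$ cat /proc/sys/kernel/pid_max
32768
$ cat /proc/sys/net/ipv4/ip_local_port_range
32768	60999
$ echo 4194304 > /proc/sys/kernel/pid_max      # writing the file directly requires root
$ sysctl -w kernel.pid_max=4194304             # the equivalent change made via sysctl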
Use ulimit to limit the memory usage of the shell
In this section, we show how to limit the memory used by the shell with the -d, -m and -v options. First, let's run the ls command before any ulimit limit has been set:
$ ll shell1
-rw-rw-r--. 1 root root 227690 April 23 09:16 shell1
You can see that the ls command runs normally at this point. Now set ulimit as follows:
$ ulimit -d 1000 -m 1000 -v 1000

Here is a review of the three options described in the previous section:
-d: sets the maximum size of the data segment, in KB.
-m: sets the maximum amount of resident memory that can be used, in KB.
-v: sets the maximum amount of virtual memory, in KB.
With the ulimit settings above, we have limited the memory available to the current shell to at most 1000 KB. Let's see what happens when we run the ls command again:
$ ll shell1
Segmentation fault (core dumped)

Use ulimit to limit the number of sockets a program can create
Consider a real-world need: a server program in the C/S (client/server) model creates multiple socket ports to respond to requests from multiple client programs. If a large number of clients happen to send requests to the server at the same time, the server needs to create a large number of socket connections. Under Linux, however, everything is a file: ordinary files are files, disks and printers are files, and a socket is also a file. Creating a new socket connection under Linux is in fact creating a new file descriptor. Linux limits the number of file descriptors a single process can open, and by default the maximum is 1024. ulimit has no option that limits the number of sockets directly, but the -n option limits the maximum number of file descriptors a process can open, which achieves the same effect. Below is a look at the file descriptors currently opened by a process:
$ ll /proc/36766/fd
total 0
lrwx------. 1 root root 64 April 23 09:31 0 -> /dev/null
l-wx------. 1 root root 64 April 23 09:31 1 -> /mydata/localhost.localdomain.err
lrwx------. 1 root root 64 April 23 09:31 10 -> /mydata/ib_logfile1
lrwx------. 1 root root 64 April 23 09:31 11 -> socket:[115703]
lrwx------. 1 root root 64 April 23 09:31 12 -> /tmp/ibLxLFBt (deleted)
l-wx------. 1 root root 64 April 23 09:31 13 -> /mydata/mysql-bin.000001
lrwx------. 1 root root 64 April 23 09:31 14 -> socket:[115704]
lrwx------. 1 root root 64 April 23 09:31 15 -> /mydata/mysql/host.MYI
...
Therefore, we can limit the maximum number of file descriptors a process can open with ulimit -n. By default a single process can open 1024 file descriptors, which means it can hold at most 1024 connections at the same time (fewer in practice, because handles to other files are also open). If you start four processes to maintain user connections, the whole application can maintain no more than 4 × 1024 connections at the same time, that is, it supports at most 4 × 1024 users online. Increasing this setting lets the service maintain more TCP connections; conversely, lowering it limits how many sockets can be created.
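The limit that applies to an already running process (rather than to the current shell) can be read from its limits file under /proc; a minimal sketch, reusing PID 36766 from the listing above, with the numbers shown only as assumptions:
$ ulimit -n                              # limit for the current shell and its children
1024
$ grep "open files" /proc/36766/limits   # limit in force for a running process
Max open files            1024                 4096                 files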
If the number of file handles opened by a single process exceeds this limit, the system reports the error "too many open files". How do you know how many file handles the current processes have open? The lsof command can help you find out:
$ lsof -n | awk '{print $2}' | sort | uniq -c | sort -nr | head -n 2
126 7015
93 1831
As the output above shows, process 7015 has 126 file descriptors open. You can see which service process 7015 is with the ps command (this is my example; during your own experiment, check against your own processes).
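For example (a minimal sketch; 7015 is simply the PID taken from the lsof output above, and what it maps to depends entirely on your system):
$ ps -p 7015 -o pid,user,comm    # show the PID, owner and command name of that process
$ ps -ef | grep 7015             # or search the full process list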
Modify the maximum number of files that a single process can open
1) ulimit -n 102400
This takes effect only in the current terminal; after you exit, open files falls back to the default value.
2) Write ulimit -n 102400 into /etc/profile, so that it is executed automatically every time you log in to a terminal.
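A minimal sketch of that step, assuming root privileges and the example value 102400 used above:
$ echo "ulimit -n 102400" >> /etc/profile    # append the setting (requires root) so every login shell applies it
$ source /etc/profile                        # apply it in the current shell without logging out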
3) If you want the modified value of open files to take effect permanently, you must modify the configuration file /etc/security/limits.conf and add:
* soft nofile 1024000
* hard nofile 1024000
root soft nofile 1024000
root hard nofile 1024000

The above is all the content of "how to use ulimit to improve system performance under Linux". Thank you for reading! I hope the content shared here is helpful to you; if you want to learn more, welcome to follow the industry information channel!