
Why use BPF tools to analyze performance?


Performance tools use extended BPF in part because of its programmability: BPF programs can perform custom latency calculations and statistical summaries. These features alone would make for an interesting tool, and many other tracing tools offer them. What makes BPF different is that it is also efficient and safe for production use, and it is built into the Linux kernel. With BPF, you can run these tools in a production environment without adding any new kernel components.

To see how performance tools can use BPF, consider bitehist, an early BPF tool I released that shows the size of disk I/O as a histogram.
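As a rough illustration of the format only (the byte ranges and counts below are made-up placeholder values, not the tool's original output), the log2 histogram such a tool prints looks like this:

         kbytes          : count    distribution
             0 -> 1      : 3        |                                      |
             2 -> 3      : 0        |                                      |
             4 -> 7      : 55       |*****                                 |
             8 -> 15     : 26       |**                                    |
            16 -> 31     : 9        |*                                     |
            32 -> 63     : 4        |                                      |
            64 -> 127    : 0        |                                      |
           128 -> 255    : 412      |**************************************|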

The key change is that the histogram can now be generated in kernel context, which greatly reduces the amount of data copied to user space. This efficiency gain is large enough that tools become practical to run in production where they would otherwise be too expensive. In detail:

Before BPF, the full steps to generate this histogram summary were:

1. In the kernel: enable instrumentation for disk I/O events.

2. In the kernel, for each event: write a record to the perf buffer. If tracepoints are used (preferred), the record contains several fields of metadata about the disk I/O.

3. In user space: periodically copy the buffer containing all events to user space.

4. In user space: iterate over each event, parsing the bytes field from the event metadata; the other fields are ignored.

5. In user space: generate a histogram summary of the bytes field.

Note: these were the best steps available, but they were not the only way. You could install an out-of-tree tracer (such as SystemTap), but depending on your kernel and distribution that could be difficult. You could also modify the kernel code, or develop a custom kprobe module, but both approaches involve challenges and risks. I developed my own workaround, which I called the "hacktogram": it involved creating multiple perf(1) stat counters with range filters, one for each row of the histogram [16]. It was awful.

Steps 2 through 4 have a high performance overhead on busy I/O systems: imagine transferring 10,000 disk I/O trace records per second to a user-space program for parsing and summarizing.
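To make that contrast concrete, here is a minimal sketch of what steps 4 and 5 amount to in user space. It is written in Python with hypothetical records standing in for parsed perf-buffer events (the field names are assumptions for illustration); the point is that every record has already been copied out of the kernel, yet only one field of it is used.

    # Sketch of steps 4-5 of the pre-BPF workflow: each event record has already
    # been copied to user space; only the byte count is extracted and bucketed.
    from collections import Counter

    def log2_bucket(size):
        # Bucket index for a power-of-two histogram (0 stays in bucket 0).
        return size.bit_length()

    def summarize(events):
        hist = Counter()
        for event in events:              # one pass per event, in user space
            size = event["bytes"]         # the only field actually used...
            hist[log2_bucket(size)] += 1  # ...the rest was copied for nothing
        return hist

    # Hypothetical parsed records; real ones carry many more metadata fields.
    sample = [{"bytes": 4096, "dev": "8,0", "sector": 1024},
              {"bytes": 524288, "dev": "8,0", "sector": 2048}]
    print(summarize(sample))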

When BPF is used, the steps for the bitehist program are:

1. In the kernel: enable instrumentation for disk I/O events and attach the custom BPF program defined by bitehist.

2. In the kernel, for each event: run the BPF program. It fetches only the bytes field and saves it into a custom BPF map histogram.

3. In user space: read the BPF map histogram once and print it.

This approach avoids the overhead of copying events to user space and reprocessing them there, and it also avoids copying metadata fields that are never used. The only data copied to user space is the "count" column shown in the output above: an array of numbers.
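For comparison, here is a minimal sketch of the in-kernel approach using the BCC Python front end. It is a simplified stand-in for a bitehist-style tool, not the actual source; it assumes the bcc package is installed, root privileges, and a kernel whose block:block_rq_issue tracepoint exposes a bytes field.

    # In-kernel log2 histogram of block I/O sizes, in the spirit of bitehist.
    # The histogram is filled in kernel context; user space reads the map once.
    from bcc import BPF
    from time import sleep

    bpf_text = """
    BPF_HISTOGRAM(dist);    // log2 histogram map, updated in kernel context

    TRACEPOINT_PROBE(block, block_rq_issue)
    {
        // Keep only the byte count; no other event metadata leaves the kernel.
        dist.increment(bpf_log2l(args->bytes));
        return 0;
    }
    """

    b = BPF(text=bpf_text)
    print("Tracing block I/O sizes... hit Ctrl-C to end.")
    try:
        sleep(9999)
    except KeyboardInterrupt:
        pass

    # The only data copied to user space: the histogram counts, read once here.
    b["dist"].print_log2_hist("bytes")

The per-event work (the tracepoint handler) stays entirely in the kernel; user space wakes up only at the end to read and print the summarized counts.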
