This article introduces fault location (debugging) techniques on Linux. The material falls into two broad categories: online fault location and offline fault location.
1. Fault location (Debugging) scenario classification
To frame the discussion, software fault situations on Linux are divided into two categories.
(1) online fault location
Online fault location (online-debugging) means that when a fault occurs, the operating system environment in which it occurred can still be accessed. The person handling the fault can log in to the system via the console, ssh, etc., and run commands or test programs in a shell to observe, analyze and experiment on the faulty environment in order to locate the cause of the fault.
(2) offline fault location
Offline fault location (offline-debugging) means that when the fault occurs, the operating system environment in which it occurred can no longer be accessed normally, but at the moment of the fault all or part of the system state was collected, either by the system itself or through mechanisms set up in advance. The person handling the fault locates the cause by analyzing this collected state information.
2. Application process failures and how to handle them
A failure in an application process generally does not affect normal use of the operating system (if a bug in application code can crash or hang the kernel, that is a kernel bug), so the flexible methods of online fault location can be used. Application code fails in several typical scenarios:
(1) abnormal termination of the process
Many users assume that an abnormally terminated process leaves nothing to analyze, but in fact abnormal termination always leaves traces. All abnormal process termination is carried out by the kernel delivering a signal to a specific process or process group, and the cases can be grouped by signal type:
- SIGKILL. SIGKILL is the most special case: it cannot be caught, and it does not make the terminated process produce a core file. However, if the SIGKILL was issued by the kernel itself, the kernel records the event in dmesg. Moreover, only a few places in the kernel send SIGKILL, such as oom_kill_process(), so by checking dmesg and reviewing the kernel code paths that use SIGKILL it is usually not difficult to work out the reason.
- SIGQUIT, SIGILL, SIGABRT, SIGBUS, SIGFPE, SIGSEGV. If the default disposition is kept, these signals terminate the process and generate a core file; from the stack trace in the core the user can directly locate the code that triggered the terminating signal. SIGQUIT and SIGABRT are generally raised by the user code itself, and well-written code normally logs the reason. SIGILL, SIGBUS, SIGFPE and SIGSEGV are generated inside the kernel, and a search of the kernel source quickly lists the places where they are raised: SIGILL indicates an illegal instruction, which may come from corrupted code (including code generated at run time, e.g. for floating-point operations) or from physical memory corruption; SIGBUS is often associated with hardware (MCE) memory faults; SIGSEGV is mostly caused by corrupted pointer variables in the application code. For corruption of an application's heap or stack memory, running the application under valgrind will usually point directly at the code responsible for the corruption.
- SIGINT, SIGPIPE, SIGALRM, SIGTERM. With the default disposition these signals terminate the process but do not produce a core file. For these signals it is recommended that users install a handler that records the context at the moment of the problem; a minimal handler sketch is given after this list. The one that is easy to overlook is SIGPIPE: many user programs that use select() or poll() only monitor the read/write descriptors, not the exception descriptors, and keep writing to a socket after the peer has already closed the TCP connection, which raises SIGPIPE.
- Process termination deliberately caused by other code. For example, among a set of cooperating processes, A sends SIGKILL to B without logging it, or B checks some condition and calls exit() directly, also without logging. When the amount of application code is large, locating such behavior by reading the code can be very hard. SystemTap provides a good way to solve this problem: write user-space probes that track the processes' use of system calls such as signal(), exit() and the like.
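To make the handler suggestion above concrete, here is a minimal user-space sketch (illustrative only and not part of the original article; the file name and messages are invented). It installs a logging handler with sigaction() for SIGPIPE and SIGTERM, and also demonstrates that an attempt to catch SIGKILL is rejected by the kernel:

sig_log.c:

#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Log which signal arrived and who sent it, then exit the way the
 * default action would.  Note: snprintf() is not async-signal-safe;
 * it is used here only to keep the sketch short. */
static void log_signal(int sig, siginfo_t *info, void *ctx)
{
    char buf[96];
    int n = snprintf(buf, sizeof(buf), "caught signal %d, sender pid %d\n",
                     sig, (int)info->si_pid);
    write(STDERR_FILENO, buf, n);
    _exit(128 + sig);
}

int main(void)
{
    struct sigaction sa;

    memset(&sa, 0, sizeof(sa));
    sa.sa_sigaction = log_signal;
    sa.sa_flags = SA_SIGINFO;

    sigaction(SIGPIPE, &sa, NULL);
    sigaction(SIGTERM, &sa, NULL);

    /* SIGKILL can never be caught: this call fails with EINVAL. */
    if (sigaction(SIGKILL, &sa, NULL) == -1)
        perror("sigaction(SIGKILL)");

    pause();                     /* wait until a signal arrives */
    return 0;
}

Running the program and sending it SIGTERM with kill prints the sender's pid, which is often enough to identify which cooperating process delivered the fatal signal.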
(2) the process is blocked and the application cannot proceed normally.
This situation is normal for the individual blocked process, but is an anomaly for an application made up of multiple processes. The application cannot make progress because one of the factors that drives it forward has gone wrong, forcing the processes that depend on it to wait. Analyzing this situation requires working out the dependencies between processes or events, as well as the data processing flow. The first step is to use the backtrace function of gdb -p to find the execution path on which each process is blocked, and so determine where each process is in its state machine.
Generally speaking, if you only look at the state of each process in isolation, the processes may appear to be locked in a cycle of mutual dependence, such as (P1 request => P2 processing => P2 response => P1 next request => P2 processing => P2 next response). However, applications usually handle their workload as transactions or sessions, and each transaction has a start point and an end point. We therefore need tools such as strace and tcpdump, together with the application's execution log, to observe where the transaction currently being processed is blocked, and from that to find out why the whole set of state machines has stalled. The causes can be many: the remote end the application communicates with has a problem, the back-end database/directory has a problem, or some process or thread of the application is blocked in an abnormal place or has terminated outright and no longer works normally.
(3) deadlock formed by user process
When user processes deadlock and no memory corruption is involved, it is purely a logic problem of the application itself: the deadlocked processes or threads form a cycle through the locks they hold and wait for. When this happens, the backtrace function of gdb -p shows directly that all the deadlocked processes are blocked in lock-related system calls such as futex(); the paths leading into futex() may come from mutexes, semaphores, condition variables and other locking primitives. By analyzing the call traces it is possible to determine which locks each process may already hold when it reaches that point, and the deadlock cycle can then be broken by changing the program's locking code, which solves the problem. A minimal example that reproduces this pattern is sketched below.
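The following small program (illustrative only, not from the original article; names are invented) reproduces the classic lock-ordering deadlock, assuming a POSIX threads environment:

abba_deadlock.c:

#include <pthread.h>
#include <unistd.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

static void *worker1(void *arg)
{
    pthread_mutex_lock(&lock_a);
    sleep(1);                       /* give worker2 time to take lock_b */
    pthread_mutex_lock(&lock_b);    /* blocks forever: A->B vs. B->A ordering */
    pthread_mutex_unlock(&lock_b);
    pthread_mutex_unlock(&lock_a);
    return NULL;
}

static void *worker2(void *arg)
{
    pthread_mutex_lock(&lock_b);
    sleep(1);
    pthread_mutex_lock(&lock_a);    /* blocks forever as well */
    pthread_mutex_unlock(&lock_a);
    pthread_mutex_unlock(&lock_b);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;

    pthread_create(&t1, NULL, worker1, NULL);
    pthread_create(&t2, NULL, worker2, NULL);
    pthread_join(t1, NULL);         /* never returns */
    pthread_join(t2, NULL);
    return 0;
}

Compile with gcc -pthread, run it, then attach with gdb -p and issue "thread apply all bt": both threads typically show up blocked in the futex-based __lll_lock_wait/pthread_mutex_lock path, which is exactly the picture described above.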
Note that memory faults can also produce what looks like a deadlock. For example, a physical memory fault may directly change the value of a lock variable (say, to -1), so every process using that lock blocks. If the corruption is caused by a bug in the code, valgrind's memory checker can find it; a small demonstration program follows. But if the corruption is caused by a physical memory fault, hardware support is required: high-end PC servers with MCE capability can raise an exception or a report directly when physical memory fails, while on low-end PC servers there is little to do beyond running a tool such as memtest.
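As a quick illustration of how valgrind reports this kind of corruption, here is a tiny program (illustrative only, not from the original article) with a one-byte heap overflow. Running it as "valgrind ./heap_overflow" produces an "Invalid write of size 1" report pointing at the offending source line:

heap_overflow.c:

#include <stdlib.h>
#include <string.h>

int main(void)
{
    char *buf = malloc(16);

    /* Writes one byte past the 16-byte allocation; valgrind's memcheck
     * flags this line as an invalid write. */
    memset(buf, 'A', 17);

    free(buf);
    return 0;
}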
(4) the process stays in the 'D' (uninterruptible) state for a long time and cannot exit
This is mostly caused by something inside the kernel. In many execution paths the kernel puts a process into the 'D' state to ensure that a critical section is not interrupted by external signals, which would otherwise leave kernel data structures in an unwanted, inconsistent state. Normally a process does not stay in the 'D' state for long, because the condition that ends the state (a timer firing, completion of an I/O operation, and so on) soon wakes it up. When a process stays in 'D' for a long time, the key is to find out where its code is blocked. The sysrq 't' key function (for example, echo 't' > /proc/sysrq-trigger) prints the kernel stacks of all sleeping processes in the system, including those in the 'D' state. Once the blocking location in the code is known, the reason the 'D' state cannot be left can be analyzed directly, for example an I/O read that cannot complete because of a hardware or NFS failure.
The reasons behind a stuck 'D' state can be complicated; for example, leaving 'D' may depend on the value of a variable that has been corrupted for some reason. A small kernel-module sketch that parks a thread in the 'D' state, handy for practicing the sysrq-t approach, follows.
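The following minimal kernel module (illustrative only and not part of the original article; the module and symbol names are invented) creates a kernel thread that sleeps uninterruptibly, so it shows up as 'D' in ps and in the sysrq-t stack dump:

dstate_demo.c:

#include <linux/module.h>
#include <linux/kthread.h>
#include <linux/sched.h>
#include <linux/err.h>

static struct task_struct *dstate_task;

static int dstate_fn(void *data)
{
    while (!kthread_should_stop()) {
        /* Sleep uninterruptibly for 60 seconds per loop; during the sleep
         * the thread is in the 'D' state.  (Kernels with the hung-task
         * detector would also warn if a single uninterruptible sleep
         * exceeded the default 120-second threshold.) */
        set_current_state(TASK_UNINTERRUPTIBLE);
        schedule_timeout(60 * HZ);
    }
    return 0;
}

static int __init dstate_init(void)
{
    dstate_task = kthread_run(dstate_fn, NULL, "dstate_demo");
    return IS_ERR(dstate_task) ? PTR_ERR(dstate_task) : 0;
}

static void __exit dstate_exit(void)
{
    kthread_stop(dstate_task);   /* wakes the thread and lets it exit */
}

module_init(dstate_init);
module_exit(dstate_exit);
MODULE_LICENSE("GPL");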
3. Kernel failures and how to handle them
(1) Kernel panic
Panic is the kernel's most direct fault report. When a panic occurs, the kernel has concluded that the fault has left the operating system without the conditions needed to keep running. At that point Linux turns off interrupts and process scheduling on all CPUs, so the system appears completely unresponsive; if a graphical interface was running, no panic information appears on the screen.
The situation we usually encounter, a machine that does not respond and cannot be pinged, is in the vast majority of cases a panic. When a panic occurs, the kernel prints the call stack of the code location that caused it directly on the console. Traditionally users collected this console output over a serial connection, but a serial console is clearly inconvenient. Current Linux distributions such as RHEL5 and RHEL6 use the kdump method to collect panic information: with kdump configured, the system uses kexec at panic time to load and switch to a new kernel (placed in a pre-allocated region of memory) and saves all or part of the old system's memory to disk or over the network.
After the panic data has been collected with kdump, users can examine the code path that led to the panic with the crash tool.
A panic is usually quite direct: its stack information immediately reflects the cause of the bug, such as an MCE failure, an NMI failure, or a data structure allocation failure. Sometimes, however, the panic happens because the kernel actively detected an inconsistency in a key data structure; when and by which code the inconsistency was introduced is not clear, and it may take many test runs together with a tool such as SystemTap to capture it.
(2) deadlock caused by kernel execution path in multiprocessor environment
Kernel deadlocks differ from a panic in that the kernel does not actively suspend itself. When a kernel deadlock occurs, however, the execution paths on two or more CPUs can no longer make progress in kernel mode: they block each other and, because of the spinlocks involved, each occupies 100% of its CPU, which directly or indirectly prevents the processes on all CPUs from being scheduled. Kernel deadlocks come in two forms:
- A deadlock involving interrupt context. In this case interrupts are masked on at least one CPU and the system may stop responding to ping. Because that CPU can no longer take interrupts, its local APIC timer interrupt stops working, so the situation can be detected by the NMI watchdog (which checks a counter maintained by the local APIC timer handler). The NMI watchdog can call panic() from its handler, and the user can then use kdump to collect a memory image, analyze the call stack on each deadlocked CPU, and find the logical cause of the deadlock.
- A deadlock not involving interrupt context. In this case interrupts on every CPU still work, the system responds to ping, and the NMI watchdog is not triggered. In kernels before 2.6.16 there was no good way to handle this situation. In the RHEL5 and RHEL6 kernels, a watchdog kernel thread runs on each CPU; when the deadlock happens, the watchdog thread on the deadlocked CPU cannot be scheduled (even though it is a highest-priority real-time task) and therefore cannot update its counter. The NMI watchdog interrupt on each CPU periodically checks the counter of its CPU, and when it finds the counter has not been updated it can trigger a panic, after which the user can use kdump to collect memory information, analyze the call stack on each deadlocked CPU, and find the logical cause of the deadlock. An illustrative module that provokes exactly this situation is sketched below.
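For experimentation in a disposable test virtual machine only, a deliberately broken module such as the following sketch (illustrative, not from the original article; it assumes an SMP kernel and that the two threads land on different CPUs) reproduces this second kind of deadlock:

abba_spin.c:

#include <linux/module.h>
#include <linux/kthread.h>
#include <linux/spinlock.h>
#include <linux/delay.h>

static DEFINE_SPINLOCK(lock_a);
static DEFINE_SPINLOCK(lock_b);

/* WARNING: loading this module hard-locks the two CPUs that run the
 * threads (interrupts stay enabled, so ping still answers), and the
 * module can never be unloaded.  Use only in a throwaway VM. */

static int spin1_fn(void *data)
{
    spin_lock(&lock_a);
    mdelay(1000);               /* let the other thread take lock_b first */
    spin_lock(&lock_b);         /* spins forever: ABBA ordering */
    spin_unlock(&lock_b);
    spin_unlock(&lock_a);
    return 0;
}

static int spin2_fn(void *data)
{
    spin_lock(&lock_b);
    mdelay(1000);
    spin_lock(&lock_a);         /* spins forever as well */
    spin_unlock(&lock_a);
    spin_unlock(&lock_b);
    return 0;
}

static int __init abba_init(void)
{
    kthread_run(spin1_fn, NULL, "abba_spin1");
    kthread_run(spin2_fn, NULL, "abba_spin2");
    return 0;
}

module_init(abba_init);
MODULE_LICENSE("GPL");

Because both CPUs then spin with preemption disabled, the per-CPU watchdog threads stop running and the soft lockup detector fires; with kernel.softlockup_panic set to 1 (as configured in the kdump deployment described later), this turns into a panic and a vmcore that can be analyzed with crash.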
(3) oops or warning of the kernel
An oops or warning is similar to a panic in that both are actively reported by the kernel when it finds an inconsistency, but the problem behind an oops or warning is much less serious, so the kernel does not need to halt the system while handling it. When generating an oops or warning the kernel usually records a considerable amount of information in dmesg; an oops, in particular, prints at least the call trace of the location where the fault occurred. An oops can also be turned into a panic/kdump for offline debugging simply by setting the panic_on_oops variable under /proc/sys/kernel to 1.
There are many direct causes of an oops or warning, such as a segmentation fault inside the kernel or a counter whose value the kernel finds to be wrong; the deeper cause of the bad access or of the wrong counter value usually cannot be seen from the kernel's dmesg output. The way to attack such problems is to probe with SystemTap: for example, if a counter is found to have the wrong value, use SystemTap to place a probe that records every piece of code that accesses the counter, and then analyze the trace.
Locating the cause of an oops or warning is much harder than locating a memory access failure in an application, because in the kernel it is not possible to track the allocation and use of data structures the way valgrind traces an application.
(4) other (hardware-related) failures
A machine that reboots by itself is a common fault, and it is generally caused by hardware such as a physical memory failure; software faults lead to deadlock or panic, and there is almost no code in the kernel that reboots the machine when it finds a problem. There is a "panic" parameter in the /proc/sys/kernel directory; if it is set to a non-zero value, the kernel reboots the machine that many seconds after a panic occurs. High-end PC servers now try to handle physical memory failures in software: for example, the MCA "HWPoison" method isolates the faulty physical page and kills the process that owns it, and RHEL6 supports HWPoison. On machines without MCA capability, a physical memory failure produces no MCE exception and the hardware mechanism simply reboots the machine.
4. Debugging technologies on RHEL6
(1) kdump fault information collection and analysis with crash
Kdump is used to collect the system memory image when the kernel panics; users can also trigger it on a running system with the 'c' key of sysrq. Kdump performs the dump from a clean, freshly loaded kernel, so it is more reliable than the earlier diskdump and lkcd methods. With kdump, users can choose to dump the data to a local disk or over the network, and can filter the memory to be collected through the parameters of makedumpfile, which reduces the downtime kdump requires.
Crash is the tool for analyzing kdump output; it is essentially a wrapper around gdb. To use crash, first install the kernel-debuginfo package, which provides the symbol information needed to interpret the kernel data collected by kdump. How far crash can take you in locating a problem depends entirely on your ability to understand and analyze kernel code.
Refer to "# > man kdump.conf", "# > man crash" and "# > man makedumpfile" to learn how to use kdump and crash. Debuginfo packages can be downloaded from http://ftp.redhat.com.
(2) using SystemTap to locate bugs
SystemTap is a probe-style debugging tool: it can probe specified locations in kernel or user code, and when execution reaches a specified location, or specified data is accessed, the user-defined probe function runs automatically and can print the call stack, parameter values, variable values and other information at that point. The flexibility in choosing where to probe is SystemTap's great strength. SystemTap probe points can include:
- entry or exit points of all system calls, and of all functions in the kernel and in modules
- custom timer probe points
- any specified code or data access location in the kernel
- any specified code or data access location in a particular user process
- a large number of probe points preset by the various functional subsystems, such as tcp, udp, nfs and signal
SystemTap scripts are written in the stap scripting language; the script code calls the API provided by stap for statistics, printing data and other work. For the API functions provided by the stap language, refer to "# > man stapfuncs"; for more on SystemTap's features and usage, refer to "# > man stap" and "# > man stapprobes".
(3) ftrace
Ftrace is an event tracing mechanism built on the tracepoints infrastructure in the Linux kernel. Its purpose is to give a clear picture of the activity of the system or of a process during a period of time, such as the function call path or the process switching flow. Ftrace can be used to observe the latency of various parts of the system when tuning real-time applications, and recording kernel activity over a period of time can also help locate faults. The function calls made by a process over a period of time can be traced as follows:
# > echo "function" > / sys/kernel/debug/tracing/current_tracer # > echo "xxx" > / sys/kernel/debug/tracing/set_ftrace_pid # > echo 1 > / sys/kernel/debug/tracing/tracing_enabled
Besides tracing function calls, ftrace can also trace process switching, wakeups, block device accesses, kernel data structure allocation and other activities. Note that tracing and profiling are different: tracing records all activity over a period of time, not statistics. The buffer size can be set through buffer_size_kb under /sys/kernel/debug/tracing so that data can be recorded over a longer period.
For details on using ftrace, refer to the kernel source under Documentation/trace/.
(4) oprofile and perf
Oprofile and perf are both tools for profiling (sampling and statistics) the system; they are mainly used to analyze performance problems of the system and of applications. Perf is more powerful and more comprehensive, and its user-space tools are maintained and released together with the kernel source, so users get new perf kernel features promptly. Perf exists only in RHEL6; RHEL5 has no perf. Both oprofile and perf use the hardware counters of modern CPUs for their statistics, but perf can also use the "software counters" and "tracepoints" defined in the kernel, so it can do more. Oprofile samples using the CPU's NMI interrupt, while perf can use both the NMI interrupt and the periodic interrupts provided by the hardware counters. For example, it is easy to use perf to profile the distribution of execution time of a process or of the whole system:
# > perf top -f 1000 -p <pid>
The "software counters" defined by the system and the "tracepoints" of the various subsystems can also be used to analyze a subsystem, for example:
# > perf stat -a -e kmem:mm_page_alloc -e kmem:mm_page_free_direct -e kmem:mm_pagevec_free sleep 6
counts the activity of the kmem subsystem over 6 seconds (this is actually implemented using the tracepoints provided by ftrace).
With perf available, I do not think users need to use oprofile any more.
5. An example of kernel fault location with kdump tool
A) deploy Kdump
The steps to deploy kdump to collect fault information are as follows:
(1) set the relevant kernel startup parameters
Append the following to the kernel line in /boot/grub/menu.lst:
crashkernel=128M@16M nmi_watchdog=1
The crashkernel parameter reserves memory for the kdump kernel; nmi_watchdog=1 activates the NMI watchdog, which is needed to ensure that a panic is triggered even if the fault has disabled interrupts. Reboot the system to make the settings take effect.
(2) set the relevant sysctl kernel parameters
Add the following line to /etc/sysctl.conf:
kernel.softlockup_panic = 1
This setting makes the kernel call panic when a soft lockup occurs, which in turn triggers the kdump action. Run # > sysctl -p to make the setting take effect.
(3) configure /etc/kdump.conf
Add the following lines to /etc/kdump.conf:
ext3 /dev/sdb1
core_collector makedumpfile -c --message-level 7 -d 31 -i /mnt/vmcoreinfo
path /var/crash
default reboot
Here /dev/sdb1 is the file system used to hold the dump file, and the dump file is placed under /var/crash on it; create the /var/crash directory on the /dev/sdb1 partition beforehand. "-d 31" specifies the filtering level of the dump contents, which matters when the dump partition cannot hold the whole memory image or when you do not want the dump to hold up service for too long. The vmcoreinfo file is placed in the root directory of the /dev/sdb1 partition and needs to be generated with the following command:
# > makedumpfile -g /vmcoreinfo -x /usr/lib/debug/lib/modules/2.6.18-128.el5.x86_64/vmlinux
The "vmlinux" file is provided by the kernel-debuginfo package. Before running makedumpfile, you need to install the corresponding kernel kernel-debuginfo and kernel-debuginfo-common packages, which need to be downloaded from http://ftp.redhat.com. "default reboot" is used to tell kdump to restart the system after collecting dump information.
(4) activate kdump
Run # > service kdump start. If it completes successfully, an initrd-2.6.18-128.el5.x86_64kdump.img file is generated in the /boot/ directory; this is the initrd of the kernel that kdump loads, and the work of collecting the dump information is done in the boot environment of this initrd. Looking at the /etc/init.d/kdump script shows that it calls the mkdumprd command to create this initrd for dumping.
B) Test the effectiveness of the kdump deployment
To test whether the kdump deployment works, I wrote the following kernel module. Loading it with insmod creates a kernel thread that after about 10 seconds starts occupying 100% of a CPU and after about 20 seconds triggers kdump. After the system restarts, check the contents of the /var/crash directory on the dump partition to confirm that the vmcore file was generated.
zqfthread.c:

#include <linux/module.h>
#include <linux/kthread.h>
#include <linux/delay.h>
#include <linux/err.h>

MODULE_AUTHOR("frzhang@redhat.com");
MODULE_DESCRIPTION("A module to test....");
MODULE_LICENSE("GPL");

static struct task_struct *zqf_thread;

static int zqfd_thread(void *data)
{
    int i = 0;

    while (!kthread_should_stop()) {
        i++;
        if (i < 10) {
            msleep_interruptible(1000);
            printk("%d seconds\n", i);
        }
        if (i == 1000)      /* from here on the thread spins in kernel mode at 100% CPU */
            i = 11;
    }
    return 0;
}

static int __init zqfinit(void)
{
    struct task_struct *p;

    p = kthread_create(zqfd_thread, NULL, "%s", "zqfd");
    if (!IS_ERR(p)) {
        zqf_thread = p;
        wake_up_process(zqf_thread);    /* actually start it up */
        return 0;
    }
    return -1;
}

static void __exit zqffini(void)
{
    kthread_stop(zqf_thread);
}

module_init(zqfinit);
module_exit(zqffini);

Makefile:

obj-m += zqfthread.o

Building the module:

# > make -C /usr/src/kernels/2.6.32-71.el6.x86_64/ M=`pwd` modules
C) Analyze the vmcore file with the crash tool
The command line for opening a vmcore with crash is shown below. After opening the vmcore, we mainly use the dmesg and bt commands to print the call trace of the execution path that caused the problem, use dis to disassemble the code, and finally match the call trace to the corresponding position in the C source code and carry out the logical analysis there.
# > crash /usr/lib/debug/lib/modules/2.6.18-128.el5.x86_64/vmlinux /boot/System.map-2.6.18-128.el5.x86_64 ./vmcore
6. An example of using kprobe to observe the execution of kernel functions
Kprobe is the in-kernel mechanism on which SystemTap's ability to probe kernel functions is implemented. Because the kernel provides a formal API for kprobe, many kernel programmers find it more convenient to use kprobe directly than to go through SystemTap. The kernel provides three kinds of probe handlers: jprobe, kprobe and kretprobe. The code below uses these probes to observe the result of the call to ip_route_input() made during the execution of the TCP/IP stack's arp_process() function, and also shows how to share parameters between the entry handler and the return handler of the same function probe. The code is as follows:
arp_probe.c:

/* arp_probe.c, by Qianfeng Zhang (frzhang@redhat.com) */

#include <linux/module.h>
#include <linux/kprobes.h>
#include <linux/skbuff.h>
#include <linux/netdevice.h>
#include <linux/inetdevice.h>
#include <net/ip_fib.h>

MODULE_AUTHOR("frzhang@redhat.com");
MODULE_DESCRIPTION("A module to track the call results of ip_route_input() inside arp_process using jprobe and kretprobe");
MODULE_LICENSE("GPL");

static int j_arp_process(struct sk_buff *skb)
{
    struct net_device *dev = skb->dev;
    struct in_device *in_dev;
    int no_addr, rpf;

    in_dev = in_dev_get(dev);
    no_addr = (in_dev->ifa_list == NULL);
    rpf = IN_DEV_RPFILTER(in_dev);
    in_dev_put(in_dev);
    printk("\narp_process() is called with interface device %s, in_dev(no_addr=%d, rpf=%d)\n",
           dev->name, no_addr, rpf);
    jprobe_return();
    return 0;
}

static int j_fib_validate_source(__be32 src, __be32 dst, u8 tos, int oif,
                                 struct net_device *dev, __be32 *spec_dst,
                                 u32 *itag, u32 mark)
{
    printk("fib_validate_source() is called with dst=0x%x, oif=%d\n", dst, oif);
    jprobe_return();
    return 0;
}

static struct jprobe my_jp1 = {
    .entry = j_arp_process,
    .kp.symbol_name = "arp_process"
};

static struct jprobe my_jp2 = {
    .entry = j_fib_validate_source,
    .kp.symbol_name = "fib_validate_source"
};

static int entry_handler(struct kretprobe_instance *ri, struct pt_regs *regs)
{
    printk("Calling: %s()\n", ri->rp->kp.symbol_name);
    return 0;
}

static int return_handler(struct kretprobe_instance *ri, struct pt_regs *regs)
{
    int eax;

    eax = regs->ax & 0xffff;
    printk("Returning: %s() with a return value: 0x%lx(64bit) 0x%x(32bit)\n",
           ri->rp->kp.symbol_name, regs->ax, eax);
    return 0;
}

static int fib_lookup_entry_handler(struct kretprobe_instance *ri, struct pt_regs *regs)
{
    struct fib_result *resp;

    resp = (struct fib_result *)regs->dx;
    printk("Calling: %s()\n", ri->rp->kp.symbol_name);
    *((struct fib_result **)ri->data) = resp;
    return 0;
}

static int fib_lookup_return_handler(struct kretprobe_instance *ri, struct pt_regs *regs)
{
    struct fib_result *resp;
    int eax;

    eax = regs->ax & 0xffff;
    resp = *((struct fib_result **)ri->data);
    printk("Returning: fib_lookup() with a return value: 0x%lx(64bit) 0x%x(32bit), result->type: %d\n",
           regs->ax, eax, resp->type);
    return 0;
}

static struct kretprobe my_rp1 = {
    .handler = return_handler,
    .entry_handler = entry_handler,
    .kp.symbol_name = "ip_route_input_slow"
};

static struct kretprobe my_rp2 = {
    .handler = return_handler,
    .entry_handler = entry_handler,
    .kp.symbol_name = "fib_validate_source"
};

static struct kretprobe my_rp3 = {
    .handler = fib_lookup_return_handler,
    .entry_handler = fib_lookup_entry_handler,
    .kp.symbol_name = "fib_lookup",
    .data_size = sizeof(struct fib_result *)
};

static int __init init_myprobe(void)
{
    int ret;

    printk("RTN_UNICAST is %d\n", RTN_UNICAST);

    if ((ret = register_jprobe(&my_jp1)) < 0) {
        printk("register_jprobe %s failed, returned %d\n", my_jp1.kp.symbol_name, ret);
        return -1;
    }
    if ((ret = register_jprobe(&my_jp2)) < 0) {
        printk("register_jprobe %s failed, returned %d\n", my_jp2.kp.symbol_name, ret);
        unregister_jprobe(&my_jp1);
        return -1;
    }
    if ((ret = register_kretprobe(&my_rp1)) < 0) {
        printk("register_kretprobe %s failed, returned %d\n", my_rp1.kp.symbol_name, ret);
        unregister_jprobe(&my_jp1);
        unregister_jprobe(&my_jp2);
        return -1;
    }
    if ((ret = register_kretprobe(&my_rp2)) < 0) {
        printk("register_kretprobe %s failed, returned %d\n", my_rp2.kp.symbol_name, ret);
        unregister_jprobe(&my_jp1);
        unregister_jprobe(&my_jp2);
        unregister_kretprobe(&my_rp1);
        return -1;
    }
    if ((ret = register_kretprobe(&my_rp3)) < 0) {
        printk("register_kretprobe %s failed, returned %d\n", my_rp3.kp.symbol_name, ret);
        unregister_jprobe(&my_jp1);
        unregister_jprobe(&my_jp2);
        unregister_kretprobe(&my_rp1);
        unregister_kretprobe(&my_rp2);
        return -1;
    }
    return 0;
}

static void __exit rel_myprobe(void)
{
    unregister_jprobe(&my_jp1);
    unregister_jprobe(&my_jp2);
    unregister_kretprobe(&my_rp1);
    unregister_kretprobe(&my_rp2);
    unregister_kretprobe(&my_rp3);
}

module_init(init_myprobe);
module_exit(rel_myprobe);

Makefile:

obj-m += arp_probe.o

Building the module:

# > make -C /usr/src/kernels/2.6.32-71.el6.x86_64/ M=`pwd` modules

This concludes the overview of Linux fault location techniques.