1. Disk cache (Disk Cache)
In operating systems, disk cache technology is used to improve disk access speed: accessing a cached copy is more efficient than accessing the original data. For example, the instructions of a running program are kept both on disk and in physical memory, and are also copied into the CPU's secondary and primary caches.
However, the disk cache is not the small-capacity high-speed memory that sits between the CPU and main memory; it refers to using part of main memory to temporarily hold the data of a series of disk blocks that have been read. Logically, therefore, the disk cache belongs to the disk, but physically it consists of blocks residing in memory.
The disk cache can be organized in two ways: one is to set aside a separate, fixed-size area of memory as the disk cache; the other is to use otherwise unallocated memory as a buffer pool shared by the paging system and disk I/O.
2. Buffer (Buffer)
In the device management subsystem, buffers are introduced mainly to:
Ease the speed mismatch between the CPU and I/O devices.
Reduce the frequency of interrupts to the CPU and relax the restrictions on CPU interrupt response time.
Resolve the mismatch in basic data unit size (i.e., data granularity) between the two sides.
Improve the parallelism between the CPU and I/O devices.
Buffering can be implemented in the following ways:
Hardware buffers: because of their high cost, they are rarely used except at a few critical points.
Software buffers set up in an area of main memory.
According to the number of buffers the operating system sets up, buffering techniques can be classified as follows:
1) single buffer
A buffer is set up between the device and the processor. When data is exchanged between them, the data being transferred is first written into the buffer, and the device or processor that needs the data then takes it from the buffer.
As shown in figure 5-5, consider input from a block device. Assume that the time to read one block of data from the disk into the buffer is T, the time for the operating system to transfer the data from the buffer to the user area is M, and the time for the CPU to process the block is C. Because T and C can proceed in parallel, when T > C the processing time per block is M + T; otherwise it is M + C. So the processing time per block can be expressed as Max(C, T) + M.
Figure 5-5 Single-buffer working diagram
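As a rough numerical illustration (the values are hypothetical, not from the text): if T = 100 μs, C = 60 μs and M = 10 μs, then T > C and each block takes Max(C, T) + M = 100 + 10 = 110 μs; if instead C = 120 μs, each block takes 120 + 10 = 130 μs.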
2) double buffering
Under single buffering, the CPU sits idle during the transfer time M, so double buffering is introduced. Data input by the I/O device is first loaded into buffer 1; once buffer 1 is full, the device begins filling buffer 2. At the same time, the processor can take the data out of buffer 1 and pass it to the user process for processing. When the data in buffer 1 has been processed, if buffer 2 is already full, the processor takes the data out of buffer 2 and passes it to the user process, while the I/O device refills buffer 1. The double-buffering mechanism raises the degree of parallelism between the processor and the I/O device.
As shown in figure 5-6, the time to process one block of data can be roughly regarded as Max(C, T). If C < T, the block device can input data continuously; if C > T, the CPU does not have to wait for the device. For character devices using line input, double buffering lets the user, after entering the first line, continue entering the next line into the second buffer while the CPU executes the commands of the first line. With a single buffer, the next line cannot be entered until the previous line has been completely extracted.
Figure 5-6 Double-buffer working diagram
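A minimal sketch (not from the original text) that turns the two formulas above into code; the values of T, M and C are hypothetical example numbers in microseconds:

```c
#include <stdio.h>

/* Hypothetical example values, in microseconds (not from the original text):
 *   T - time to read one block from the device into a buffer
 *   M - time to move the block from the buffer to the user area
 *   C - CPU time to process the block
 */
static double max2(double a, double b) { return a > b ? a : b; }

int main(void) {
    double T = 100.0;
    double M = 10.0;
    double C = 60.0;

    /* Single buffering: device I/O and CPU processing overlap, the move does not. */
    double single_buffer = max2(C, T) + M;
    /* Double buffering: the move overlaps as well, so the rough per-block time is Max(C, T). */
    double double_buffer = max2(C, T);

    printf("single buffer: %.1f us per block\n", single_buffer);
    printf("double buffer: %.1f us per block\n", double_buffer);
    return 0;
}
```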
If the two machines communicating are each equipped with only a single buffer, as shown in figure 5-7(a), then at any moment data can only be transferred in one direction. For example, data may be sent from machine A to machine B, or from machine B to machine A, but the two directions cannot proceed at the same time. To achieve two-way data transfer, two buffers must be set up in each machine, one for sending and one for receiving, as shown in figure 5-7(b).
Figure 5-7 Buffer arrangement for two-machine communication
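A minimal sketch of the arrangement in figure 5-7(b) (the names and size are illustrative, not from the text): each machine keeps one buffer for sending and one for receiving, so a transfer in one direction never has to wait for the other.

```c
#define BUF_SIZE 1024

/* One communication endpoint per machine: separate send and receive
 * buffers allow data to flow in both directions at the same time. */
struct comm_endpoint {
    char send_buf[BUF_SIZE];  /* data this machine is sending to its peer     */
    char recv_buf[BUF_SIZE];  /* data this machine is receiving from its peer */
};
```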
3) circular buffer
A circular buffer consists of multiple buffers of equal size. Each buffer has a link pointer to the next one, with the last pointing back to the first, so the buffers form a ring.
When a circular buffer is used for input/output, two pointers, in and out, are also needed. Taking input as an example: data is first received from the device into a buffer, and the in pointer points to the first empty buffer into which data can be placed; when the running process needs data, it takes a buffer full of data from the circular buffer and extracts the data from it, and the out pointer points to the first full buffer from which data can be extracted. Output works the other way around.
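A minimal single-threaded sketch of such a circular buffer with in and out pointers (the buffer count, block size and function names are hypothetical, and no blocking or locking is shown; a zero-initialized struct ring starts out empty with in = out = 0):

```c
#include <string.h>

#define NBUF  4      /* number of buffers in the ring (hypothetical) */
#define BSIZE 512    /* size of each buffer in bytes (hypothetical)  */

struct ring {
    char buf[NBUF][BSIZE];
    int  full[NBUF];   /* 1 if buf[i] currently holds un-extracted data          */
    int  in;           /* index of the first empty buffer (device fills here)    */
    int  out;          /* index of the first full buffer (process extracts here) */
};

/* Device side: copy one block into the buffer pointed to by `in`.
 * Returns 0 on success, -1 if every buffer is still full. */
int ring_put(struct ring *r, const char *data) {
    if (r->full[r->in]) return -1;            /* ring is full, device must wait */
    memcpy(r->buf[r->in], data, BSIZE);
    r->full[r->in] = 1;
    r->in = (r->in + 1) % NBUF;               /* advance around the ring */
    return 0;
}

/* Process side: extract one block from the buffer pointed to by `out`.
 * Returns 0 on success, -1 if no buffer holds data yet. */
int ring_get(struct ring *r, char *data) {
    if (!r->full[r->out]) return -1;          /* ring is empty, process must wait */
    memcpy(data, r->buf[r->out], BSIZE);
    r->full[r->out] = 0;
    r->out = (r->out + 1) % NBUF;
    return 0;
}
```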
4) buffer pool
A buffer pool consists of a number of buffers shared by the system. According to their state of use, the buffers form three queues: an empty-buffer queue, a queue of buffers filled with input data (the input queue), and a queue of buffers filled with output data (the output queue). There are also four kinds of working buffers: one for receiving input data, one for extracting input data, one for receiving output data and one for extracting output data, as shown in figure 5-8.
Figure 5-8 Working of the buffer pool
When the input process needs to input data, it takes an empty buffer from the head of the empty-buffer queue and uses it as the working buffer for receiving input; after it has been filled with input data, it is hung on the tail of the input queue. When the computation process needs input data, it obtains a buffer from the input queue as the working buffer for extracting input, extracts the data from it, and then hangs the buffer on the tail of the empty-buffer queue. When the computation process needs to output data, it takes an empty buffer from the head of the empty-buffer queue as the working buffer for receiving output; once it is full of output data, it is hung on the tail of the output queue. When output is to take place, the output process obtains a buffer full of output data from the output queue as the working buffer for extracting output; after the data has been extracted, the buffer is hung on the tail of the empty-buffer queue.
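A minimal single-threaded sketch of the three queues and the hand-offs described above (structure and function names are illustrative, not from the text, and no synchronization between processes is shown):

```c
#include <stddef.h>

#define POOL_SIZE 8     /* number of shared buffers (hypothetical) */
#define BUF_BYTES 512   /* size of each buffer (hypothetical)      */

struct pbuf {
    char data[BUF_BYTES];
    struct pbuf *next;
};

/* The three queues of the buffer pool, each a simple FIFO list. */
struct queue { struct pbuf *head, *tail; };

static struct queue empty_q, input_q, output_q;
static struct pbuf pool[POOL_SIZE];

/* Remove a buffer from the head of a queue (NULL if the queue is empty). */
static struct pbuf *dequeue(struct queue *q) {
    struct pbuf *b = q->head;
    if (b) {
        q->head = b->next;
        if (!q->head) q->tail = NULL;
        b->next = NULL;
    }
    return b;
}

/* Append a buffer to the tail of a queue. */
static void enqueue(struct queue *q, struct pbuf *b) {
    b->next = NULL;
    if (q->tail) q->tail->next = b; else q->head = b;
    q->tail = b;
}

/* All buffers start on the empty-buffer queue. */
void pool_init(void) {
    for (int i = 0; i < POOL_SIZE; i++)
        enqueue(&empty_q, &pool[i]);
}

/* Input process: take an empty buffer, fill it, hang it on the input queue. */
struct pbuf *get_empty_for_input(void)   { return dequeue(&empty_q); }
void put_filled_input(struct pbuf *b)    { enqueue(&input_q, b); }

/* Computation process: take a full input buffer, use it, then return it. */
struct pbuf *get_input_to_extract(void)  { return dequeue(&input_q); }
void release_buffer(struct pbuf *b)      { enqueue(&empty_q, b); }

/* Computation process: take an empty buffer for output, then hang it on the output queue. */
struct pbuf *get_empty_for_output(void)  { return dequeue(&empty_q); }
void put_filled_output(struct pbuf *b)   { enqueue(&output_q, b); }

/* Output process: take a full output buffer, write it out, then release_buffer() it. */
struct pbuf *get_output_to_extract(void) { return dequeue(&output_q); }
```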
3. Comparison of cache and buffer
Cache is high-speed memory that holds copies of data; accessing the cache is faster and more efficient than accessing the original data. The comparison is shown in Table 5-1.
Table 5-1 Comparison of cache and buffer
Similarity: both the cache and the buffer sit between a high-speed device and a low-speed device.
Data stored: the cache stores copies of some of the data on the low-speed device, so whatever is in the cache must also exist on the low-speed device. The buffer stores data being transferred from the low-speed device to the high-speed device (or from the high-speed device to the low-speed device); this data does not necessarily have a backup copy on the low-speed (or high-speed) device, and is later passed on from the buffer to the high-speed (or low-speed) device.
Purpose: the cache stores the data that the high-speed device accesses frequently; if the data the high-speed device wants is not in the cache, it must go to the low-speed device for it. With a buffer, all communication between the high-speed device and the low-speed device goes through the buffer, and the high-speed device never accesses the low-speed device directly.