LRU chains (or LRU lists) and their related algorithms have been modified many times over the years. Although the algorithm has changed, the purpose of the LRU chain remains the same: to help keep frequently accessed buffers in the cache and to help server processes quickly find replaceable buffers. Any time a single list tries to accomplish two tasks, compromises are likely, and the LRU chain is no exception. As you will see, Oracle's current LRU algorithm performs very well, supporting buffer caches of more than 100 GB and meeting the high transaction-processing requirements of telecommunications and government systems.
In Oracle 6, there was only a single LRU chain protected by a single LRU chain latch. In large-scale OLTP systems, DBAs saw heavy contention for this LRU chain latch. Starting with Oracle 7, Oracle alleviated the problem by splitting the single LRU chain into multiple smaller LRU chains, each with an associated LRU chain latch. Every cached buffer is referenced both in the CBC (cache buffer chains) structure and in either an LRU chain or a write list (also known as a dirty list); a buffer is never on a write list and an LRU list at the same time. LRU chains are much longer than cache buffer chains.
It is not a problem for dirty buffers to be kept on an LRU chain. In fact, performance would suffer if dirty buffers could not stay on an LRU chain: one of the goals of the LRU chains is to keep frequently accessed buffers in the cache, and many dirty buffers are also accessed frequently. During a database checkpoint, each dirty buffer is written to disk and becomes a free buffer again.
The hidden parameter _db_block_lru_latches shows the number of LRU chain latches used by the instance. As with the CBCs, each LRU chain latch controls serialization for a set of LRU chains.
SQL> select x.ksppinm NAME, y.ksppstvl VALUE, x.ksppdesc DESCRIB
  2  from x$ksppi x, x$ksppcv y
  3  where x.inst_id = USERENV('Instance')
  4  and y.inst_id = USERENV('Instance')
  5  and x.indx = y.indx
  6  and x.ksppinm like '%&par%';
Enter value for par: _db_block_lru_latches
old   6: and x.ksppinm like '%&par%'
new   6: and x.ksppinm like '%_db_block_lru_latches%'

NAME                        VALUE    DESCRIB
--------------------------  -------  ----------------------
_db_block_lru_latches       640      number of lru latches
LRU Chain Changes over Time
The current LRU chain algorithm is called the touch-count algorithm; it uses a frequency-counting scheme that maintains a counter in each buffer header. It took Oracle years to arrive at this algorithm. Understanding how Oracle's LRU algorithm evolved gives a better understanding of how LRU chains work, what their disadvantages are, and how to make sure they perform as required.
When LRU chains have performance problems, heavy LRU chain latch contention appears. From the perspective of the Oracle algorithm, latch problems usually arise because a server process holds an LRU chain latch for too long while searching for a free buffer. There are many interrelated causes, and likewise many solutions.
Standard LRU Algorithm
Regardless of which LRU algorithm Oracle uses, each LRU chain has a least recently used (LRU) end and a most recently used (MRU) end. Generally speaking, frequently accessed buffer headers are kept near the MRU end, and infrequently accessed buffers drift toward the LRU end.
The standard LRU algorithm is very simple. When a buffer is brought into the cache or accessed (by a query or DML operation), its buffer header is placed at the MRU end of the session's associated LRU chain (each session is associated with an LRU chain). The idea is that a frequently accessed buffer will be touched repeatedly and repeatedly moved back to the MRU end of the LRU chain. Moving a buffer to the MRU end of the LRU chain is often called buffer promotion. If a buffer is not accessed frequently, other buffers will be promoted or inserted ahead of it, and the infrequently accessed buffer will drift toward the LRU end of the LRU chain.
Picture a server process working near the LRU end of each LRU chain, looking for an infrequently accessed buffer that can be replaced by a block just read from disk. Suppose an LRU chain is only eight buffer headers long and a full table scan reads eight blocks: each block is read into the Oracle cache and its buffer header placed on the LRU chain. With the standard LRU algorithm there is only one LRU chain, so the entire chain can be displaced by the blocks of the full table scan. Just like that, the frequently accessed buffers it contained have been replaced. Users will certainly notice the performance change, and the IO subsystem takes a hit as well. As database sizes continued to grow, Oracle obviously had to improve, so the LRU algorithm was modified.
Modified LRU Algorithm
Oracle's well-known LRU algorithm modification appeared in Oracle 6. It was a major achievement, and Oracle's developers should be proud of their advanced buffer cache algorithms; it did solve the key problem of the standard LRU algorithm.
The only difference between the modified LRU algorithm and the standard LRU algorithm is that a window of a few buffers is created at the LRU end of the LRU chain, through which full table scan blocks are cycled. The window is only a few buffers in size (for example, 4) and can be changed with the hidden parameter _small_table_threshold. This ensures that no matter how large the table is, a full table scan has little impact on the rest of the cache.
Oracle's modified LRU algorithm creates a window of buffer headers through which all full table scan (FTS) buffers pass as they are read into the buffer cache. This ensures that the frequently accessed buffers on the MRU side of the LRU chain are not replaced.
Like all algorithms, the modified LRU algorithm has limitations, and for years they caused no problems. However, once customers started using Oracle to build large data warehouse applications, two significant problems arose:
. Large data warehouses have many indexes, and when those indexes are range scanned heavily, thousands of index leaf blocks must be read into the cache. This problem persisted until Oracle 8i: if an index leaf block is not in the buffer cache, Oracle issues a single-block IO request (db file sequential read) to bring it in. Surprisingly, because this is not a multi-block IO request, the index leaf buffer is inserted at the MRU end of the LRU chain, which destroys the carefully built-up cache, now increasingly filled with index leaf block buffers.
. When the corresponding data blocks are requested (based on those index leaf blocks), they are also read from the IO subsystem one block at a time (db file sequential read), so once again these data blocks are placed at the MRU end of the LRU chain. As Oracle systems grew, the usefulness of the buffer cache diminished.
Oracle's Touch-Count Algorithm
In Oracle 8.1.5, Oracle introduced a completely reworked LRU chain algorithm that all but eliminated LRU chain latch contention problems. The change was not documented; it was discovered because of the new hidden parameters _db_percent_hot_default and _db_aging_cool_count. When new parameters appear or old parameters are discarded, the algorithm must have been modified. Oracle implemented what is commonly known in computer science as a frequency-counting scheme.
SQL> select x.ksppinm NAME, y.ksppstvl VALUE, x.ksppdesc DESCRIB
  2  from x$ksppi x, x$ksppcv y
  3  where x.inst_id = USERENV('Instance')
  4  and y.inst_id = USERENV('Instance')
  5  and x.indx = y.indx
  6  and x.ksppinm in ('_db_percent_hot_default', '_db_aging_cool_count');

NAME                        VALUE    DESCRIB
--------------------------  -------  ----------------------------------------------
_db_percent_hot_default     50       Percent of default buffer pool considered hot
_db_aging_cool_count        1        Touch count set when buffer cooled
As you might expect, the general approach is to increment a counter each time a buffer header is touched. Buffer headers that are accessed more often get higher touch counts, and because they really are visited more frequently, their buffers are retained in the buffer cache. Oracle's touch-count algorithm decides whether a buffer header is frequently accessed based on how many times it has been touched. Note that the full table scan (FTS) window concept is no longer needed and has been removed. There are three key pieces to the touch-count algorithm: midpoint insertion, touch count incrementation, and buffer promotion.
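A quick way to see touch counts in action is to group the tch column of x$bh by segment. This is a minimal sketch (run as SYS), assuming access to x$bh and dba_objects; the segment name EMP is only a placeholder:

select o.object_name, bh.tch touch_count, count(*) buffers
from   x$bh bh, dba_objects o
where  bh.obj = o.data_object_id      -- x$bh.obj maps to dba_objects.data_object_id
and    o.object_name = 'EMP'          -- hypothetical segment name
group  by o.object_name, bh.tch
order  by bh.tch;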
Midpoint Insertion
The most fundamental departure from the modified LRU algorithm is midpoint insertion. Each LRU chain is divided into a hot region and a cold region. When a block is read from disk and a free buffer is found, the block and its header replace the contents of that buffer and buffer header, and the buffer header is then moved to the midpoint of the LRU chain. It makes no difference whether the read was a single-block read, a multi-block read, a fast full index scan, or a full table scan: the buffer header is inserted at the midpoint of the LRU chain, not at the MRU end. This ensures that the LRU chain is not compromised when a large number of blocks from a single object are read into the buffer cache.
By default, the hot region and the cold region are split equally, so the midpoint really is in the middle. This can be changed with the hidden parameter _db_percent_hot_default.
As other buffer headers are inserted at the midpoint or promoted, an existing buffer header naturally migrates from the hot region of the LRU chain toward the cold region. After a buffer header has been inserted, the only way for it to stay in the cache for a long time is to be promoted repeatedly.
Because the window scheme used by the modified LRU algorithm is no longer needed, the hidden parameter _small_table_threshold was abandoned. In Oracle 11g, however, it came back into use, but for a different purpose. Starting with Oracle 11g, the _small_table_threshold parameter is the threshold at which a server process starts performing direct path reads. Direct path reads can improve performance because blocks are read from disk directly into the server process's PGA memory without being placed into the buffer cache. However, this is a rather selfish read and may actually degrade overall performance, because other server processes cannot benefit from the IO.
SQL> select x.ksppinm NAME, y.ksppstvl VALUE, x.ksppdesc DESCRIB
  2  from x$ksppi x, x$ksppcv y
  3  where x.inst_id = USERENV('Instance')
  4  and y.inst_id = USERENV('Instance')
  5  and x.indx = y.indx
  6  and x.ksppinm like '%&par%';
Enter value for par: _small_table_threshold
old   6: and x.ksppinm like '%&par%'
new   6: and x.ksppinm like '%_small_table_threshold%'

NAME                        VALUE    DESCRIB
--------------------------  -------  -----------------------------------------------------
_small_table_threshold      60283    lower threshold level of table size for direct reads
Suppose you are a server process that must retrieve a row stored in a particular data block. Based on the SQL statement and the data dictionary, you know the block's file number and block number. If all you care about is query speed, you hope the block is already in the buffer cache. To check whether it is, you need the buffer's memory address in the buffer cache, which is stored in its buffer header.
To find the buffer header, you must go through the CBC structure. You hash the file number and block number, which points you to a hash bucket. From the hash bucket you identify the relevant CBC latch and attempt to acquire it. After a few spins you get the latch, so you start your serialized CBC search. If the first buffer header is not the one you want, and unfortunately there is no second buffer header on this chain, you know the buffer is not currently in the buffer cache.
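For reference, the child latch protecting a given buffer can be located by joining x$bh.hladdr to v$latch_children. This is a hedged sketch (run as SYS); the file and block numbers are placeholders:

select b.file#, b.dbablk, b.tch, lc.gets, lc.misses, lc.sleeps
from   x$bh b, v$latch_children lc
where  b.hladdr = lc.addr              -- hladdr is the address of the protecting child latch
and    lc.name  = 'cache buffers chains'
and    b.file#  = 4                    -- hypothetical file# and block#
and    b.dbablk = 123;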
You release the CBC latch and make an operating system call to read the block you need. While you wait, you post the db file sequential read wait event. The block is eventually returned by the operating system and placed in your PGA. Because a direct path read was not used, the block must be placed into a buffer in the buffer cache and the buffer header updated before you or any other server process can access it.
You need a free buffer to hold the block you just read, so you head for the LRU end of an LRU chain. Before you can start scanning the LRU chain, you must acquire the associated LRU chain latch. After spinning, perhaps sleeping and posting the latch: cache buffers lru chain wait event, and consuming CPU, you finally get the latch. Starting from the LRU end of the LRU chain, you check whether the first buffer header belongs to an infrequently accessed free buffer; it does, so you can begin the replacement. You immediately pin the buffer header. From the buffer header you get the buffer's memory address in the buffer cache, replace the free buffer's contents with the block still sitting in your PGA, and make whatever changes the buffer header requires. You maintain the LRU chain by moving the buffer header to the LRU chain's midpoint, release the LRU chain latch, and unpin the buffer header. Now any server or background process, including you, can access the buffer, and all of this happens in an instant.
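To see whether sessions are currently stuck at this step, a quick check of v$session_wait for the LRU chain latch event can be run on 10g and later. A hedged sketch:

select sid, event, p1 latch_address, seconds_in_wait
from   v$session_wait
where  event = 'latch: cache buffers lru chain';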
Touch Count Incrementation
The idea is that every time a buffer header is touched, its touch count is incremented. In practice that is not quite true: by default, a buffer header's touch count can be incremented at most once every 3 seconds. This ensures that buffer activity must be sustained for more than a few seconds before the buffer counts as frequently accessed.
When a buffer is first brought into the buffer cache, its touch count is set to 0. If the buffer is then touched repeatedly within a short period, the extra touches are simply not counted.
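The 3-second window is governed by the hidden parameter _db_aging_touch_time (default 3 seconds), which can be inspected with the same style of query used above. A sketch, run as SYS:

select x.ksppinm NAME, y.ksppstvl VALUE, x.ksppdesc DESCRIB
from   x$ksppi x, x$ksppcv y
where  x.inst_id = USERENV('Instance')
and    y.inst_id = USERENV('Instance')
and    x.indx = y.indx
and    x.ksppinm = '_db_aging_touch_time';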
Oracle also allows touch count increments to be lost. No latch is acquired (the best way to eliminate latch contention), and Oracle does not even pin the buffer header. Without serialization, two server processes can increment the buffer header's touch count to the same value.
Suppose server process S100 reads a buffer header's touch count of 13 at time T0 and starts incrementing it to 14. Server process S200 then reads the same buffer header's touch count at time T1; because S100 has not yet completed its increment, the touch count still reads 13, so S200 also starts incrementing it from 13 to 14. At time T2, S100 sets the touch count to 14, and at T3, S200 sets it to 14 as well. Was a touch count increment lost? Yes, but no structure was damaged, and the touch count was indeed incremented, just not twice. If the buffer really is accessed frequently, it will be touched again. What this fuzzy implementation saves is CPU consumption and the amount of kernel code that has to run.
Buffer Promotion
Notice that nothing says a buffer is promoted to the MRU end of the LRU chain every time it is touched. Touching a buffer header and promoting a buffer header are now two separate operations. A buffer is considered for promotion only when it is also being considered for replacement. Both server processes and the database writer process can promote buffer headers, but only a server process will actually replace a buffer and its associated buffer header as a result of a physical block read. It makes no sense for the database writer to perform a replacement, because it has nothing to put into the buffer.
After a server process reads a block from disk, it must find an infrequently accessed free buffer to hold the block. The server process acquires the appropriate LRU chain latch and starts scanning buffer headers from the LRU end of the LRU chain. (Remember that buffer headers, not buffers, are linked on the LRU chain.) If the server process encounters a free buffer header, it checks whether it is frequently accessed. If it is, the server process promotes the buffer header and continues scanning. If the free buffer header is not frequently accessed, the server process replaces the buffer's contents with the block read from disk, updates the buffer header, and moves the buffer header to the LRU chain's midpoint. Note that the CBC structure does not need to be updated, because the buffer itself has not moved; only the buffer header's position on the LRU chain changes. If the server process encounters a dirty buffer header, it checks whether it is frequently accessed. A frequently accessed dirty buffer header is promoted and the scan continues; an infrequently accessed one is moved to the write list. If the server process encounters a pinned buffer header, it simply keeps scanning: pinned buffers are off-limits.
Promotion decisions come down to a minimum touch count of 2 (_db_aging_hot_criteria). So when a server process or the database writer appears to ask, "What is this buffer's touch count?", it is really asking, "Is this buffer's touch count greater than or equal to _db_aging_hot_criteria?" If a buffer is touched at least every few seconds, it will be kept in the cache; if not, it will soon be replaced.
When a frequently accessed buffer is promoted, life actually gets harder for it: as part of the promotion, its touch count is reset to 0 (_db_aging_stay_count). This happens unless the buffer is a segment header or a consistent read (CR) buffer.
SQL> select x.ksppinm NAME, y.ksppstvl VALUE, x.ksppdesc DESCRIB
  2  from x$ksppi x, x$ksppcv y
  3  where x.inst_id = USERENV('Instance')
  4  and y.inst_id = USERENV('Instance')
  5  and x.indx = y.indx
  6  and x.ksppinm in ('_db_aging_stay_count');

NAME                        VALUE    DESCRIB
--------------------------  -------  --------------------------------------------------------------
_db_aging_stay_count        0        Touch count set when buffer moved to head of replacement list
The database writer process can also promote frequently accessed buffer headers. When the database writer is sleeping, it is woken every 3 seconds. Each database writer process has its own write list (dirty list) and is associated with one or more LRU chains. When a database writer wakes up, it checks whether its write list is long enough to justify an IO write. If the database writer decides it needs to build up its write list, it scans its LRU chains for infrequently accessed dirty buffers. Much like a server process looking for a free buffer, the database writer acquires the relevant LRU chain latch, starts at the LRU end of the LRU chain, and checks whether each buffer header is dirty and infrequently accessed. If it finds an infrequently accessed dirty buffer, it moves the buffer header from the LRU chain to its write list (remember, the buffer header is still in the CBC structure, so other processes can still find it). If the write list is still not long enough to warrant an IO write, the database writer keeps scanning its LRU chains for more infrequently accessed dirty buffer headers.
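A rough sense of how many cached buffers are dirty versus free can be had from v$bh. This is a hedged sketch; on a large buffer cache the query itself is not cheap:

select status, dirty, count(*) buffers
from   v$bh
group  by status, dirty
order  by status, dirty;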
Hot Region to Cold Region Movement
A buffer header's life begins at the midpoint (right in the middle) of an LRU chain. Because other buffer headers keep being inserted at the midpoint through replacement and promotion, an existing buffer header naturally migrates toward the LRU end of the LRU chain. The only way to counter this drift is for the buffer header to be deemed frequently accessed and promoted. Another significant event occurs when a buffer header crosses the midpoint: it moves from the hot region into the cold region.
When a buffer enters the cold region, its touch count is reset to the default value of 1 (_db_aging_cool_count). This cools down a hot buffer, which is the last thing a buffer that wants to stay in the cache needs. Increasing this parameter artificially inflates a buffer's touch count and therefore its chance of being promoted. With the defaults, once a buffer header moves into the cold region it must be touched at least once more to meet the promotion criterion (_db_aging_hot_criteria).
SQL> select x.ksppinm NAME, y.ksppstvl VALUE, x.ksppdesc DESCRIB
  2  from x$ksppi x, x$ksppcv y
  3  where x.inst_id = USERENV('Instance')
  4  and y.inst_id = USERENV('Instance')
  5  and x.indx = y.indx
  6  and x.ksppinm in ('_db_aging_cool_count');

NAME                        VALUE    DESCRIB
--------------------------  -------  -----------------------------------
_db_aging_cool_count        1        Touch count set when buffer cooled

SQL> select x.ksppinm NAME, y.ksppstvl VALUE, x.ksppdesc DESCRIB
  2  from x$ksppi x, x$ksppcv y
  3  where x.inst_id = USERENV('Instance')
  4  and y.inst_id = USERENV('Instance')
  5  and x.indx = y.indx
  6  and x.ksppinm in ('_db_aging_hot_criteria');

NAME                        VALUE    DESCRIB
--------------------------  -------  -------------------------------------------------------------
_db_aging_hot_criteria      2        Touch count which sends a buffer to head of replacement list
Touch Count Changes
You may wonder why Oracle resets the touch count both when a buffer header is promoted and when it enters the cold region. The key to understanding this is the midpoint. By default, the midpoint divides each LRU chain equally into hot and cold regions (_db_percent_hot_default = 50), but the parameter can be set to any number between 0 and 100. If an LRU chain were made 100% hot region, the only touch count reset would occur at promotion. Once Oracle added the ability to create multiple buffer pools, being able to position the midpoint independently in each pool allows highly optimized, pool-specific LRU behavior. Although the dual reset may seem odd at first, it has a real purpose and lays a foundation for the future.
SQL> select '00:'||count(*) x from x$bh where tch=0
  2  union
  3  select '01:'||count(*) x from x$bh where tch=1
  4  union
  5  select '02:'||count(*) x from x$bh where tch=2
  6  union
  7  select '03:'||count(*) x from x$bh where tch=3
  8  union
  9  select '04:'||count(*) x from x$bh where tch=4
 10  union
 11  select '05:'||count(*) x from x$bh where tch=5
 12  union
 13  select '06:'||count(*) x from x$bh where tch=6
 14  union
 15  select '07:'||count(*) x from x$bh where tch=7
 16  union
 17  select '08:'||count(*) x from x$bh where tch=8
 18  union
 19  select '09:'||count(*) x from x$bh where tch=9
 20  union
 21  select '10:'||count(*) x from x$bh where tch=10
 22  union
 23  select '11:'||count(*) x from x$bh where tch=11
 24  union
 25  select '12:'||count(*) x from x$bh where tch=12
 26  union
 27  select '13:'||count(*) x from x$bh where tch=13
 28  union
 29  select '14:'||count(*) x from x$bh where tch=14
 30  union
 31  select '15:'||count(*) x from x$bh where tch=15
 32  union
 33  select '16:'||count(*) x from x$bh where tch=16
 34  /

X
------------------------------------------
00:1879125
01:697463
02:254482
03:227324
04:161410
05:141651
06:91699
07:70599
08:55605
09:25551
10:17181
11:29833
12:19978
13:13324
14:29006
15:9998
16:9649

17 rows selected.
The touch count resets have important consequences. First, they mean a touch count cannot climb toward infinity. They also mean that the most frequently accessed buffer headers will not necessarily have the highest touch counts. If you notice that a particular buffer has a low touch count, you may still be looking at a frequently accessed buffer: it may simply have just been promoted or just entered the cold region of the LRU chain. In fact, the buffer headers with the highest touch counts tend to be found near the LRU end of the LRU chain.
LRU Chain Contention Identification and Resolution
Oracle's touch-count LRU algorithm, combined with the default instance parameter settings, delivers high-performance LRU chain activity with only trivial contention. It takes a particular combination of IO and CPU activity to put the touch-count algorithm under real pressure.
The LRU chain latch is named cache buffers lru chain, while the hash chain latch is named cache buffers chains. The names are so similar that they cause considerable confusion; as long as you remember that the LRU chain latch name contains "lru", you will not mix them up. In versions prior to Oracle 10g, all latch waits were reported as the generic latch free event, and to identify the specific latch you had to join the p2 column of v$session_wait to latch# in v$latch. In Oracle 10g and later, the wait event is reported directly as latch: cache buffers lru chain.
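For pre-10g systems, the mapping just described can be done with a query along these lines (a hedged sketch; :sid is a placeholder bind for the waiting session):

select sw.sid, sw.event, l.name latch_name
from   v$session_wait sw, v$latch l
where  sw.event = 'latch free'
and    sw.p2    = l.latch#          -- for 'latch free', p2 is the latch number
and    sw.sid   = :sid;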
If no physical reads are needed to bring data in from disk, there will be no LRU chain latch contention, because there is no need to find free buffers or insert buffer headers into LRU chains. The database writer looking for infrequently accessed dirty buffers does not by itself put enough pressure on the LRU chain structures to cause LRU chain latch contention. However, every time a server process reads blocks from disk it must find a free buffer, which requires LRU chain activity (except for direct path reads). If an IO read takes around 10 ms, you are more likely to see db file scattered read and db file sequential read wait events than LRU chain latch waits. But if the IO subsystem returns blocks in under about 5 ms, the pressure shifts to the CPU subsystem, and the LRU chain activity starts to come under pressure.
LRU chain latch contention can show up as difficulty acquiring the latch, holding the latch too long, or both. If the operating system is CPU-bound, acquiring the latch can take a long time simply because there are not enough CPU cycles. Once the latch is acquired and the LRU chain kernel code is running, a shortage of CPU cycles or a shortage of infrequently accessed free buffers can cause the latch to be held long enough to create serious contention.
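Latch acquisition trouble shows up as misses and sleeps in v$latch (or per child in v$latch_children). A hedged sketch:

select name, gets, misses, sleeps
from   v$latch
where  name = 'cache buffers lru chain';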
So, first, there must be substantial physical read activity; second, the IO subsystem must respond quickly enough to shift most of the wait time from read wait events to LRU chain latch wait events. This kind of contention has several solutions, which can be used in combination:
. Optimize physical IO SQL statements
If there is no physical IO, there will not be much LRU chain latch contention. So, from the application perspective, find the SQL statements whose main activity is physical block reads, that is, physical IO, as in the sketch below. Reduce the physical IO those statements perform as much as you can. This means classic SQL tuning, including appropriate indexing, and reducing how often the top physical-IO SQL statements run during performance-critical periods.
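One hedged way to find the top physical-IO statements is to rank v$sql by disk_reads (a sketch; adjust the row limit to taste):

select *
from  (select sql_id, disk_reads, executions,
              substr(sql_text, 1, 60) sql_text_start
       from   v$sql
       order  by disk_reads desc)
where rownum <= 10;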
. Increase CPU processing capacity
As with CBC latch contention or any other latch contention, memory management takes less time when more CPU resources are available, which means both latch hold time and latch acquisition time (spinning and sleeping) go down. Increasing CPU capacity also means finding creative ways to reduce other CPU consumption during peak contention periods.
. Increase the number of LRU latches
By adding latches you increase LRU concurrency, which means increasing the value of the hidden parameter _db_block_lru_latches. Adding latches can be particularly effective when the buffer cache is many gigabytes in size.
. Use multiple buffer pools
A creative strategy for reducing pressure on the main LRU chains is to implement the keep and recycle pools. Additional buffer pools increase the total number of LRU chain latches, and they also use the touch-count algorithm, with their own touch count instance parameters, such as _db_percent_hot_keep (see the sketch below).
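The pool-specific hot-region parameters can be inspected in the same way as the others. This hedged sketch assumes a _db_percent_hot_recycle parameter also exists alongside the _db_percent_hot_keep parameter mentioned above:

select x.ksppinm NAME, y.ksppstvl VALUE, x.ksppdesc DESCRIB
from   x$ksppi x, x$ksppcv y
where  x.inst_id = USERENV('Instance')
and    y.inst_id = USERENV('Instance')
and    x.indx = y.indx
and    x.ksppinm in ('_db_percent_hot_keep', '_db_percent_hot_recycle');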
. Tune the touch count instance parameters
Several touch count parameters are available. Note, however, that their values are small, such as 1 and 2, so even changing a parameter from 1 to 2 is proportionally a large change and can have unintended consequences. Adjust the touch count parameters only as a last resort, and only after testing.
The _db_percent_hot_default parameter defaults to 50 and represents the percentage of buffer headers in the hot region. If you want more buffer headers kept in the hot region, increase this parameter. Decreasing it gives buffer headers more time to be touched before a server process or the database writer reaches them.
The _db_aging_touch_time parameter, which defaults to 3, is the time window (in seconds) within which a buffer header's touch count (x$bh.tch) can be incremented only once. Increasing this parameter dampens the effect of short bursts of activity against a buffer, at the risk of undervaluing genuinely frequently accessed buffers.
The _db_aging_hot_criteria parameter defaults to 2. It is the touch count a buffer header must meet or exceed to be promoted. If you want to make promotion harder, increase this value; then only the truly hot buffers will be kept in the cache.
The _db_aging_stay_count parameter defaults to 0. It is the value the touch count is reset to when a buffer header is promoted, except for consistent read buffers and segment header blocks.
The _db_aging_cool_count parameter defaults to 1. It is the value the touch count is reset to when a buffer header moves from the hot region into the cold region. Decreasing this value makes it harder for a buffer header to be promoted.
The _db_aging_freeze_cr parameter defaults to false. When it is enabled, consistent read (CR) buffers are always treated as cold, so they are easy to replace.
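To review all of these settings at once, the aging parameters can be listed together with a wildcard; this is a sketch in the same style as the queries above, run as SYS:

select x.ksppinm NAME, y.ksppstvl VALUE, x.ksppdesc DESCRIB
from   x$ksppi x, x$ksppcv y
where  x.inst_id = USERENV('Instance')
and    y.inst_id = USERENV('Instance')
and    x.indx = y.indx
and    x.ksppinm like '_db_aging%'
order  by x.ksppinm;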