OK, the WSFC 2012 R2 annual feast begins. In this article, Lao Wang will use a series of scenarios to string together dynamic quorum, dynamic witness, vote adjustment, LowerQuorumPriorityNodeID, prevent quorum and other cluster quorum techniques, working through one complex scene after another. This article is probably not suitable for readers who know nothing about WSFC; it is aimed at those who already have a preliminary understanding of WSFC cluster quorum and the 2012 dynamic quorum feature. If you do not know WSFC yet, it is recommended that you first read Lao Wang's earlier blogs or other relevant material. If you already have a basic grasp of WSFC quorum and dynamic quorum, then following the scenarios in this article should give you a deeper understanding of cluster voting, dynamic quorum and so on. Without further ado, get on the bus with Lao Wang; the roller coaster is already moving.
Before we start, let's introduce the environment. To prepare for this blog, Lao Wang spun up a total of seven virtual machines, mainly in order to reproduce some realistic partition scenarios. In the earlier 2008 R2 articles we used six servers: two nodes on each side, a domain controller + ISCSI server, and a routing server, but only one side had a domain controller. When a partition occurred, the side without a domain controller had no way to come online and claim the cluster. The end result was the same, both sides were partitioned and the cluster was unusable, and we forcibly started one of them. In this environment Lao Wang deliberately used seven servers, with a domain controller at each site, so that either side of a partition can actually form a cluster.
Environment introduction
Beijing site
HV01
MGMT:80.0.0.2 GW:80.0.0.254 DNS:80.0.0.1 100.0.0.20
ISCSI:90.0.0.2
CLUS:70.0.0.2
HV02
MGMT:80.0.0.3 GW:80.0.0.254 DNS:80.0.0.1 100.0.0.20
ISCSI:90.0.0.3
CLUS:70.0.0.3
BJDC&ISCSI
Lan:80.0.0.1 GW:80.0.0.254
ISCSI:90.0.0.1
Foshan site
HV03
MGMT:100.0.0.4 GW:100.0.0.254 DNS:100.0.0.20 80.0.0.1
ISCSI:90.0.0.4
CLUS:70.0.0.4
HV04
MGMT:100.0.0.5 GW:100.0.0.254 DNS:100.0.0.20 80.0.0.1
ISCSI:90.0.0.5
CLUS:70.0.0.5
FSDC
Lan:100.0.0.20 GW:100.0.0.254
Router
03Route
Beijing:80.0.0.254
Foshan:100.0.0.254
First, an appetizer. In this scenario we simulate a four-node cross-site cluster whose nodes go down one by one while the witness disk stays online throughout. Below is the multi-site cluster Lao Wang set up; the cluster name and cluster IP may change in later screenshots, because Lao Wang tore the cluster down and rebuilt it several times during the experiments, but the rest of the architecture stays as planned.
You can see that we are running a DTC application in the cluster. The DTC application currently binds two IP addresses, and the dependency between them is OR.
There are many things to consider when designing and planning a multi-site cluster, such as storage replication, network detection, encryption, AD synchronization, DNS caching and so on, all of which affect the failover time of a multi-site cluster; Lao Wang will cover them in a separate blog later. Here we only look at two settings that directly affect client access.
By default in a multi-site cluster, if our DTC application is currently running at the Beijing site, its online address is 80.0.0.89; when it fails over to the Foshan site, the online address becomes 100.0.0.89. At that point, however, clients cannot necessarily reach the DTC service.
Imagine that Beijing and Foshan are configured as AD sites and their DNS zones replicate to each other. While DTC runs in Beijing its record is 80.0.0.89, and after a while this synchronizes to the Foshan site, so DNS in Foshan also believes DTC is 80.0.0.89. When the Beijing site then goes down and the application fails over to Foshan, the DTC application is online, but clients that look up DTC still get back 80.0.0.89 rather than the now-available 100.0.0.89, which prolongs the effective downtime.
The cause is DNS's caching mechanism. By default, the host record registered when the cluster application comes online has a TTL of 1200 seconds; in other words, once a client has resolved this VCO's host record, it will not query again within 1200 seconds and will keep using the cached address.
The 2008-era WSFC therefore added the HostRecordTTL property for multi-site clusters, which lets us shorten the TTL used when the VCO record is registered in DNS. Microsoft suggests shortening it to 300 seconds, i.e. clients ask DNS for a fresh address every five minutes.
Open a client at the Foshan site and run ipconfig /displaydns to see the cached VCO record and its time to live (TTL).
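A quick way to do the same from PowerShell on the client is sketched below; "devtestdtc" is the VCO name used later in this article, so substitute your own name.
# show the cached DNS entry for the VCO plus the next few lines, which include the TTL
ipconfig /displaydns | Select-String -Context 0,6 "devtestdtc"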
Another multi-site property added in the 2008 era is RegisterAllProvidersIP. Normally, even if we give a VCO multiple addresses, they default to an OR dependency, meaning only one cluster IP address is registered at a time: while the Beijing site is active, the registered address is on the 80 network segment, and when the application moves to the Foshan site the CNO updates the VCO's registration to the 100 network segment. So by default a VCO has only one site's address online. With the RegisterAllProvidersIP property we can ask the CNO to register all of the VCO's addresses for us, but this requires the cluster application to support retrying across addresses.
The ideal case is that the cluster application connects to the 80 network segment address by default and, when the 80 segment is unreachable, automatically retries and connects to the 100 segment address. Because all addresses are already registered in DNS, the downtime caused by DNS caching is reduced. If your cluster application supports automatic retry, you can use this property: starting with SQL Server 2012 you can add MultiSubnetFailover=True to the connection string and use it together with RegisterAllProvidersIP, so that when one of the registered addresses fails the client automatically connects to the other.
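As an illustration only, here is a sketch of turning the property on, assuming the network name resource is the same "devtestdtc" used below and that a brief offline/online cycle of it (and of resources that depend on it) is acceptable.
# ask the CNO to register every VCO address in DNS instead of only the online one
Get-ClusterResource "devtestdtc" | Set-ClusterParameter RegisterAllProvidersIP 1
# cycle the network name so the new registration behaviour takes effect
Stop-ClusterResource "devtestdtc"
Start-ClusterResource "devtestdtc"
# illustrative SQL Server 2012+ connection string fragment: "Server=devtestdtc;MultiSubnetFailover=True"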
Here we follow Microsoft's suggestion and configure the DTC application's HostRecordTTL to 300.
The setting method is as follows
# get the name of cluster application resource
Get-ClusterResource | Select Name, ResourceType
# obtain the attributes of cluster application resources
Get-ClusterResource "devtestdtc" | Get-ClusterParameter
# modify the HostRecordTTL attribute and go online again to see that the settings have taken effect.
Get-ClusterResource "devtestdtc" | Set-ClusterParameter HostRecordTTL 300
With the side dish finished, let's move on to the main course. You can see that we currently have a four-node cluster, and the witness disk participates in voting normally.
# check the number of node votes
Get-ClusterNode | ft ID, NodeName, NodeWeight, DynamicWeight, State -AutoSize
# check the number of witness votes
(Get-Cluster).WitnessDynamicWeight
We will often use these two commands.
The current cluster DTC is running normally on HV01
If we power off HV01 directly, we can see that the cluster application is automatically transferred to HV02. HV01 is down, so its vote is removed, and since the cluster would otherwise have four votes, the witness disk's vote is automatically removed as well, keeping the total odd.
Next we cut power to HV02 and lose the Beijing site entirely. You can see that the witness disk's vote is automatically added back, so the cluster still has three votes. I force the DNS servers to replicate and then run ipconfig /flushdns on the client; trying to connect to devtestdtc now returns the 100 network segment address. If you do not clear the DNS cache, you can simply wait 300 seconds and the client will query again and get the 100 segment address.
DTC is now running on HV03, and we power off HV03 as well, leaving only HV04 in the cluster. The quorum still shows three votes, but that doesn't really matter: only one node is left, and it can reach the witness disk, so it survives to the end.
That was our first appetizer. Simple enough, but it shows that with a witness disk present, as the cluster nodes go down one by one, WSFC 2012 R2 keeps adjusting the votes dynamically so that the total is always odd, meaning one side can always survive, right down to a single node plus the witness.
Next, let's simulate the second scenario: the management networks of the Beijing and Foshan sites lose contact, i.e. the 80 and 100 networks can no longer reach each other, so we simply shut down the routing server. To prevent one side from winning by reaching the witness disk, we also simulate losing the witness disk.
At this point we can see that the cluster has detected the witness disk failure and has automatically adjusted the votes down to an odd three across the nodes.
Also notice that after we disconnect the management network, the cluster still works normally. Why? Because the cluster's built-in network topology generator automatically builds and adjusts, in real time, a topology of everything it detects across the network. The management network can carry both client access and heartbeat detection, while the heartbeat network carries only heartbeats, so as long as heartbeat detection between cluster nodes is unaffected, nothing happens.
The cluster does not even consider the management network broken, because within each of the Beijing and Foshan sites the management network is still reachable, so the cluster thinks everything is fine, and there are other NICs available for heartbeat detection.
Now suppose all of this company's Foshan employees travel to Beijing on business, i.e. every user accesses the application from clients on the 80 network segment in Beijing. If the cluster DTC suddenly drifts to Foshan at this point, it will run fine there, but users at the Beijing site cannot reach it, because as we know the Foshan site's 100 network segment has lost contact with the outside world.
By default, if you do not configure the cluster application, DTC fails over to an essentially arbitrary node; it may land on whichever node happens to look best, for example one with plenty of free memory, and if it drifts to a Foshan server we are in trouble.
We can control this by setting the DTC group's preferred owners to HV01 and HV02, so that while DTC is hosted in Beijing, a failover will try HV01 or HV02 first.
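A sketch of that setting, assuming the DTC group carries the same name "devtestdtc" as its resources:
# make HV01 and HV02 the preferred owners so failover tries the Beijing nodes first
Set-ClusterOwnerNode -Group "devtestdtc" -Owners HV01,HV02
# confirm the preferred owner list
Get-ClusterOwnerNode -Group "devtestdtc"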
But setting the preferred owners alone is not enough, because preferred owners only matter when there is no partition. When a network partition occurs and HV01/HV02 cannot contact HV03/HV04, then if, say, HV01 has lost its vote while HV03 and HV04 still have theirs, the application will still end up running at the Foshan site.
For this scenario we can simply remove the votes of HV03 and HV04 altogether. Having done so, even if a network partition occurs in which the Beijing site would otherwise be left with one vote and the Foshan site with two, the fact that we removed the votes of the two Foshan servers means the Foshan site can try to form a cluster but will never succeed, because it does not hold a legal number of votes.
In other words, once their votes are manually removed, the two Foshan nodes can never form a cluster on their own; they can only join the Beijing nodes. And if the Beijing nodes cannot be started, the cluster application stops serving unless the cluster is forcibly started.
# remove cluster voting manually
(Get-ClusterNode -Name hv03).NodeWeight = 0
(Get-ClusterNode -Name hv04).NodeWeight = 0
The ability to control node votes manually requires a hotfix on 2008 and 2008 R2, and is built into WSFC from 2012 onward.
That is one scenario for manually adjusting cluster votes: we know one site can no longer serve external access, so we make that site stop serving until its network recovers, and then give it back its votes.
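Re-assigning the votes later is just the reverse of the removal above, for example:
# once the Foshan network has recovered, give the votes back
(Get-ClusterNode -Name hv03).NodeWeight = 1
(Get-ClusterNode -Name hv04).NodeWeight = 1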
Lao Wang finds manual vote adjustment a very practical technique. Besides this known-site-down scenario, it can shine in many others.
For example, in a fully manual failover design there are three sites, Beijing, Tianjin and Hebei, and only the nodes at the Beijing site have votes; the Tianjin and Hebei votes have been manually removed. When a failure occurs, a disaster-recovery meeting might be held to decide which site is best placed to take over the cluster. If the conclusion is that the Tianjin site is currently the most suitable, the Tianjin nodes are manually given votes, they detect that they now hold votes, and the Tianjin site forms the cluster and continues to provide service.
Or take another scenario: Beijing and Hebei each have two nodes, four in total, and I manually remove the vote of one node at the Hebei site so that it never participates in cluster voting. If the cluster then fails and the three voting nodes across the Beijing and Hebei sites can no longer start and provide service, we can forcibly start the node whose vote was removed, grant it a vote again, and let it carry on serving. Or suppose the Beijing site goes down and, after a forced quorum, all workloads pile onto a single Hebei node whose load is already maxed out; the non-voting node can then be given its vote back and share the load. In effect it acts as a disaster-recovery spare node.
Next, assume the cluster is using dynamic quorum; the witness disk fails first, then the management network fails, and finally the heartbeat network fails too, producing a partition.
For the network partition, we will not only shut down the routing server but also move the HV03 and HV04 heartbeat NICs to a different network segment.
For the witness disk failure, we again simply disable the disk on the ISCSI server. After about 30 seconds the quorum detects that the witness disk is offline, automatically removes its vote, and also randomly removes the vote of one node; the cluster now has three votes.
The witness disk attempts to come online on each node in turn, following the failover policy.
Every attempt fails; after a while the online attempts are suspended, and they will never succeed.
We can see that because of the witness disk failure, dynamic quorum has adjusted the votes, randomly removing one node's vote, so the cluster still has an odd three votes.
If a network partition occurs now, the Beijing and Foshan sites have no way to detect each other's heartbeats and no NIC left to do so, so the Foshan site will win and keep providing service while the Beijing site is shut down.
Because the node chosen to lose its vote was at the Beijing site, Beijing was left with one vote and Foshan with two, so the Foshan site could form the cluster. After forming it, the Foshan side automatically dropped to one vote, so the Foshan site again holds an odd number of votes.
As you can see, the key is which site the cluster chooses to take a vote from: the site that loses the vote is the one that gets shut down when a partition occurs. By default the cluster quorum keeps re-adjusting votes according to node state changes and network monitoring, and each time it picks at random which node loses its vote, a bit like grabbing a random unlucky guy every time. So can we make it grab the same guy every time? The answer is yes.
Through the LowerQuorumPriorityNodeID property, new in 2012 R2, we can control which node loses its vote when the node count is even, so that dynamic quorum always removes the vote of the node we specify. That way, when a 50/50 network partition occurs, the site we want always wins; likewise with two nodes left, instead of dynamic quorum randomly dropping one node's vote, we can decide in advance, based on the situation, whose vote is always dropped.
# view the current LowerQuorum node; the default is 0, i.e. the node is chosen at random on every change
(Get-Cluster).LowerQuorumPriorityNodeID
# get the node ID, then have every even-vote adjustment discard the HV04 node's vote, ensuring the Beijing site always wins
(Get-Cluster).LowerQuorumPriorityNodeID = 3
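Two small companion commands may help here, sketched under the same environment: confirming which ID belongs to which node before setting the property, and reverting to the default behaviour afterwards.
# confirm the ID-to-name mapping first
Get-ClusterNode | ft ID, NodeName -AutoSize
# setting the value back to 0 restores the default random selection
(Get-Cluster).LowerQuorumPriorityNodeID = 0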
When the network partition occurs again, you can see that the Foshan site is shut down and the Beijing site survives, automatically adjusting to an odd number of votes.
The cluster application keeps running normally throughout.
Those were the second and third courses Lao Wang brought to the table.
In the second course, with a known site failure, we manually removed that site's votes to stop the application from migrating there. That scenario assumed there was no partition, because the two sides could still heartbeat each other over the heartbeat NICs: the Beijing site removed the Foshan site's votes, and Foshan could see that its votes had been removed. If the heartbeat NICs also fail, so that the two sides can no longer detect each other at all, it depends on the situation. If we removed a site's votes before the partition, fine: the cluster will choose the side whose votes were not removed. If the partition has already happened, the only option is to force-start the minority side and, after it starts, remove the other site's votes; when the other site comes back online it will defer to the force-quorum side.
As for the third course: by default, with the witness disk failed under dynamic quorum, the cluster dynamically and randomly removes one node's vote whenever the number of voting nodes is even. Through the LowerQuorumPriorityNodeID property we can instead deliberately mark down a chosen site, so that when a clean down-the-middle partition occurs, the site we want always wins.
So you can see that where dynamic quorum is available we rarely need forced quorum, because there are plenty of newer techniques to choose from, such as LowerQuorumPriorityNodeID and adjusting votes manually in advance. Forced quorum can still be useful in some scenarios, especially since 2012 R2 changed how prevent quorum works: the majority side now detects that a minority side has forced quorum and performs the prevent-quorum action automatically, that is, it acknowledges the force-quorum side as the authoritative cluster and synchronizes its cluster database to the latest from that side before starting its own cluster service. Back in the 2008 era, in a forced-quorum scenario you usually had to run prevent quorum manually, otherwise the cluster database could be overwritten; 2012 R2 does this for us automatically.
So Lao Wang's advice is to pick the technique that fits the scenario: wherever manual vote adjustment or LowerQuorumPriorityNodeID can resolve the problem, avoid forced quorum as much as possible. Before 2012 R2, using forced quorum also meant thinking about prevent quorum; from 2012 R2 that is no longer necessary.
In fact, once dynamic quorum is on, it can keep the cluster available and keep the vote count odd in most scenarios, and even if you are unhappy with the site it picks by default, you can use LowerQuorumPriorityNodeID, vote adjustment, forced quorum and other techniques to switch to the site you want. For the same reason, the classic 50/50 split-brain partition is very hard to reproduce while dynamic quorum is on, so the best way to see a split-brain scenario is to turn dynamic quorum off.
In the fourth course we will simulate a split-brain between the Beijing and Foshan sites: dynamic quorum is turned off, the witness disk is disabled, and a network partition occurs between the two sides, with both the heartbeat network and the management network cut, so there is no way to detect heartbeats between the sites.
# confirm the cluster's dynamic quorum state; the default is 1, change it to 0
(Get-Cluster).DynamicQuorum = 0
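Reading the property back confirms the change:
# 0 = dynamic quorum disabled, 1 = enabled
(Get-Cluster).DynamicQuorum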
Immediately afterwards we trigger the network partition: we move the HV03 and HV04 heartbeat NICs to another network segment and then pause the routing server, so that neither the management network nor the heartbeat network can carry heartbeats between sites, while each site can still reach its own AD normally.
At this point you can see a log entry on every node indicating that the cluster network has been partitioned.
Running the command to view the voting nodes, you can see that all the nodes are trying to form or join a cluster at their respective sites.
But this state lasts less than 20 seconds; run the command again and you will see that the cluster service has stopped.
The system log entry about the cluster service stopping can be seen in the Event Viewer on each node.
So when a split-brain partition occurs, the cluster goes through a brief contention phase, then the quorum logic detects the partition and all cluster nodes are shut down. Lao Wang did not see the textbook scene where each partition forms its own cluster and the two race to write to the cluster disk; maybe the window is too short to observe, or maybe 2012 started optimizing the split-brain case so that once a split brain is detected everything shuts down automatically. In any case that is what Lao Wang actually saw, and honestly shutting everything down is fine: nobody grabs anything, and the administrator gets to choose manually.
Suppose the administrator decides the authoritative site should be Beijing and manually runs the force-start commands on HV01.
# start HV01 forcefully
net stop clussvc
net start clussvc /FQ
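For reference, the FailoverClusters PowerShell module is assumed to offer an equivalent; a sketch, with -FixQuorum taken as the module's counterpart of the /FQ switch:
# equivalent of "net start clussvc /FQ", run on HV01
Start-ClusterNode -Name HV01 -FixQuorum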
Run the vote-check command on HV01 and you will first see HV01's own vote; then, since HV02 is in the same partition as HV01, it detects that this side has forced quorum and joins it, its state showing Joining first and finally Up.
At this step Lao Wang was actually hoping to see something: the prevent-quorum effect. When Lao Wang forces quorum on the 10% side of a 10/90 split, you can see the prevent-quorum entries in the log once the remaining 90% come back online; but in this 50/50 case, when each node's network returned to normal, Lao Wang did not see that log entry.
However, when Lao Wang checked the nodes' prevent-quorum state with commands, it was visible.
# check each node's prevent-quorum state: 1 means the node currently needs to prevent quorum, 0 means it does not
(Get-ClusterNode -Name HV02).NeedsPreventQuorum
(Get-ClusterNode -Name HV03).NeedsPreventQuorum
(Get-ClusterNode -Name HV04).NeedsPreventQuorum
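The same information can also be pulled for all nodes at once, for example:
# list the prevent-quorum state of every node in one table
Get-ClusterNode | ft NodeName, State, NeedsPreventQuorum -AutoSize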
You can see that node 2 has already cleared its prevent-quorum state, while nodes 3 and 4 still need to prevent quorum.
So what does prevent quorum mean? Lao Wang touched on it above. Put simply, it is the mechanism that makes the other nodes defer to the force-quorum partition. When we force quorum, the paxos tag of the force-quorum node is raised to the highest value, making it the most authoritative surviving node in the cluster, and the cluster databases of all other nodes must be based on it. The purpose of prevent quorum is to protect that forced-quorum outcome: when the other nodes come back and can communicate with the force-quorum node, they detect that a forced-quorum node exists in the environment and conclude that it, not they, should lead and that they must not form a cluster, even if they are the majority. Their cluster service then stops temporarily, and they cannot rejoin the cluster partition formed by the force-quorum node until their cluster databases have synchronized with the force-quorum side.
Before 2012 R2, in scenarios that called for prevent quorum, especially the 10/90 case where you force-start the small side, you had to perform prevent quorum manually on the other nodes. From 2012 R2 the cluster detects the forced quorum and performs prevent quorum automatically.
You can see that once the Foshan nodes resume communication with the Beijing site, after a while their prevent-quorum state also drops to 0 and they join the cluster partition formed by the force-quorum node normally.
For the last course, let's try the reverse scenario. Suppose the Beijing site has one node and the Foshan site has three. The Foshan site has a network failure: its three nodes can talk to each other but cannot serve the outside world, and we need to force the service back onto the Beijing node. In this scenario the witness disk is disabled and dynamic quorum is enabled.
# re-enable cluster dynamic quorum
(Get-Cluster).DynamicQuorum = 1
Change HV02's address to 100.0.0.3, its gateway to 100.0.0.254 and its DNS to 100.0.0.20 and 80.0.0.1.
# re-register the NIC's DNS record
ipconfig /registerdns
Make sure HV02's new address is recorded on the DNS servers of both sites.
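A quick check from PowerShell, a sketch that queries the two site DNS servers listed in the environment above:
# query each site's DNS server for the HV02 host record
Resolve-DnsName -Name HV02 -Server 80.0.0.1
Resolve-DnsName -Name HV02 -Server 100.0.0.20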
Then move the cluster (heartbeat) NICs of HV02, HV03 and HV04 onto the same network and pause the routing server.
Running the vote-check command on HV01, you can see that its cluster service has stopped.
Running the vote-check command on HV03, you can see that a cluster has formed successfully at the Foshan site, because that side holds the majority of votes, so the cluster on the Beijing side is shut down.
# force quorum to start the cluster at the Beijing site
net stop clussvc
net start clussvc /FQ
After the start, you can see the node voting state on the Beijing site; at the same time the paxos tag has been raised, and HV01's cluster database has been updated to the latest.
Since HV02, HV03 and HV04 cannot reach HV01 yet, they have no way of knowing that a forced quorum has happened at the Beijing site, so they keep trying to form their own cluster. Check the cluster voting state on HV03 and you can see it still shows the old record.
At this point we need to do one more thing on the Beijing side. Both the Beijing and Foshan sites can still reach the storage, so when the application comes online at the Beijing site it may fail, because the three Foshan nodes are also competing for the cluster disks. It is therefore urgent to remove the Foshan site's votes and stop it from fighting the Beijing site for resources.
# manually remove the Foshan site's votes
(Get-ClusterNode -Name hv02).NodeWeight = 0
(Get-ClusterNode -Name hv03).NodeWeight = 0
(Get-ClusterNode -Name hv04).NodeWeight = 0
Note that when node votes need to be removed, it must be done on the force-quorum side, because anything done on the majority side will be overwritten by the force-quorum node's record once network communication is restored; it simply will not stick.
After that, we bring the cluster application online again and can see that HV01 has indeed taken the cluster disks away from the Foshan site. The three Foshan nodes can no longer seize cluster resources, because the cluster disks are back online on the authoritative side, and the cluster application can now keep running at the Beijing site.
When the Foshan nodes' network is repaired and they try to join the Beijing site's cluster partition, you can see the prevent-quorum entries in the log. Their node state will not go to Up until the three Foshan nodes acknowledge the Beijing site as the authoritative side and their cluster databases have synchronized with it; only then can they join the cluster normally.
After a while we can see that all the Foshan nodes have completed prevent quorum and rejoined the cluster normally. Finally we give the Foshan nodes their votes back so that they can take part in failover again. Once the votes are restored, the cluster quorum detects that four nodes are back and, following the dynamic quorum mechanism, automatically picks one node to lose its vote so that the total stays odd.
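A sketch of that final re-assignment, with a check that dynamic quorum has trimmed the total back to an odd number:
# give the Foshan nodes their votes back
(Get-ClusterNode -Name hv02).NodeWeight = 1
(Get-ClusterNode -Name hv03).NodeWeight = 1
(Get-ClusterNode -Name hv04).NodeWeight = 1
# with four voting nodes and no witness, DynamicWeight should show one vote removed
Get-ClusterNode | ft NodeName, NodeWeight, DynamicWeight -AutoSize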
That is Lao Wang's hands-on experience. In writing this article, Lao Wang's intention was to verify and understand dynamic quorum, dynamic witness, vote adjustment, LowerQuorumPriorityNodeID and prevent quorum through repeated experiments, and then present them as vividly as possible through scenarios, telling friends what these features actually do and how to use them. Hopefully more people will get to know these WSFC 2012 R2 clustering technologies, and more people will really use and study clustering. A cluster is not just a heartbeat, some storage and a few connected nodes; there is a great deal worth digging into, and if you focus on studying the technology, you will naturally find the fun in it.