2025-01-30 Update From: SLTechnology News & Howtos
Shulou (Shulou.com) 06/01 Report --
This article walks through, in detail and with worked examples, a RHEL 5.5 + DRBD + heartbeat + Oracle 10gR2 dual-node installation. It is intended as a practical reference; I hope you get something out of it.
1. Operating system version: Red Hat Enterprise Linux Server release 5.5 (Tikanga)
2. DRBD and Heartbeat file versions and file names (I have packaged the following files together as Heartbeat3.0.3.tar.bz2):
Cluster-Resource-Agents-agents-1.0.3.tar.bz2
drbd-8.3.8.1.tar.gz
Heartbeat-3-0-STABLE-3.0.3.tar.bz2
Load drbd modules (a small script used later, in the DRBD installation section)
Pacemaker-1-0-Pacemaker-1.0.9.tar.bz2
Pacemaker-Python-GUI-pacemaker-mgmt-2.0.0.tar.bz2
Reusable-Cluster-Components-glue-1.0.6.tar.bz2
3. Network configuration (dual network cards adopt BOND mode)
After installing the RHEL 5.5 system, you need to modify the network configuration. First, set eth0 and eth2 on each node to static IPs, then edit each node's hosts file, as shown in the figure.
Also edit each node's /etc/sysconfig/network file and change the HOSTNAME line to that node's hostname, as shown in the figure (node2 is used as the example).
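The figures for this step are not reproduced here. Below is a minimal sketch of what the node2 files might look like, assuming a bond0 interface built from eth0 and eth2 and the addresses listed in section 4; the bonding details are an assumption, adjust them for your environment:

```shell
# /etc/hosts (same on every node; IPs/names from section 4)
127.0.0.1        localhost.localdomain localhost
10.109.1.38      node2.localdomain node2
10.109.1.39      node3.localdomain node3

# /etc/sysconfig/network on node2
NETWORKING=yes
HOSTNAME=node2.localdomain

# /etc/sysconfig/network-scripts/ifcfg-bond0 on node2 (assumed static config)
DEVICE=bond0
BOOTPROTO=static
IPADDR=10.109.1.38
NETMASK=255.255.255.0
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-eth0 (ifcfg-eth2 identical apart from DEVICE)
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
```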
4. Node name and storage allocation
A. Hostname and IP
Node2 ==== Hostname: node2.localdomain  IP: 10.109.1.38
Node3 ==== Hostname: node3.localdomain  IP: 10.109.1.39
B. DRBD mirror partition:
Resource name: oradata  Device: /dev/drbd0
Mount point: /oradata (stores the Oracle instance)
C. Floating hostname and IP
Node1 ==== Hostname: node1.localdomain  IP: 10.109.1.37
5. Install Heartbeat
Enter the Linux root directory:
cd /
Create a HA directory:
mkdir HA
Upload the Heartbeat3.0.3.tar.bz2 file to the HA directory, then enter the HA directory:
cd /HA/
5.1 Unpack the Heartbeat package to get the installation files needed for the subsequent steps:
tar -jxvf Heartbeat3.0.3.tar.bz2
The order of compilation is: first Cluster Glue, then Resource Agents, then Heartbeat.
Decompress Reusable-Cluster-Components:
tar -jxvf Reusable-Cluster-Components-glue-1.0.6.tar.bz2
Enter the Reusable-Cluster-Components-glue-1.0.6 directory:
cd Reusable-Cluster-Components-glue-1.0.6
Open the lib/stonith/main.c file:
vi lib/stonith/main.c
Edit it as follows:
Comment out line 64.
Comment out lines 76 to 81.
Comment out line 390.
Configure with the following two commands:
./autogen.sh
./configure LIBS='/lib/libuuid.so.1'
Create the Heartbeat administrative user and group with the following commands:
groupadd haclient
useradd -g haclient hacluster
Compile and install with the following commands:
make
make install
5.2 Decompress Cluster-Resource-Agents:
tar -jxvf Cluster-Resource-Agents-agents-1.0.3.tar.bz2
Enter the Cluster-Resource-Agents-agents-1.0.3 directory:
cd Cluster-Resource-Agents-agents-1.0.3
Configure, compile, and install with the following commands:
./autogen.sh
./configure
make
make install
5.3 Decompress Heartbeat-3-0-STABLE:
tar -jxvf Heartbeat-3-0-STABLE-3.0.3.tar.bz2
Enter the Heartbeat-3-0-STABLE-3.0.3 directory:
cd Heartbeat-3-0-STABLE-3.0.3
First run the following commands to configure:
./autogen.sh
./bootstrap
./ConfigureMe configure
make
At this point the build will report an hbaping.lo error. Rename the hbaping.loT file with the following commands:
cd lib/plugins/HBcomm
mv hbaping.loT hbaping.lo
Then run the following two commands again to finish the installation; this time no error should be reported:
make
make install
Use cd /usr/etc/ to enter the /usr/etc/ directory.
Use cp -R ha.d/ /etc/ to copy the whole /usr/etc/ha.d tree to the /etc/ directory.
Use rm -rfv ha.d to delete the entire ha.d directory from /usr/etc/.
Use cd /etc/ to enter the /etc/ directory.
Use ln -s /etc/ha.d /usr/etc/ha.d to create a symbolic link at /usr/etc/ha.d pointing to /etc/ha.d.
5.4 Decompress Pacemaker-1-0:
tar -jxvf Pacemaker-1-0-Pacemaker-1.0.9.tar.bz2
Enter the Pacemaker-1-0-Pacemaker-1.0.9 directory:
cd Pacemaker-1-0-Pacemaker-1.0.9
Run the following commands to configure, compile, and install:
./autogen.sh
./ConfigureMe configure
make
make install
5.5 Decompress Pacemaker-Python-GUI:
tar -jxvf Pacemaker-Python-GUI-pacemaker-mgmt-2.0.0.tar.bz2
Enter the Pacemaker-Python-GUI-pacemaker-mgmt-2.0.0 directory:
cd Pacemaker-Python-GUI-pacemaker-mgmt-2.0.0
First run:
./bootstrap
Use rpm to install the gettext-devel and intltool packages from the RHEL 5.5 installation CD as follows:
cd /media/RHEL_5.5\ i386\ DVD/Server/
rpm -ivh gettext-devel-0.14.6-4.el5.i386.rpm
rpm -ivh intltool-0.35.0-2.i386.rpm
Then return to the Pacemaker-Python-GUI-pacemaker-mgmt-2.0.0 directory:
cd Pacemaker-Python-GUI-pacemaker-mgmt-2.0.0
Run the following commands:
./ConfigureMe configure
autoreconf -ifs
./bootstrap
make
make install
Use the passwd command to set the hacluster user's password:
passwd hacluster
Copy hbmgmtd to the /etc/pam.d/ directory:
cp /usr/etc/pam.d/hbmgmtd /etc/pam.d/
6. Install DRBD
Use tar -zxvf drbd-8.3.8.1.tar.gz to extract the file.
Use cd /media/RHEL_5.5\ i386\ DVD/Server/ to enter the CD mount directory.
Use rpm to install the kernel-related source packages in turn:
rpm -ivh kernel-devel-2.6.18-194.el5.i686.rpm
rpm -ivh kernel-headers-2.6.18-194.el5.i386.rpm
rpm -ivh kernel-doc-2.6.18-194.el5.noarch.rpm
Use cd drbd-8.3.8.1 to enter the drbd-8.3.8.1 directory, then run the following commands to configure, compile, and install:
./autogen.sh
./configure --prefix=/usr --localstatedir=/var --sysconfdir=/etc/ --with-km
make
make install
Use chkconfig --add drbd to register the drbd service startup script.
Use chkconfig --add heartbeat to register the heartbeat service startup script.
Use chkconfig heartbeat off to disable automatic startup of the heartbeat service.
Use chkconfig drbd off to disable automatic startup of the drbd service.
Use cat Load\ drbd\ modules >> /etc/rc.d/rc.sysinit to append the contents of the Load drbd modules script to the end of the rc.sysinit system file, so that the drbd.ko driver module is loaded into the kernel automatically at system startup and the drbd service can work normally. This step must not be omitted on RHEL 5.5, otherwise the drbd service will not start normally.
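The contents of the packaged "Load drbd modules" file are not shown in the article; assuming it does nothing more than load the kernel module at boot, it would contain something like:

```shell
# Hypothetical contents of the "Load drbd modules" snippet appended to rc.sysinit:
# load the drbd kernel module at boot so /etc/init.d/drbd can attach its resources
modprobe drbd
```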
7. Configure DRBD
Change the usage-count parameter in each node's DRBD configuration file /etc/drbd.d/global_common.conf to no, as shown in the figure:
Save and exit when you are finished.
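The figure is not reproduced here; after the change, the relevant fragment of /etc/drbd.d/global_common.conf looks like this:

```
global {
        usage-count no;
}
```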
Create the file /etc/drbd.d/oradata.res on each node, and add the following to the oradata.res file:

resource oradata {                      # name of the resource group
        protocol C;
        startup {
                degr-wfc-timeout 120;   # 2 minutes; timeout for connecting to the other node at startup
        }
        disk {
                on-io-error detach;     # on a disk error, detach rather than stay connected
        }
        net {
        }
        syncer {
                rate 10M;               # maximum network rate used when the primary and standby nodes synchronize
                al-extents 257;
        }
        on node2.localdomain {          # node hostname
                device /dev/drbd0;      # the device that will be used
                disk /dev/vda5;         # the partition on this node that stores the data
                address 10.109.1.38:7788;  # IP address and port of this node
                meta-disk internal;     # how the meta data is stored
        }
        on node3.localdomain {
                device /dev/drbd0;
                disk /dev/vda5;
                address 10.109.1.39:7788;
                meta-disk internal;
        }
}

As shown in the figure:
7.3. Initialize the partition
Run drbdadm create-md oradata on each node to initialize the partition (create the meta data), where oradata is the name of the resource group in the configuration file.
7.4. Start the service
Start the drbd service on both node servers. As shown in the figure:
Then use cat /proc/drbd or service drbd status to check the current status. Output like the following shows that the DRBD service has started normally, as shown in the figure:
Note that at this point both machines are in the Secondary (standby) state, and no data synchronization has taken place yet.
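The service start commands themselves only appear in the missing figure; presumably they are simply:

```shell
# run on both node2 and node3
service drbd start
# then verify the state:
service drbd status
cat /proc/drbd
```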
7.5. Set the primary host
Execute the following on the machine confirmed as the master data server:
[root@node1 ~]# drbdadm adjust oradata
[root@node1 ~]# drbdsetup /dev/drbd0 primary -o
In this way, the node1 will be used as the host, and the data in the vda5 will be synchronized to the node2 in blocks. You can view the status again:
[root@node1 ~]# cat /proc/drbd
version: 8.3.8 (api:88/proto:86-94)
GIT-hash: d78846e52224fd00562f7c225bcc25b2d422321d build by root@hatest1, 2010-07-07 08:59:44
 0: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r----
    ns:719756 nr:0 dw:0 dr:720896 al:0 bm:43 lo:0 pe:62 ua:36 ap:0 ep:1 wo:b oos:1378556
        [=====>..............] sync'ed: 34.4% (1378556/2096348)K delay_probe: 149
        finish: 0:04:59 speed: 4,580 (7,248) K/sec
[root@node2 ~]# cat /proc/drbd
version: 8.3.8 (api:88/proto:86-94)
GIT-hash: d78846e52224fd00562f7c225bcc25b2d422321d build by root@hatest1, 2010-07-07 08:59:44
 0: cs:SyncTarget ro:Secondary/Primary ds:Inconsistent/UpToDate C r----
    ns:0 nr:752096 dw:751584 dr:0 al:0 bm:45 lo:17 pe:49 ua:16 ap:0 ep:1 wo:b oos:1344764
        [======>.............] sync'ed: 36.0% (1344764/2096348)K queue_delay: 2.9 ms
        finish: 0:02:11 speed: 10,224 (10,020) want: 10,240 K/sec
From the role fields in that output you can tell which machine is the primary in the DRBD cluster. You can also confirm it with the following commands:
[root@node1 ~]# drbdadm role oradata
Primary/Secondary
[root@node2 ~]# drbdadm role oradata
Secondary/Primary
Set the drbd service to self-starting mode:
[root@node1 ~]# chkconfig --level 235 drbd on
[root@node2 ~]# chkconfig --level 235 drbd on
At this point, the mirror partition has been created.
When the synchronization is completed, the drbd status of the two machines will change to:
[root@hatest1 ~]# cat /proc/drbd
version: 8.3.8 (api:88/proto:86-94)
GIT-hash: d78846e52224fd00562f7c225bcc25b2d422321d build by root@hatest1, 2010-07-07 08:59:44
 0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r----
    ns:2096348 nr:0 dw:0 dr:2096348 al:0 bm:128 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0
[root@hatest2 ~]# cat /proc/drbd
version: 8.3.8 (api:88/proto:86-94)
GIT-hash: d78846e52224fd00562f7c225bcc25b2d422321d build by root@hatest1, 2010-07-07 08:59:44
 0: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r----
    ns:0 nr:2096348 dw:2096348 dr:0 al:0 bm:128 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0
7.6. Handling DRBD split brain
Split brain means that, in some situation, the two drbd nodes lose their connection and both run in the Primary state. This usually happens when the link between the nodes is interrupted and the standby node's data is modified manually, leaving the meta data inconsistent. When a drbd node reconnects to its peer and prepares to exchange messages, if it finds that the peer is also in Primary status, it disconnects immediately and concludes that split brain has occurred, logging the following message in the system log: "Split-Brain detected, dropping connection!" After split brain occurs, if you check the connection state, at least one node will be in the StandAlone state; the other may also be StandAlone (if it detected the split brain as well), or it may be in the WFConnection state.
DRBD can be given an automatic split-brain recovery policy in its configuration file, but that policy may not match the actual situation and is not recommended. If no automatic split-brain recovery is configured, we can handle it manually.
First, decide which side should be the primary after recovery (that is, the side with the most up-to-date data). Once this is decided, you are also accepting that all data changes made on the other node after the split brain will be lost. With that settled, recover with the following steps:
(1) On the node determined to be the secondary, switch to secondary and discard its changes to the resource:
drbdadm disconnect resource_name
drbdadm secondary resource_name
drbdadm -- --discard-my-data connect resource_name
(2) On the node to be used as primary, reconnect to the secondary (this can be omitted if that node's current connection state is WFConnection):
drbdadm connect resource_name
After these steps complete, resynchronization from the new primary to the secondary starts automatically.
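Collected into one place, and using the oradata resource from this article, the manual recovery might look like the following sketch (which node survives is your decision; run each part on the indicated node):

```shell
# On the node whose data will be DISCARDED (the victim):
drbdadm disconnect oradata
drbdadm secondary oradata
drbdadm -- --discard-my-data connect oradata

# On the node that keeps its data (the survivor), only if it is
# in the StandAlone state (skip when it already shows WFConnection):
drbdadm connect oradata

# Watch the resynchronization progress:
cat /proc/drbd
```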
7.7. Format the partition
7.7.1. As with soft RAID, LVM, and the like, the DRBD mirror partition is used not through the /dev/vda5 device directly, but through the /dev/drbd0 device specified in the configuration file. Also, you do not have to wait for the initial synchronization to finish before formatting and using drbd0.
[root@node1 ~]# drbdadm role oradata
Primary/Secondary
[root@node1 ~]# mkfs.ext3 /dev/drbd0
[root@node1 ~]# tune2fs -c 0 -i 0 /dev/drbd0
7.7.2 considerations
Note that the drbd0 device can only be used on the Primary side; operations such as the following will report errors:
[root@node2 ~]# mount /dev/vda5 /oradata
mount: /dev/vda5 already mounted or /oradata busy
[root@node2 ~]# drbdadm role oradata
Secondary/Primary
[root@node2 ~]# mount /dev/drbd0 /oradata/
mount: block device /dev/drbd0 is write-protected, mounting read-only
mount: wrong media type
In addition, to avoid accidental operations, each machine comes up in the Secondary state by default after a reboot. If you want to use the drbd device, you need to set it to Primary manually.
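For example, after rebooting the intended primary, the manual promotion and mount might look like this:

```shell
drbdadm primary oradata        # promote this node's oradata resource to Primary
mount /dev/drbd0 /oradata      # now the mirrored filesystem can be mounted
```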
7.7.3. Mount
First mount the drbd0 device on the /oradata directory:
[root@hatest1 ~]# mount /dev/drbd0 /oradata
[root@hatest1 ~]# df -h /oradata
Filesystem            Size  Used Avail Use% Mounted on
/dev/drbd0            2.0G   36M  1.9G   2% /oradata
8. Install Oracle 10gR2
8.1. Configure the Linux kernel parameters on each node.
After logging in as root, go to the /etc directory, open the sysctl.conf file, and add the following at the location shown in the figure:
kernel.shmall = 2097152
kernel.shmmax = 1717986918
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default = 262144
net.core.rmem_max = 262144
net.core.wmem_default = 262144
net.core.wmem_max = 262144
These parameters can be entered as listed, except that shmmax (marked in red in the original) has its own calculation: take the memory size in GB, convert it to bytes, and multiply by 80%. For 2 GB of memory, for example, the calculation is:
2 × 1024 × 1024 × 1024 × 80% ≈ 1717986918
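As a quick check of that arithmetic (2 GB expressed in bytes, times 80%, using integer division):

```shell
# shmmax for a 2 GB machine: bytes * 80%
echo $(( 2 * 1024 * 1024 * 1024 * 80 / 100 ))   # prints 1717986918
```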
8.2. Create the user and group required by the Oracle installation, and modify the oracle user's environment variables (that is, modify the .bash_profile file in the oracle user's home directory).
8.2.1. On both nodes, run groupadd oinstall, groupadd dba, and useradd -m -g oinstall -G dba oracle to create the oracle user, as shown in the figure.
8.2.2. On each node, modify the oracle environment variables: as the oracle user, open the .bash_profile file and add the following, as shown in the figure:
export ORACLE_BASE=/oradata
export ORACLE_HOSTNAME=node1.localdomain
export ORACLE_HOME=$ORACLE_BASE/product/10.2.0/db_1
export ORACLE_SID=orcl
export ORACLE_TERM=xterm
export NLS_LANG=american_america.ZHS16GBK
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/local/lib
export LD_LIBRARY_PATH
export PATH=$PATH:$ORACLE_HOME/bin
8.3. Create the Oracle installation mount point
On each Oracle installation node, use the following commands to create the DRBD resource mount point and change its owning group:
[root@node1 ~]# cd /
[root@node1 /]# mkdir oradata
[root@node1 /]# chown -R oracle:oinstall oradata
8.4. Change the IP address to the floating IP and set the DRBD resource to primary
On the machine where Oracle 10gR2 is to be installed, first change the IP address and hostname to the future floating IP and hostname (this is mainly so that Oracle can fail over smoothly and start normally in the dual-machine setup later). Then run drbdadm primary oradata to set the DRBD resource to primary, as shown in the figure:
Execute drbdadm role oradata to check the status, as shown below:
8.5. Mount the DRBD resource and change its owning group
Run mount /dev/drbd0 /oradata to mount the DRBD resource, as shown in the figure:
Then run the mount command to view the mount information, as shown in the figure.
The line /dev/drbd0 on /oradata type ext3 (rw) in the output indicates that the resource is mounted normally. Then run
chown -R oracle:oinstall /oradata
to change the group of /oradata, and use ls -l to check the result, as shown in the figure:
8.6. Install the Oracle 10gR2 database
See other documents for the details.
8.7. Final configuration
8.7.1. Modify the listener.ora file and add the following content, to solve the ORA-12514 error that otherwise occurs after a failover between the two machines:
(SID_DESC =
  (GLOBAL_DBNAME = orcl)
  (ORACLE_HOME = /oradata/product/10.2.0/db_1)
  (SID_NAME = orcl)
)
As shown in the figure:
8.7.2. Change the hostname and IP back to the node's original hostname and IP.
8.7.3. Terminate all Oracle processes on the active Oracle node, and unmount the Oracle resources.
Then mount the resource on the other node, start the Oracle application processes, and test. If there is no problem, you can go on to configure HA.
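A manual switchover test along those lines might look like the following sketch; the sqlplus/lsnrctl invocations are illustrative assumptions, and your shutdown procedure may differ:

```shell
# On the currently active node: stop Oracle, unmount, demote
su - oracle -c "lsnrctl stop"
su - oracle -c 'echo "shutdown immediate" | sqlplus / as sysdba'
umount /oradata
drbdadm secondary oradata

# On the other node: promote, mount, start Oracle
drbdadm primary oradata
mount /dev/drbd0 /oradata
su - oracle -c "lsnrctl start"
su - oracle -c 'echo "startup" | sqlplus / as sysdba'
```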
9. Configuration of heartbeat
9.1. Configure authkeys
Here I generate the key from random data, with the following commands:
# (echo -ne "auth 1\n1 sha1 "; dd if=/dev/urandom bs=512 count=1 | openssl md5) > /etc/ha.d/authkeys
# cat /etc/ha.d/authkeys
# chmod 600 /etc/ha.d/authkeys
The effect is as shown in the figure:
9.2. Configure ha.cf
Use the command vi / etc/ha.d/ha.cf to edit the configuration file to read as follows:
debugfile /var/log/ha-debug
logfile /var/log/ha-log
logfacility local0
auto_failback none
mcast eth0 239.0.0.43 694 1 0
udpport 694
bcast eth0
deadtime 30
initdead 30
keepalive 2
node node2.localdomain
node node3.localdomain
compression bz2
compression_threshold 2
crm respawn
apiauth mgmtd uid=root
respawn root /usr/lib/heartbeat/mgmtd -v
Then save and exit. As shown in the picture
9.3. Synchronize node HA configuration files
Run the /usr/share/heartbeat/ha_propagate command; it will prompt for the root password of the node host being synchronized, as shown in the figure.
9.4. Start heartbeat
Start heartbeat on both nodes using the following command:
# service heartbeat start
As shown in the figure:
9.5. Configure HA resources for DRBD+Oracle.
9.5.1. Run the command
# crm_attribute -t crm_config -n stonith-enabled -v false
or
# crm configure property stonith-enabled="false"
to turn off STONITH support in heartbeat. This avoids the problem that, when STONITH is enabled but the cluster contains no STONITH resources, the resources in the cluster cannot be started.
9.5.2. Empty the old configuration by submitting the following commands interactively with crm:
# crm
crm(live)# configure
crm(live)configure# erase
crm(live)configure# commit
crm(live)configure# exit
This empties the old configuration.
9.5.3. Disable quorum
HA has the concept of quorum: at least half of the nodes in the cluster must be online for the cluster to be considered to have quorum (that is, to meet the minimum legal number of nodes). If fewer than half of the nodes are online, HA considers that the cluster does not meet the node-count requirement and refuses to start the resources in the cluster. This policy is clearly unreasonable for a two-node cluster, where the failure of a single node would leave the whole cluster unable to start anything.
So, along with disabling STONITH again, run the following two commands to turn off quorum and STONITH support:
# crm configure property no-quorum-policy=ignore
# crm configure property stonith-enabled="false"
9.5.4. Configure the HA resources with pacemaker
Set DRBD up as a master/slave (active/standby) resource, put the other Oracle resources into one group, and bind the two together with "order" and "colocation" constraints so they run on the same node. Based on monitoring needs, add operations such as start timeouts and monitor intervals.
Enter crm interactive mode:
# crm configure
crm(live)configure#
Then enter the following in the configure state:
primitive drbd_oracle ocf:linbit:drbd \
        params drbd_resource="oradata" \
        op monitor interval="15s"
primitive fs_oracle ocf:heartbeat:Filesystem \
        params device="/dev/drbd/by-res/oradata" directory="/oradata" fstype="ext3"
primitive ip_oracle ocf:heartbeat:IPaddr2 \
        params ip="10.109.1.37" nic="bond0" cidr_netmask="24"
primitive oracle_instant ocf:heartbeat:oracle \
        op monitor interval="120" timeout="30" \
        op start interval="0" timeout="120" \
        params sid="orcl"
primitive oracle_lsnrctl ocf:heartbeat:oralsnr \
        params sid="orcl" \
        operations $id="oracle_lsnrctl-operations" \
        op monitor interval="10" timeout="30"
primitive route_oracle ocf:heartbeat:Route \
        operations $id="route_oracle-operations" \
        params destination="0.0.0.0/0" gateway="10.109.1.1"
group group_oracle ip_oracle route_oracle fs_oracle oracle_lsnrctl oracle_instant \
        meta target-role="Started" is-managed="true"
ms ms_drbd_oracle drbd_oracle \
        meta master-max="1" master-node-max="1" \
        clone-max="2" clone-node-max="1" notify="true"
colocation oracle_on_drbd inf: group_oracle ms_drbd_oracle:Master
order oracle_after_drbd inf: ms_drbd_oracle:promote group_oracle:start
Finally, submit it with commit.
Notes:
A. According to the DRBD website, ocf:heartbeat:drbd has been deprecated and is no longer recommended, so ocf:linbit:drbd is used instead.
B. For the IP resource agent, ocf:heartbeat:IPaddr2 is used, which sets the virtual IP with the ip command. Once the virtual IP is active it cannot be seen with ifconfig, but it can be viewed with ip addr.
C. When you enter the commands above, you may get warnings that the start and stop timeouts are below the recommended values; these can be added as "op" operations according to how long the application actually takes to start and stop in your environment (see the oracle_instant resource).
D. ms is used to define the master/slave (active/standby) resource.
E. colocation defines the "colocation" constraint: group_oracle and ms_drbd_oracle must run on the same machine, and if ms_drbd_oracle cannot run as Master, group_oracle will not run; the state of group_oracle, on the other hand, does not affect ms_drbd_oracle.
F. order defines the "order" constraint: first promote the ms_drbd_oracle resource (set the drbd device to the Primary state), then start the group_oracle resource group.
G. In the mount operation, /dev/drbd/by-res/oradata is a convenience symlink to /dev/drbd0 created by drbd.
H. If a command is long, you can use "\" to continue it on the next line, but note that the leading whitespace on the continuation line must consist of spaces only, not Tab characters.
After the configuration is submitted, the two resources will run automatically (there is a delay according to the global configuration), or you can manually start the resources with the following command:
# crm resource start group_oracle
9.6. Management commands of HA
9.6.1. Check the HA status and execute the following command:
# crm status
The execution effect is as shown in the figure:
9.6.2. To switch over manually, run the following command:
# crm resource migrate group_oracle node2.localdomain
As shown in the picture.
Before the switch, all the resources were on node3.localdomain. If you run crm status again after executing the command, you can see that all the resources have been taken over by node2.localdomain. As shown in the picture.
9.7. Maintenance
Sometimes we need to do maintenance on the current host. In that case we can first migrate the resources to the standby node, and then put the host into the standby ("unmanaged") state, as shown in the figure:
Click standby in the figure to put the selected host into the standby state, as shown in the figure:
After that, you can stop the heartbeat service on that host, and even shut the machine down for maintenance work.
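The figures show the GUI; presumably the same standby/online toggle can also be done from the crm shell:

```shell
# put a node into standby (its resources migrate away):
crm node standby node3.localdomain
# ... do the maintenance, then bring it back:
crm node online node3.localdomain
```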
This concludes the sample analysis of the RHEL 5.5 + DRBD + heartbeat + Oracle 10gR2 dual-computer installation. I hope the content above has been of some help to you.