This article explains in detail how to install DKHadoop. The editor finds it very practical and shares it here for reference; I hope you get something out of it.
Part I: preparatory work
1. Configuration required for the big data platform (a quick way to check these values on each server is shown after this list):
(1) System: CentOS 6.5, 64-bit (the Desktop installation is required by default).
(2) CPU: Intel E3 or above.
(3) Memory: 8 GB minimum; 32 GB recommended; 128 GB is ideal.
(4) Hard disk: 256 GB or more; an SSD is recommended.
(5) System partitioning: unless there are special needs, allocate all space other than the swap partition to the / (root) partition.
(6) Network: unless there are special circumstances, the servers should be able to access the external network.
(7) At least three servers (their host names are arbitrary, but the password must be the same on all of them).
(8) Install as the root user (the default user).
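For reference, a quick way to check whether a server meets these requirements (these checks are not part of the DKH package) is to run a few standard CentOS commands:
Command:
cat /etc/redhat-release                      # OS version, should report CentOS release 6.5
uname -m                                     # architecture, should report x86_64
grep "model name" /proc/cpuinfo | head -1    # CPU model
free -m                                      # total memory in MB
df -h /                                      # size and free space of the root partition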
2. Remote upload and remote connection tools:
(1) If you use a virtual machine running on a personal PC as the server, you can copy the installation package directly into the virtual machine.
(2) If you use servers in a data center as the platform and cannot access them directly, use a remote tool to connect and upload the installation package (an example is shown below).
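As an example (assuming the master node's IP is 192.168.1.41, the address used later in this guide, and that you are uploading from a Linux machine), the package can be sent with scp; on Windows a graphical transfer tool serves the same purpose:
Command:
scp DKHPlantform.zip root@192.168.1.41:/root/    # upload the package to the master node's /root/ directory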
Part II: server operating system configuration
1. Modify permissions
Step: copy the installation package DKHPlantform.zip prepared earlier to the /root/ directory on the master node, extract it, and change the file permissions to 755 (owner: read, write, execute; group: read and execute; others: read and execute).
Command:
cd /root/
unzip DKHPlantform.zip
chmod -R 755 DKHPlantform
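Optionally, confirm that the permissions took effect; drwxr-xr-x in the output corresponds to 755:
Command:
ls -ld /root/DKHPlantform    # should show drwxr-xr-x for the extracted directory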
2. Set up the Hadoop cluster and configure passwordless SSH login
Steps:
Modify the hostname: vi /etc/sysconfig/network, then reboot (a sample of the file is shown below).
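For example, assuming the host names dk41, dk42 and dk43 used below, /etc/sysconfig/network on the first node would contain:
NETWORKING=yes
HOSTNAME=dk41
Set HOSTNAME to dk42 and dk43 on the other two nodes, then reboot each machine so the new name takes effect.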
(1) Modify the local hosts file and add the host-to-IP mappings.
Command:
vi /etc/hosts
Press Insert or i to enter edit mode. When you have finished editing, press Esc, then Shift+:, type wq and press Enter to save; type q! and press Enter to quit without saving.
After entering edit mode, write the host-to-IP mappings following this pattern (the host name dk41 is chosen by you, as shown below):
192.168.1.41 dk41
192.168.1.42 dk42
192.168.1.43 dk43
After editing, save and exit, then copy the hosts file to the other two machines.
Command:
scp -r /etc/hosts 192.168.1.42:/etc
scp -r /etc/hosts 192.168.1.43:/etc
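To confirm that the mappings work, you can resolve the other nodes by name from the master (this check is not part of the DKH scripts):
Command:
ping -c 1 dk42    # should reach 192.168.1.42
ping -c 1 dk43    # should reach 192.168.1.43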
(2) Prepare for passwordless login between the cluster nodes.
A) When the sshpass.sh script is executed, it reads the sshhosts and sshslaves files, which supply the master and slave host lists used by sshpass.sh.
Modify the sshhosts file and enter the host names of all machines, one per line (as shown below).
Command:
vi /root/DKHPlantform/autossh/sshhosts
Edit and save the file in vi as described above.
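With the three example hosts above, sshhosts would contain:
dk41
dk42
dk43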
Modify the sshslaves file and enter the host names of all machines except the master node, one per line (as shown below).
Command:
vi /root/DKHPlantform/autossh/sshslaves
Edit and save the file in vi as described above.
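With the same example cluster, sshslaves would contain every host except the master:
dk42
dk43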
B) Execute insExpect.sh. It installs two rpm packages so that you do not have to type the password and "yes" repeatedly while sshpass.sh runs; the script then continues automatically.
Command:
cd /root/DKHPlantform/autossh
./insExpect.sh
Follow the prompts and enter "yes" and the password several times (as shown below).
C) Execute the changeMaster.sh script (in the /root/DKHPlantform/autossh directory). It empties all files in the /root/.ssh directory so that the new keys generated while sshpass.sh runs do not conflict with old keys.
Command:
./changeMaster.sh
Follow the prompts and enter the required input (as shown below).
(3) Configure passwordless SSH across the cluster.
A) Execute the SSH script:
Command:
cd /root/
./sshpass.sh 123456
Here 123456 is the cluster password used as an example; pass your actual cluster password as the argument.
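Once sshpass.sh finishes, passwordless login can be verified by running a remote command from the master node; each command should print the remote host name without asking for a password:
Command:
ssh dk42 hostname
ssh dk43 hostname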
B) To prevent services from being blocked when accessing the servers, the firewall needs to be turned off.
Command:
cd /root/DKHPlantform/autossh
./offIptables.sh
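On CentOS 6 the firewall is the iptables service, so you can check that it is stopped and disabled (assuming that is what offIptables.sh does):
Command:
service iptables status      # should report that the firewall is not running
chkconfig --list iptables    # all runlevels should show off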
3. Install MySQL with dual hot backup
Objective: to store the metadata of Hive
Steps:
(1) Distribute the MySQL installation directory from the primary node to the second node.
Command:
scp -r /root/DKHPlantform/mysqlInst/ 192.168.1.42:/root/
(2) On the primary node, execute:
Command:
cd /root/DKHPlantform/mysqlInst/
./mysql.sh 1
SSH to the second machine (the slave node) and execute:
Command:
cd /root/mysqlInst/
./mysql.sh 2
(3) After both installations succeed, set up hot backup. Run the command below on both machines, each passing the other machine's IP address (on node 41 use 42's address, on node 42 use 41's address). The MySQL password is 123456; it is preset in the platform, so please do not change it.
Command:
source /etc/profile
./sync.sh 192.168.1.xxx root 123456
(192.168.1.xxx is the address of the other MySQL node.)
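If sync.sh configures standard MySQL master-master replication (an assumption, since the script is part of the DKH package), the hot backup can be checked on both nodes; both Slave_IO_Running and Slave_SQL_Running should report Yes:
Command:
mysql -uroot -p123456 -e "show slave status\G" | grep -E "Slave_IO_Running|Slave_SQL_Running"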
4. Create a database
Steps:
(1) Import the MySQL data tables (execute only on the master node):
Command:
mysql -uroot -p123456 < {the SQL file; here it is dkh.sql in the home directory}
For example: mysql -uroot -p123456 < dkh.sql
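To confirm the import worked, list the databases; the database created by dkh.sql (its exact name depends on the script) should appear:
Command:
mysql -uroot -p123456 -e "show databases;"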