Cloudera Manager
Cloudera's platform consists of two parts: CDH and CM.
CDH is short for Cloudera's Distribution including Apache Hadoop. As the name implies, it is the Hadoop distribution released by Cloudera: it packages Apache Hadoop and provides all of its services, including HDFS, YARN, and MapReduce, together with related components such as HBase, Hive, ZooKeeper, and Kafka.
CM is short for Cloudera Manager, the management platform for CDH. It mainly consists of the CM Server and CM Agents. Through CM you can configure CDH, monitor it, raise alerts, view logs, and dynamically add and remove services.
I. Prepare the working environment
Operating system: CentOS 7.3
JDK version: 1.8
Because our operating system is CentOS 7, the following files need to be downloaded:
cloudera-manager-centos7-cm5.12.1_x86_64.tar.gz
CDH-5.12.1-1.cdh5.12.1.p0.3-el7.parcel
CDH-5.12.1-1.cdh5.12.1.p0.3-el7.parcel.sha1
manifest.json
Cloudera Manager download directory
http://archive.cloudera.com/cm5/cm/5/
CDH download directory
http://archive.cloudera.com/cdh5/parcels/5.12.1/
manifest.json download
http://archive.cloudera.com/cdh5/parcels/5.12.1/manifest.json
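Collected into one script, the downloads above look like this. This is a sketch: the URLs are assembled from the directories listed above, and the parcel file name assumes the el7 build of CDH 5.12.1.

```shell
# Sketch: fetch all four files into /opt on the primary node.
BASE_CM=http://archive.cloudera.com/cm5/cm/5
BASE_CDH=http://archive.cloudera.com/cdh5/parcels/5.12.1
PARCEL=CDH-5.12.1-1.cdh5.12.1.p0.3-el7.parcel

# Uncomment on a machine with network access to the archive:
# cd /opt
# wget $BASE_CM/cloudera-manager-centos7-cm5.12.1_x86_64.tar.gz
# wget $BASE_CDH/$PARCEL
# wget $BASE_CDH/$PARCEL.sha1
# wget $BASE_CDH/manifest.json

# Show the parcel URL that will be fetched:
echo "$BASE_CDH/$PARCEL"
```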
The CDH 5 parcel files are placed in the /opt/cloudera/parcel-repo/ directory of the primary node.
CDH-5.12.1-1.cdh5.12.1.p0.3-el7.parcel.sha1 must be renamed to CDH-5.12.1-1.cdh5.12.1.p0.3-el7.parcel.sha.
This must be noted; otherwise the system will download the CDH-5.12.1-1.cdh5.12.1.p0.3-el7.parcel file again.
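Incidentally, the .sha file simply contains the parcel's SHA-1 hash, so you can verify the download yourself before handing it to CM. A minimal sketch; verify_parcel is a hypothetical helper, demonstrated here on a scratch file rather than the real parcel:

```shell
# Compare a file's SHA-1 hash against a .sha file that contains just the
# bare hash (the format the parcel .sha file uses).
verify_parcel() {
  local parcel=$1 shafile=$2
  [ "$(sha1sum "$parcel" | awk '{print $1}')" = "$(cat "$shafile")" ]
}

# Demo on a scratch file; for the real check, point it at the parcel and
# the renamed .sha file in /opt/cloudera/parcel-repo.
echo "demo" > /tmp/demo.parcel
sha1sum /tmp/demo.parcel | awk '{print $1}' > /tmp/demo.parcel.sha
verify_parcel /tmp/demo.parcel /tmp/demo.parcel.sha && echo "parcel OK"
```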
This article uses offline installation; for online installation, please refer to the official documentation.
Hostname / IP address / installed services:
node1 (Master)  192.168.252.121  jdk, cloudera-manager, MySql
node2 (Agent)   192.168.252.122  jdk, cloudera-manager
node3 (Agent)   192.168.252.123  jdk, cloudera-manager
node4 (Agent)   192.168.252.124  jdk, cloudera-manager
node5 (Agent)   192.168.252.125  jdk, cloudera-manager
node6 (Agent)   192.168.252.126  jdk, cloudera-manager
node7 (Agent)   192.168.252.127  jdk, cloudera-manager
II. System environment setup
1. Network configuration (all nodes)
Modify the hostname.
Command format:
hostnamectl set-hostname <hostname>
Modify all nodes node1 to node7 in turn, for example:
hostnamectl set-hostname node1
Restart the server:
reboot
Modify the mapping relationship.
1. Add the following under the /etc/hosts file of node1:
vi /etc/hosts
2. View the contents of the modified /etc/hosts file:
[root@node7 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.252.121 node1
192.168.252.122 node2
192.168.252.123 node3
192.168.252.124 node4
192.168.252.125 node5
192.168.252.126 node6
192.168.252.127 node7
2. SSH password-free login
1. Remove the comments from the following options in the /etc/ssh/sshd_config file on cluster node1:
vi /etc/ssh/sshd_config
RSAAuthentication yes    # enable private key authentication
PubkeyAuthentication yes # enable public key authentication
2. Copy the modified /etc/ssh/sshd_config from node1 to every node of the cluster with the scp command:
for a in {2..7}; do scp /etc/ssh/sshd_config node$a:/etc/ssh/sshd_config; done
3. Generate the public key and private key.
Enter the following command on each node of the cluster to generate the key pair:
ssh-keygen -t rsa -P ''
4. Enter the following command on the node1 node of the cluster.
Append the public key id_rsa.pub of each node in the cluster to node1's authentication file authorized_keys:
for a in {1..7}; do ssh root@node$a cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys; done
5. Enter the following command on the node1 node of the cluster.
Copy the authentication file authorized_keys to each node's /root/.ssh/authorized_keys with the scp command:
for a in {1..7}; do scp /root/.ssh/authorized_keys root@node$a:/root/.ssh/authorized_keys; done
6. Enter the following command on each node of the cluster.
Restart the ssh service:
sudo systemctl restart sshd.service
7. Verify ssh password-free login.
Open another window to test whether you can log in without a password.
For example, on node3:
ssh root@node2
exit
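The manual spot check above can also be scripted. A small sketch; check_ssh is a hypothetical helper, and BatchMode=yes makes ssh fail instead of prompting, so any node still requiring a password shows up as FAILED rather than hanging the loop:

```shell
# Report whether password-free login works for a given host.
check_ssh() {
  local host=$1
  if ssh -o BatchMode=yes -o ConnectTimeout=5 "root@$host" true 2>/dev/null; then
    echo "$host: ok"
  else
    echo "$host: FAILED"
  fi
}

# Run from any node after the keys are distributed:
# for a in {1..7}; do check_ssh node$a; done
```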
3. Turn off the firewall:
systemctl stop firewalld.service
4. Disable SELINUX
View the current status:
[root@node1 ~]# getenforce
Enforcing
[root@node1 ~]# /usr/sbin/sestatus -v
SELinux status:                 enabled
Temporarily disable:
# set SELinux to permissive mode (temporarily disabled)
setenforce 0
# set SELinux back to enforcing mode
setenforce 1
Permanently disable:
vi /etc/selinux/config
Change SELINUX=enforcing to SELINUX=disabled.
A restart is required for this setting to take effect.
PS: I modified /etc/selinux/config on node1 and copied the configuration file to the other nodes:
for a in {2..7}; do scp /etc/selinux/config root@node$a:/etc/selinux/config; done
Restart all nodes:
reboot
5. Install JDK
To download JDK 1.8 for the Linux environment, please go to the official website to download the JDK installation file.
My link on Baidu Cloud disk: http://pan.baidu.com/s/1jIFZF9s password: u4n4
Upload it to the /opt directory and decompress:
cd /opt
tar zxvf jdk-8u144-linux-x64.tar.gz
mv jdk1.8.0_144/ /lib/jvm
Configure environment variables:
vi /etc/profile
#jdk
export JAVA_HOME=/lib/jvm
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=${JAVA_HOME}/bin:$PATH
Make the environment variables effective:
source /etc/profile
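All seven nodes need the same JDK. One way (a sketch, assuming the SSH setup from section 2) is to push the unpacked JDK and /etc/profile from node1, then confirm the version on each node. java_version below is a hypothetical helper that parses the `java -version` banner:

```shell
# Extract the quoted version string from a java binary's -version banner
# (the banner goes to stderr, hence 2>&1).
java_version() {
  "$1" -version 2>&1 | awk -F'"' '/version/ {print $2; exit}'
}

# Distribution sketch (run on node1; uncomment on a real cluster):
# for a in {2..7}; do
#   scp -r /lib/jvm root@node$a:/lib/
#   scp /etc/profile root@node$a:/etc/profile
# done
# for a in {1..7}; do echo -n "node$a: "; ssh root@node$a '/lib/jvm/bin/java -version 2>&1 | head -1'; done
```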
Verification:
[root@localhost ~]# java -version
java version "1.8.0_144"
Java(TM) SE Runtime Environment (build 1.8.0_144-b01)
Java HotSpot(TM) 64-Bit Server VM (build 25.144-b01, mixed mode)
6. Set up NTP
Install NTP on all nodes:
yum install ntp
Set up synchronization:
ntpdate -d 182.92.12.11
7. Install and configure MySQL
Install MySQL on the primary node.
MySQL depends on the libaio library:
yum search libaio
yum install libaio
Download, extract, and rename.
Usually it is extracted under /usr/local/mysql.
Rename the mysql-5.7.19-linux-glibc2.12-x86_64 folder to mysql so that it forms the /usr/local/mysql directory:
cd /opt/
wget https://dev.mysql.com/get/Downloads/MySQL-5.7/mysql-5.7.19-linux-glibc2.12-x86_64.tar.gz
tar -zxvf /opt/mysql-5.7.19-linux-glibc2.12-x86_64.tar.gz -C /usr/local/
mv /usr/local/mysql-5.7.19-linux-glibc2.12-x86_64/ /usr/local/mysql
1. Create the user group and user:
groupadd mysql
useradd mysql -g mysql
2. Create directories and authorize:
cd /usr/local/mysql/
mkdir data mysql-files
chmod 750 mysql-files
chown -R mysql .
chgrp -R mysql .
3. Initialize MySQL:
bin/mysqld --initialize --user=mysql   # MySQL 5.7.6 and up
4. Note the temporary MySQL password.
mysqld writes a log line of the form shown below; the string after root@localhost: is the temporary password:
2017-09-24T08:34:08.643206Z 1 [Note] A temporary password is generated for root@localhost: <temporary password>
5. Grant read and write permissions:
chown -R root .
chown -R mysql data mysql-files
6. Add the MySQL startup script to the system services:
cp support-files/mysql.server /etc/init.d/mysql.server
7. Grant read and write permissions to the log directory:
mkdir /var/log/mariadb
touch /var/log/mariadb/mariadb.log
chown -R mysql:mysql /var/log/mariadb
8. Modify /etc/my.cnf:
vi /etc/my.cnf
Under the [mysqld] group, comment out the socket line /var/lib/mysql/mysql.sock and add the line socket=/tmp/mysql.sock:
[mysqld]
datadir=/var/lib/mysql
#socket=/var/lib/mysql/mysql.sock
socket=/tmp/mysql.sock
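One gotcha worth noting (an assumption based on standard MySQL client behavior, not something the original covers): the command-line client also resolves the socket path, so with the server socket moved to /tmp/mysql.sock, a client still looking at the old path will fail to connect. Either add a matching [client] section to /etc/my.cnf or pass the socket explicitly:

```shell
# Option 1: add to /etc/my.cnf
# [client]
# socket=/tmp/mysql.sock

# Option 2: pass the socket on the command line
# /usr/local/mysql/bin/mysql -uroot -p --socket=/tmp/mysql.sock
```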
9. Start the MySQL service
service mysql.server start
Or
/usr/local/mysql/support-files/mysql.server start
10. Log in to MySQL:
/usr/local/mysql/bin/mysql -uroot -p
Enter password:
If you don't know the password, see step 4 above, which explains how to find the initialization temporary password.
11. Set the MySQL password.
After logging in successfully, set the MySQL password:
mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY 'mima';
mysql> flush privileges;
12. Enable remote login
mysql> grant all privileges on *.* to 'root'@'%' identified by 'mima' with grant option;
mysql> flush privileges;
mysql> exit;
8. Download dependency packages:
yum -y install chkconfig
yum -y install bind-utils
yum -y install psmisc
yum -y install libxslt
yum -y install zlib
yum -y install sqlite
yum -y install cyrus-sasl-plain
yum -y install cyrus-sasl-gssapi
yum -y install fuse
yum -y install portmap
yum -y install fuse-libs
yum -y install redhat-lsb
III. Cloudera Manager Server & Agent installation
1. Install CM Server & Agent
On all nodes, create /opt/cloudera-manager:
mkdir /opt/cloudera-manager
Upload the downloaded cloudera-manager-centos7-cm5.12.1_x86_64.tar.gz installation package to the /opt/ directory of the node1 node.
Copy cloudera-manager-centos7-cm5.12.1_x86_64.tar.gz from the node1 node to the /opt/ directory of all Server and Agent nodes:
for a in {2..7}; do scp /opt/cloudera-manager-*.tar.gz root@node$a:/opt/; done
Extract and install Cloudera Manager Server & Agent on all Server and Agent nodes:
cd /opt
tar xvzf cloudera-manager*.tar.gz -C /opt/cloudera-manager
2. Create the user cloudera-scm (all nodes)
Description of the cloudera-scm user, extracted from the official website:
Cloudera Manager Server and managed services are configured to use the user account cloudera-scm by default; creating a user with this name is the simplest approach. The created user is used automatically after installation is complete.
Execute the following to create the cloudera-scm user on all nodes:
useradd --system --home=/opt/cloudera-manager/cm-5.12.1/run/cloudera-scm-server/ --no-create-home --shell=/bin/false --comment "Cloudera SCM User" cloudera-scm
3. Configure the CM Agent
On the node1 node, modify server_host in /opt/cloudera-manager/cm-5.12.1/etc/cloudera-scm-agent/config.ini to the hostname of the primary node:
cd /opt/cloudera-manager/cm-5.12.1/etc/cloudera-scm-agent/
vi config.ini
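Instead of editing the file by hand, server_host can be set with sed. A sketch (assumption: the stock config.ini ships with server_host=localhost), demonstrated here on a scratch copy of the relevant keys:

```shell
# Point every agent at the primary node by rewriting the server_host line.
CONFIG=/opt/cloudera-manager/cm-5.12.1/etc/cloudera-scm-agent/config.ini
# sed -i 's/^server_host=.*/server_host=node1/' "$CONFIG"

# Demonstration on a scratch file:
printf 'server_host=localhost\nserver_port=7182\n' > /tmp/config.ini
sed -i 's/^server_host=.*/server_host=node1/' /tmp/config.ini
grep '^server_host=' /tmp/config.ini   # server_host=node1
```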
After modifying the file on node1, copy it to all nodes:
for a in {1..7}; do scp /opt/cloudera-manager/cm-5.12.1/etc/cloudera-scm-agent/config.ini root@node$a:/opt/cloudera-manager/cm-5.12.1/etc/cloudera-scm-agent/config.ini; done
4. Configure the database for CM Server
Initialize the database for CM5 at the primary node node1:
Download the MySQL driver package:
cd /opt/cloudera-manager/cm-5.12.1/share/cmf/lib
wget http://maven.aliyun.com/nexus/service/local/repositories/hongkong-nexus/content/Mysql/mysql-connector-java/5.1.38/mysql-connector-java-5.1.38.jar
Start the MySQL service and run the prepare script:
service mysql.server start
cd /opt/cloudera-manager/cm-5.12.1/share/cmf/schema/
./scm_prepare_database.sh mysql cm -h node1 -uroot -pmima --scm-host node1 scm scm scm
If you see the following message, congratulations, the configuration is correct:
[main] DbCommandExecutor INFO Successfully connected to database.
All done, your SCM database is configured correctly!
Format:
scm_prepare_database.sh <database type> -h <database host> -u<user> -p<password> --scm-host <scm host> <scm database> <scm user> <scm password>
The arguments correspond to: database type, database server, username, password, the node where the Cloudera Manager Server resides, and the database name, user, and password (scm scm scm above) that CM Server will use.
5. Create the Parcel directory
On the Manager node, create the directory /opt/cloudera/parcel-repo and execute the commands below. Copy the downloaded files
CDH-5.12.1-1.cdh5.12.1.p0.3-el7.parcel
CDH-5.12.1-1.cdh5.12.1.p0.3-el7.parcel.sha
manifest.json
into this directory:
mkdir -p /opt/cloudera/parcel-repo
chown cloudera-scm:cloudera-scm /opt/cloudera/parcel-repo
cd /opt/cloudera/parcel-repo
Rename CDH-5.12.1-1.cdh5.12.1.p0.3-el7.parcel.sha1 to end in .sha; otherwise the system will download CDH-5.12.1-1.cdh5.12.1.p0.3-el7.parcel again:
mv CDH-5.12.1-1.cdh5.12.1.p0.3-el7.parcel.sha1 CDH-5.12.1-1.cdh5.12.1.p0.3-el7.parcel.sha
On the Agent nodes, create the directory /opt/cloudera/parcels and execute:
mkdir -p /opt/cloudera/parcels
chown cloudera-scm:cloudera-scm /opt/cloudera/parcels
6. Start the CM Server & Agent services
Note: make sure the MySQL service is running and the firewall is off.
Execute on node1 (master):
Server
/opt/cloudera-manager/cm-5.12.1/etc/init.d/cloudera-scm-server start
Execute on node2-7 (Agents):
Agents
/opt/cloudera-manager/cm-5.12.1/etc/init.d/cloudera-scm-agent start
If you can access http://Master:7180 (username/password: admin/admin), the installation is successful.
It takes some time for the Manager to start successfully, since it needs to create the corresponding tables in the database.
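Rather than refreshing the browser while the tables are created, you can poll the port until the UI answers. A sketch; wait_for_cm is a hypothetical helper and assumes curl is installed:

```shell
# Poll a URL until it responds, or give up after N tries (5 seconds apart).
wait_for_cm() {
  local url=$1 tries=${2:-60} i
  for i in $(seq 1 "$tries"); do
    if curl -s -o /dev/null "$url" 2>/dev/null; then
      echo "CM is up at $url"
      return 0
    fi
    sleep 5
  done
  echo "timed out waiting for $url"
  return 1
}

# wait_for_cm http://node1:7180 && echo "ready to log in"
```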
IV. CDH5 installation
After CM Server & Agent start successfully, log in to the front page to install and configure CDH.
The free version of CM5 has removed the limit of 50 nodes.
After each Agent node starts normally, you can see the corresponding node in the list of hosts currently managed.
Select the nodes to install and click Continue.
Click Continue. If the local Parcel package is configured correctly, the download in the figure below should complete almost instantly; then just wait patiently for the distribution process, about 10 minutes, depending on the speed of the internal network.
(If there is a problem with the local Parcel, re-check whether steps 3 and 5 are configured correctly.)
Problems you may encounter
Problem 1
Next comes the host inspection, which may report the following problems:
Cloudera recommends setting /proc/sys/vm/swappiness to a maximum of 10. It is currently set to 30.
Use the sysctl command to change the setting at runtime, and edit /etc/sysctl.conf so the setting persists after reboot.
You can continue with the installation, but Cloudera Manager may report that your hosts are in poor health due to swapping. The following hosts will be affected: node[2-7]
echo 0 > /proc/sys/vm/swappiness
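The echo above only lasts until reboot; as the warning text says, the value should also go into /etc/sysctl.conf. A sketch; set_swappiness is a hypothetical helper, shown against a scratch file, and on a real node you would point it at /etc/sysctl.conf:

```shell
# Set vm.swappiness=0 in a sysctl configuration file, idempotently:
# replace an existing line if present, append otherwise.
set_swappiness() {
  local conf=$1
  if grep -q '^vm.swappiness' "$conf"; then
    sed -i 's/^vm.swappiness.*/vm.swappiness=0/' "$conf"
  else
    echo 'vm.swappiness=0' >> "$conf"
  fi
}

: > /tmp/sysctl.conf             # scratch stand-in for /etc/sysctl.conf
set_swappiness /tmp/sysctl.conf
set_swappiness /tmp/sysctl.conf  # second run must not duplicate the line
grep -c '^vm.swappiness' /tmp/sysctl.conf   # prints 1
```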
Problem 2
Transparent huge page compression is enabled, which can cause significant performance problems. Run
echo never > /sys/kernel/mm/transparent_hugepage/defrag
echo never > /sys/kernel/mm/transparent_hugepage/enabled
to disable this setting, then add the same commands to an initialization script such as /etc/rc.local so they are applied again when the system restarts. The following hosts will be affected: node[2-7]
echo never > /sys/kernel/mm/transparent_hugepage/defrag
echo never > /sys/kernel/mm/transparent_hugepage/enabled
V. MySQL database creation scripts
1. Create the databases:
amon:
create database amon DEFAULT CHARACTER SET utf8;
grant all on amon.* TO 'amon'@'%' IDENTIFIED BY 'amon';
hive:
create database hive DEFAULT CHARACTER SET utf8;
grant all on hive.* TO 'hive'@'%' IDENTIFIED BY 'hive';
oozie:
create database oozie DEFAULT CHARACTER SET utf8;
grant all on oozie.* TO 'oozie'@'%' IDENTIFIED BY 'oozie';
Contact author: Penglei. Source: http://www.ymq.io/2017/09/24/Cloudera-Manager
Copyright belongs to the author; please indicate the source when reprinting.