Environment preparation:
Three machines, 192.168.1.251-253: 251 is ambari_master, 252 is ambari_slare1, 253 is ambari_slare2.
Machine configuration, system environment
Virtual machines running RHEL 6.6 / CentOS 6.5.
All of the following needs to be set up on every machine (I have already done it; the output below is for reference):
1. Raise the maximum number of open files
[root@ambari_slare2 ~]# ulimit -Hn
10000
[root@ambari_slare2 ~]# ulimit -Sn
10000
Addendum: [
1. Modify /etc/security/limits.conf
Edit it with vi /etc/security/limits.conf and add the following at the end of the file (you can pick your own values):
* soft nofile 32768
* hard nofile 65536
2. Modify /etc/profile
Edit it with vi /etc/profile and add the following at the end:
ulimit -n 32768
Then log in again for the change to take effect.
Description:
Strictly speaking, changing /etc/profile alone is enough to take effect, but I still suggest changing /etc/security/limits.conf as well.
Finally, if you want the change to apply to all users, it seems you would have to recompile the Linux kernel. ]
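To confirm that the new soft limit is actually in effect on every node after logging back in, a quick check like the following can be run from any node (a sketch that assumes the host names used in this article and the passwordless ssh configured in step 7 below):
for h in ambari_master ambari_slare1 ambari_slare2; do
    ssh root@$h 'echo -n "$(hostname): "; ulimit -n'    # print each node's current open-files limit
done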
2. Modify the hosts file
[root@ambari_slare2 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.251 ambari_master
192.168.1.252 ambari_slare1
192.168.1.253 ambari_slare2
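The same entries need to be present on all three machines. One simple way to push the file out from the node where it was edited (assuming the host names above and passwordless ssh):
for h in ambari_master ambari_slare1; do
    scp /etc/hosts root@$h:/etc/hosts    # copy the hosts file to the other nodes
done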
3. Set selinux to disabled
[root@ambari_slare2 ~]# getenforce
Disabled
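getenforce only reports the current state. To make the setting permanent (the usual RHEL 6 approach, not shown in the output above):
setenforce 0    # takes effect immediately; prints a notice if SELinux is already disabled
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config    # keeps it disabled after a reboot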
4. Turn off the firewall
[root@ambari_slare2 ~]# /etc/init.d/iptables status
iptables: Firewall is not running.
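If iptables is still running on any node, it can be stopped and kept off across reboots with the standard RHEL 6 service commands:
/etc/init.d/iptables stop
chkconfig iptables off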
5. Turn on ntpd
[root@ambari_slare2 ~]# /etc/init.d/ntpd status
ntpd (pid 5858) is running...
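If ntpd is not running yet, it can be started and enabled at boot, optionally syncing the clock once first (ntpdate and the pool server below are assumptions; use your own time source if you have one):
ntpdate pool.ntp.org    # optional one-off sync before the daemon starts
/etc/init.d/ntpd start
chkconfig ntpd on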
6. Set the JDK environment variables
[root@ambari_slare2 ~]# java -version
java version "1.7.0_79"
Java(TM) SE Runtime Environment (build 1.7.0_79-b15)
Java HotSpot(TM) 64-Bit Server VM (build 24.79-b02, mixed mode)
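For reference, a typical way to make the JDK available to every login shell on RHEL 6 is to append something like the following to /etc/profile (the path matches the JAVA_HOME used later in this article, but treat it as an example):
export JAVA_HOME=/usr/local/jdk1.7.0_79
export PATH=$JAVA_HOME/bin:$PATH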
7. Configure ssh
[root@ambari_slare2 ~]# cat .ssh/authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAnxCVyAwzM743vB6KB4EVLDZ0+ydsmEtuMHD0ATar8zWqPDuBvGc4un5Fv1mIBCgOt3+GyWbDznACNlDzLkwRkxU8XhhTsRFHaWb9t9rH0N9dDEWbLqE1D70MY+oN7ZMVwDSooUESRx05Eg8szoDPY+JXHF8AWgigNUhesJkMVpshI+GNV/x3a9F2aRvTyk5QibMVcmNGYXdrIxzhX8VDWAWI1soy3vAorteHORzOdzWuPZm78MUYwTA1p7z1h7q4gfG3GIEThkCss72LE1m7mIwgTNeAlxWYXckxzhQC13pS1D2dWKLucQHhVqfU0QW5mPE0f/++oyx9trFr72Aaaw== root@ambari_slare2
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAki9g5Sy4WWXkDcz78bU5oq5flWk5JL9UeL62XC0qoapD63SjFep9aRWBSabxWHaENo7G/6ES8CI2kjTIwB1syMC8ropDAx+WbkSoLPwrwqapnK49OtQ0hnTs6QMAHey3ilzWfZxKmnk0yKavFqhbfPaBYps8ewXeGdPFsRaPIJzbInwXMw4/sB1hguA0rR55fs3vJR6Px1RGSt6fq/pxX7Wmug+3JShJras8ucs3F+C491f49dNwhQBdgjHCEFabhXeSZG3ngMBX8sOMhuN19Xg/oIaa2IKX4ckIu/LbwNw5lIc+6l9kVn0Y0BLeHux6gLaQM0EfwMbsvdOK/tGZrQ== root@ambari_slare1
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAtZacjGL5llcaJLQZtCKDzqg/CQPjKFRJDmo+bIFBfrlD5jAR78fXnlhxdqg8dvSVnSyU7q73bYrV7U+ym4lqq3J7JMRGuNSBBndnwu90I175w8V4IntPS9tv/oLo9zzsPrnKYmsxXguUahEOJJErImIQ4LPJ3oBUDISxfIjEckjlvkUNThUmOMxSHVwyvpwFBzDWBcYsYJtZJbZYOdNSQyOb3AFOgwkgR+sPj+C+Kdp6yP/Ua3r/yZGGgUR+NFLM8x7Oz236cmJVy0xVFrE3BxYIJDp+VBeWb8bTdI40XCPmRvb0wpLRFy9nj7i1EMzZ7xTvCTZo48oLBsxH/obd+w== root@ambari_master
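The authorized_keys above is the result of passwordless ssh between all three nodes. If you still need to set it up, a minimal sketch (run on each node; assumes openssh-clients is installed) is:
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa    # generate a key pair with no passphrase
for h in ambari_master ambari_slare1 ambari_slare2; do
    ssh-copy-id root@$h    # append this node's public key to the target's authorized_keys
done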
8. yum -y groupinstall "Development tools"
9. yum -y install mysql*
10. yum -y install ruby* redhat-lsb* snappy*
Create local sources for ambari and hadoop:
1. Create a local source directory
Pick one server and install the http service; the /var/www/html directory is created automatically after installation (I chose 251, ambari_master).
Start httpd: service httpd start || /etc/init.d/httpd start
chkconfig httpd on
Create two directories:
mkdir /var/www/html/ambari
mkdir /var/www/html/hdp
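As a quick sanity check that httpd is serving the new directories, they can be probed over http from any node (curl is assumed to be installed; the IP is the master used in this article):
curl -I http://192.168.1.251/ambari/    # expect an HTTP 200 response
curl -I http://192.168.1.251/hdp/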
2. Install createrepo (it has to be force-installed here; add --force --nodeps)
2.1 [root@ambari_master Packages]# rpm -ivh createrepo-0.9.9-22.el6.noarch.rpm --force --nodeps
warning: createrepo-0.9.9-22.el6.noarch.rpm: Header V3 RSA/SHA256 Signature, key ID fd431d51: NOKEY
Preparing...                ########################################### [100%]
   1:createrepo             ########################################### [100%]
2.2 [root@ambari_master hdp]# yum install python*
(the output is too long to paste here; try it yourself.)
3. Create the ambari local source
Extract HDP-UTILS-1.1.0.20-centos6.tar.gz into the /var/www/html/ambari directory.
Command: tar zxvf HDP-UTILS-1.1.0.20-centos6.tar.gz -C /var/www/html/ambari
Copy the Updates-ambari-2.2.1.0 directory into /var/www/html/ambari:
cd /var/www/html/ambari/HDP-UTILS-1.1.0.20/repos/centos6
mkdir Updates-ambari-2.2.1.0
cp -r /var/www/html/ambari/Updates-ambari-2.2.1.0/ambari /var/www/html/ambari/HDP-UTILS-1.1.0.20/repos/centos6/Updates-ambari-2.2.1.0
Enter the /var/www/html/ambari directory and run: createrepo ./
The ambari local source is now complete.
Results:
[root@ambari_master ambari]# createrepo ./
Spawning worker 0 with 50 pkgs
Workers Finished
Gathering worker results
Saving Primary metadata
Saving file lists metadata
Saving other metadata
Generating sqlite DBs
Sqlite DBs complete
[root@ambari_master ambari]# ls
HDP-UTILS-1.1.0.20 repodata Updates-ambari-2.2.1.0
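Before moving on, it is worth confirming that the freshly generated repodata is reachable over http from another node (a simple check; adjust the IP to whichever host is actually running httpd, 192.168.1.251 in my setup):
curl -s http://192.168.1.251/ambari/repodata/repomd.xml | head -5    # should print the start of the repo metadata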
4. Create the hadoop local source
Copy the HDP-2.4.0 directory into /var/www/html/hdp.
Enter the /var/www/html/hdp directory and run: createrepo ./
The hadoop local source is now complete.
Results:
[root@ambari_master ambari]# cd ../hdp/
[root@ambari_master hdp]# createrepo ./
Spawning worker 0 with 179 pkgs
Workers Finished
Gathering worker results
Saving Primary metadata
Saving file lists metadata
Saving other metadata
Generating sqlite DBs
Sqlite DBs complete
[root@ambari_master hdp]# ls
HDP-2.4.0 repodata
5. Create the system DVD local source
[root@ambari_master ambari]# mount -o loop /mnt/iso/rhel-server-6.6-x86_64-dvd.iso /mnt/cdrom/
[root@ambari_master ambari]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root
                       50G   11G   36G  24% /
tmpfs                 2.0G   72K  2.0G   1% /dev/shm
/dev/sda1             477M   33M  419M   8% /boot
/dev/mapper/VolGroup-lv_home
                       45G   52M   43G   1% /home
/mnt/iso/rhel-server-6.6-x86_64-dvd.iso
                      3.6G  3.6G     0 100% /mnt/cdrom
[root@ambari_master ambari]# ls /mnt/iso/
rhel-server-6.6-x86_64-dvd.iso
[root@ambari_master ambari]# ls /mnt/cdrom/
EFI       EULA_ja           isolinux       ResilientStorage
EULA      EULA_ko           LoadBalancer   RPM-GPG-KEY-redhat-beta
EULA_de   EULA_pt           media.repo     RPM-GPG-KEY-redhat-release
EULA_en   EULA_zh           Packages       ScalableFileSystem
EULA_es   GPL               README         Server
EULA_fr   HighAvailability  release-notes  TRANS.TBL
EULA_it   images            repodata
[root@ambari_master ambari]# cat /etc/yum.repos.d/myself.repo
[daxiong]
name=daxiong
baseurl=file:///mnt/cdrom
enabled=1
gpgcheck=1
gpgkey=file:///mnt/cdrom/RPM-GPG-KEY-redhat-release
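The loop mount above does not survive a reboot. One way to make it persistent on RHEL 6 is an fstab entry (a sketch using the same paths as above; double-check it before relying on it):
echo "/mnt/iso/rhel-server-6.6-x86_64-dvd.iso /mnt/cdrom iso9660 loop,ro 0 0" >> /etc/fstab
mount -a    # re-reads fstab; the ISO should be mounted on /mnt/cdrom again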
Check:
[root@ambari_master yum.repos.d]# yum repolist
Loaded plugins: product-id, refresh-packagekit, security, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
HDP-2.4                                            | 2.9 kB     00:00
HDP-2.4/primary_db                                 |  61 kB     00:00
HDP-UTILS-1.1.0.20                                 | 2.9 kB     00:00
HDP-UTILS-1.1.0.20/primary_db                      |  31 kB     00:00
ambari-2.2.1                                       | 2.9 kB     00:00
ambari-2.2.1/primary_db                            |  31 kB     00:00
daxiong                                            | 4.1 kB     00:00
daxiong/primary_db                                 | 3.1 MB     00:00
repo id              repo name                                                      status
HDP-2.4              HDP-2.4                                                           179
HDP-UTILS-1.1.0.20   Hortonworks Data Platform Utils Version-HDP-UTILS-1.1.0.20         50
ambari-2.2.1         Ambari 2.2.1                                                       50
daxiong              daxiong                                                          3785
repolist: 4064
6. Configure mysql
CREATE USER 'ambari'@'%' IDENTIFIED BY 'ambari';
GRANT ALL PRIVILEGES ON *.* TO 'ambari'@'%';
CREATE USER 'ambari'@'localhost' IDENTIFIED BY 'ambari';
GRANT ALL PRIVILEGES ON *.* TO 'ambari'@'localhost';
CREATE USER 'ambari'@'ambari_master' IDENTIFIED BY 'ambari_master';
GRANT ALL PRIVILEGES ON *.* TO 'ambari'@'ambari_master';
FLUSH PRIVILEGES;
CREATE DATABASE hive;
FLUSH PRIVILEGES;
CREATE DATABASE ambari;
FLUSH PRIVILEGES;
CREATE USER 'hive'@'localhost' IDENTIFIED BY 'hive';
GRANT ALL PRIVILEGES ON *.* TO 'hive'@'localhost';
CREATE USER 'hive'@'%' IDENTIFIED BY 'hive';
GRANT ALL PRIVILEGES ON *.* TO 'hive'@'%';
CREATE USER 'hive'@'ambari_slave2' IDENTIFIED BY 'hive';
GRANT ALL PRIVILEGES ON *.* TO 'hive'@'ambari_slave2';
FLUSH PRIVILEGES;
CREATE USER 'rangeradmin'@'%' IDENTIFIED BY 'rangeradmin';
GRANT ALL PRIVILEGES ON *.* TO 'rangeradmin'@'%' WITH GRANT OPTION;
CREATE USER 'rangeradmin'@'localhost' IDENTIFIED BY 'rangeradmin';
GRANT ALL PRIVILEGES ON *.* TO 'rangeradmin'@'localhost' WITH GRANT OPTION;
CREATE USER 'rangeradmin'@'ambari_master' IDENTIFIED BY 'rangeradmin';
GRANT ALL PRIVILEGES ON *.* TO 'rangeradmin'@'ambari_master' WITH GRANT OPTION;
FLUSH PRIVILEGES;
CREATE USER 'root'@'%' IDENTIFIED BY 'root';
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%';
FLUSH PRIVILEGES;
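A quick way to confirm that the grants took effect is to log in as the new ambari user on the MySQL node (mysql client assumed to be installed; the password matches the CREATE USER statements above):
mysql -u ambari -pambari -e "SHOW DATABASES;"    # should list the ambari and hive databases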
7. Check the yum sources (this check can be skipped; I include it here to make the later installation easier)
[root@slare2 ~]# ls /etc/yum.repos.d/
ambari.repo  HDP.repo  HDP-UTILS.repo  mnt.repo  redhat.repo  rhel-source.repo
[root@slare2 ~]# cat /etc/yum.repos.d/ambari.repo
[ambari-2.2.1]
name=Ambari 2.2.1
baseurl=http://192.168.1.253/ambari/
gpgcheck=0
enabled=1
[HDP-UTILS-1.1.0.20]
name=Hortonworks Data Platform Utils Version-HDP-UTILS-1.1.0.20
baseurl=http://192.168.1.253/ambari/
gpgcheck=0
enabled=1
[root@slare2 ~]# cat /etc/yum.repos.d/HDP-UTILS.repo
[HDP-UTILS-1.1.0.20]
name=HDP-UTILS-1.1.0.20
baseurl=http://192.168.1.253/ambari
path=/
enabled=1
gpgcheck=0
[root@slare2 ~]# cat /etc/yum.repos.d/HDP.repo
[HDP-2.4]
name=HDP-2.4
baseurl=http://192.168.1.253/hdp
path=/
enabled=1
gpgcheck=0
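These .repo files need to exist on every node. Instead of editing them by hand three times, they can be written once and pushed out (a sketch that assumes the files already exist on the current node and that passwordless ssh is set up):
for h in ambari_master ambari_slare1 ambari_slare2; do
    scp /etc/yum.repos.d/{ambari,HDP,HDP-UTILS}.repo root@$h:/etc/yum.repos.d/
    ssh root@$h 'yum clean all && yum repolist'    # refresh the metadata and confirm the repos resolve
done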
8. Configure and install ambari
Master node:
yum -y install ambari-server (this step automatically installs the PostgreSQL database; I am using MySQL, so stop it)
service postgresql stop
chkconfig postgresql off
scp /var/lib/ambari-server/resources/Ambari-DDL-MySQL-CREATE.sql root@ambari_slave2:/root
MySQL database node:
use ambari;
source /root/Ambari-DDL-MySQL-CREATE.sql
FLUSH PRIVILEGES;
Back on the master node:
ambari-server setup
ambari-server start
JAVA_HOME=/usr/local/jdk1.7.0_79
cd /usr/share/java
rm -rf mysql-connector-java.jar
ln -s mysql-connector-java-5.1.17.jar mysql-connector-java.jar
ambari-server setup --jdbc-db=mysql --jdbc-driver=/usr/share/java/mysql-connector-java.jar
cp /usr/share/java/mysql-connector-java-5.1.17.jar /var/lib/ambari-server/resources/mysql-jdbc-driver.jar
vim /var/lib/ambari-server/resources/stacks/HDP/2.4/repos/repoinfo.xml
<baseurl>http://192.168.1.253/hdp</baseurl>
<repoid>HDP-2.4</repoid>
<reponame>HDP</reponame>
<baseurl>http://192.168.1.253/ambari</baseurl>
<repoid>HDP-UTILS-1.1.0.20</repoid>
<reponame>HDP-UTILS</reponame>
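After ambari-server start, it is worth checking that the server really came up before opening the web UI (8080 is the default Ambari web port):
ambari-server status    # should report a running server and its PID
netstat -tnlp | grep 8080    # the web UI listens on port 8080 by default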
Then you can go to the web interface to install the hadoop cluster and the required components. You will run into problems during the web installation; I will not go through them here, since the terminal output or the logs usually contain the hint or the answer, and anything you cannot figure out from those can be found online. For example, the host check commonly complains about transparent huge pages, which can be silenced with:
echo never > /sys/kernel/mm/redhat_transparent_hugepage/defrag
echo never > /sys/kernel/mm/redhat_transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag
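These echo commands only last until the next reboot. A common way to persist them on RHEL 6 is to append the same lines to /etc/rc.local (a sketch; only the paths that exist on your kernel will take effect):
cat >> /etc/rc.local <<'EOF'
echo never > /sys/kernel/mm/redhat_transparent_hugepage/enabled
echo never > /sys/kernel/mm/redhat_transparent_hugepage/defrag
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag
EOF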
In the web interface you will also need the master node's private key so that Ambari can register the child-node machines:
[root@ambari_master ~]# cat .ssh/id_rsa
More to be added later. If you have any questions, feel free to contact me directly: QQ 1591345922.