2025-01-16 Update From: SLTechnology News&Howtos
This article covers the steps of installing and deploying Hadoop 2.7 with Ambari 2.6. Many people run into trouble with these steps in practice, so this walkthrough shows how to handle the common situations. I hope you read it carefully and get something out of it!
Apache Ambari is a Web-based tool that supports provisioning, managing, and monitoring Apache Hadoop clusters. Ambari already supports most Hadoop components, including HDFS, MapReduce, Hive, Pig, HBase, ZooKeeper, Sqoop, and HCatalog, and manages them centrally; it is one of the top five Hadoop management tools. Ambari can install secure (Kerberos-based) Hadoop clusters, provides role-based user authentication, authorization, and auditing, and integrates with LDAP and Active Directory for user management.
The reason for choosing Ambari rather than CDH to deploy Hadoop is that the latest version of CDH supports only Hadoop 2.6.x, while Ambari supports Hadoop 2.7.3.
1. Installation and deployment. See the official site http://ambari.apache.org/ and the write-up at https://www.jianshu.com/p/73f9670f71cf. The process breaks down into the following steps:
1. Set up passwordless SSH trust between the nodes
2. Turn off the firewall and SELinux
3. Install ambari-server
4. Set up ambari-server
5. Deploy the Hadoop components from the graphical interface
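Collected as a script, steps 1-4 look roughly like the sketch below. The host names passed to the functions are placeholders, and the Ambari yum repository is assumed to be configured already; run each function as root on the appropriate machines.

```shell
#!/usr/bin/env bash
# Hedged sketch of the preparation steps; host names and repo setup are
# assumptions -- adapt them to your cluster before running.

setup_ssh_trust() {   # step 1: passwordless SSH from the Ambari server to all nodes
  ssh-keygen -t rsa -N '' -f "$HOME/.ssh/id_rsa"
  for host in "$@"; do
    ssh-copy-id -i "$HOME/.ssh/id_rsa.pub" "root@${host}"
  done
}

disable_security() {  # step 2: stop the firewall and disable SELinux
  systemctl stop firewalld && systemctl disable firewalld
  setenforce 0
  sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
}

install_ambari() {    # steps 3-4: install, configure (JDK, database), and start
  yum install -y ambari-server
  ambari-server setup
  ambari-server start
}
```

For example, run `setup_ssh_trust node1 node2 node3` on the Ambari server, `disable_security` on every node, and `install_ambari` on the server itself.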
2. The steps for adding a new node are as follows:
1. Note that the key used is the /root/.ssh/id_rsa file on the master1 node (prod-hadoop-master-01).
2. Register the node.
3. Choose the services to install (more services can also be added later).
4. Keep the default configuration.
5. Start the deployment after confirming that there are no changes.
6. Wait for the installation progress to complete, or return to the home page and let the installation finish in the background.
3. Supplement: components Ambari does not integrate, and other fixes:
1. Move the default data directories of ambari-server and ambari-agent off the / partition
ambari-agent stop
mv /var/lib/ambari-agent /data/disk1/
ln -s /data/disk1/ambari-agent /var/lib/ambari-agent
mv /usr/hdp /data/disk1/
ln -s /data/disk1/hdp/ /usr/hdp
ambari-agent start
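The stop, mv, ln -s, start pattern can be tried safely on a scratch directory first; the temp paths below are stand-ins for /var/lib/ambari-agent and /data/disk1:

```shell
# Demonstrates relocating a data directory and leaving a symlink at the old
# path, using temp directories instead of the real Ambari paths.
old_parent=$(mktemp -d)   # stands in for /var/lib
big_disk=$(mktemp -d)     # stands in for /data/disk1
mkdir -p "$old_parent/ambari-agent"
echo "agent data" > "$old_parent/ambari-agent/state.txt"

mv "$old_parent/ambari-agent" "$big_disk/"                 # move the data
ln -s "$big_disk/ambari-agent" "$old_parent/ambari-agent"  # symlink back

cat "$old_parent/ambari-agent/state.txt"   # -> agent data (old path still works)
```

Because the service only sees the old path, it keeps working after the move as long as the symlink is created before the agent is restarted.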
2. Integrating Ambari and Presto
References:
https://www.jianshu.com/p/0b5f52a959d5
https://github.com/prestodb/ambari-presto-service/releases
https://github.com/prestodb/ambari-presto-service/releases/download/v1.2/ambari-presto-1.2.tar.gz
[root@prod-hadoop-master-01 ~]# tar zxvf ambari-presto-1.2.tar.gz -C /var/lib/ambari-server/resources/stacks/HDP/2.6/services/
ambari-presto-1.2/
ambari-presto-1.2/configuration/
ambari-presto-1.2/configuration/connectors.properties.xml
ambari-presto-1.2/configuration/jvm.config.xml
ambari-presto-1.2/configuration/config.properties.xml
ambari-presto-1.2/configuration/node.properties.xml
ambari-presto-1.2/HISTORY.rst
ambari-presto-1.2/themes/
ambari-presto-1.2/themes/theme.json
ambari-presto-1.2/Makefile
ambari-presto-1.2/setup.py
ambari-presto-1.2/MANIFEST.in
ambari-presto-1.2/PKG-INFO
ambari-presto-1.2/package/
ambari-presto-1.2/package/scripts/
ambari-presto-1.2/package/scripts/presto_cli.py
ambari-presto-1.2/package/scripts/presto_worker.py
ambari-presto-1.2/package/scripts/presto_coordinator.py
ambari-presto-1.2/package/scripts/__init__.py
ambari-presto-1.2/package/scripts/params.py
ambari-presto-1.2/package/scripts/download.ini
ambari-presto-1.2/package/scripts/common.py
ambari-presto-1.2/package/scripts/presto_client.py
ambari-presto-1.2/setup.cfg
ambari-presto-1.2/ambari_presto.egg-info/
ambari-presto-1.2/ambari_presto.egg-info/dependency_links.txt
ambari-presto-1.2/ambari_presto.egg-info/not-zip-safe
ambari-presto-1.2/ambari_presto.egg-info/PKG-INFO
ambari-presto-1.2/ambari_presto.egg-info/top_level.txt
ambari-presto-1.2/ambari_presto.egg-info/SOURCES.txt
ambari-presto-1.2/LICENSE
ambari-presto-1.2/README.md
ambari-presto-1.2/metainfo.xml
ambari-presto-1.2/requirements.txt
[root@prod-hadoop-master-01 ~]# cd /var/lib/ambari-server/resources/stacks/HDP/2.6/services/
[root@prod-hadoop-master-01 services]# ls
ACCUMULO ATLAS FALCON HBASE HIVE KERBEROS MAHOUT PIG RANGER_KMS SPARK SQOOP stack_advisor.pyc STORM TEZ ZEPPELIN
ambari-presto-1.2 DRUID FLUME HDFS KAFKA KNOX OOZIE RANGER SLIDER SPARK2 stack_advisor.py stack_advisor.pyo SUPERSET YARN ZOOKEEPER
[root@prod-hadoop-master-01 services]# mv ambari-presto-1.2/ PRESTO
[root@prod-hadoop-master-01 services]# chmod -R +x PRESTO/*
[root@prod-hadoop-master-01 services]# ambari-server restart
After the restart, add the Presto service from the platform: one coordinator node and two worker nodes.
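For reference, a Presto coordinator's config.properties typically looks like the fragment below; the port, memory limit, and discovery URI here are illustrative assumptions, not values taken from this cluster:

```properties
# config.properties (coordinator) -- illustrative values only
coordinator=true
node-scheduler.include-coordinator=false
http-server.http.port=8285
query.max-memory=50GB
discovery-server.enabled=true
discovery.uri=http://prod-hadoop-master-01.hadoop:8285
```

In the ambari-presto service these settings are surfaced through the config.properties.xml configuration file seen in the extracted tarball, so they can be edited from the Ambari UI rather than by hand.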
3. Install the Kylin component
Reference: https://blog.csdn.net/vivismilecs/article/details/72763665
Download and install:
tar -zxvf apache-kylin-2.3.1-hbase1x-bin.tar.gz -C /hadoop/
cd /hadoop/
chown -R hdfs:hadoop kylin/
vim /etc/profile
source /etc/profile
echo $KYLIN_HOME
/hadoop/kylin
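The /etc/profile edit adds the export lines below (the PATH addition is an assumption that makes kylin.sh callable directly); they are demonstrated here against a temp file so the effect can be checked without touching the real profile:

```shell
# Append the Kylin environment variables and verify them, using a temp file
# in place of /etc/profile.
profile=$(mktemp)
cat >> "$profile" <<'EOF'
export KYLIN_HOME=/hadoop/kylin
export PATH=$PATH:$KYLIN_HOME/bin
EOF
source "$profile"
echo "$KYLIN_HOME"   # -> /hadoop/kylin
```

Running `echo $KYLIN_HOME` after `source /etc/profile` should print /hadoop/kylin, matching the output above.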
Switch user and check that the environment is set up correctly:
su hdfs
hive (enters the Hive shell; type quit; to exit)
hbase shell (enters the HBase shell; Ctrl+C to exit)
[hdfs@prod-hadoop-data-01 kylin]$ bin/check-env.sh
Retrieving hadoop conf dir...
KYLIN_HOME is set to /hadoop/kylin
hdfs is not in the sudoers file. This incident will be reported.
Failed to create hdfs:///kylin/spark-history. Please make sure the user has right to access hdfs:///kylin/spark-history
Troubleshooting: the hdfs user needs sudo rights, so grant them in a sudoers drop-in:
[hdfs@prod-hadoop-data-01 kylin]$ exit
[root@prod-hadoop-data-01 hadoop]# vim /etc/sudoers.d/waagent
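The drop-in needs a line granting hdfs sudo rights. The broad NOPASSWD grant below is an assumption about what was added here; a production setup would normally restrict the command list:

```
hdfs    ALL=(ALL)       NOPASSWD: ALL
```

After saving, the file can be validated with `visudo -cf /etc/sudoers.d/waagent`.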
Check again:
[hdfs@prod-hadoop-data-01 kylin]$ bin/check-env.sh
Retrieving hadoop conf dir...
KYLIN_HOME is set to /hadoop/kylin
Start Kylin:
[hdfs@prod-hadoop-data-01 kylin]$ bin/kylin.sh start
Retrieving hadoop conf dir...
KYLIN_HOME is set to /hadoop/kylin
Retrieving hive dependency...
Retrieving hbase dependency...
Retrieving hadoop conf dir...
Retrieving kafka dependency...
Retrieving Spark dependency...
Start to check whether we need to migrate acl tables
Retrieving hadoop conf dir...
KYLIN_HOME is set to /hadoop/kylin
Retrieving hive dependency...
Retrieving hbase dependency...
Retrieving hadoop conf dir...
Retrieving kafka dependency...
Retrieving Spark dependency...
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/hadoop/kylin/tool/kylin-tool-2.3.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/data/disk1/hdp/2.6.5.0-292/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/hadoop/apache-kylin-2.3.1-bin/spark/jars/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
DEBUG [main] common.KylinConfig:278: KYLIN_CONF property was not set, will seek KYLIN_HOME env variable
INFO  [main] common.KylinConfig:99: Initialized a new KylinConfig from getInstanceFromEnv
INFO  [main] persistence.ResourceStore:86: Using metadata url kylin_metadata@hbase for resource store
DEBUG [main] hbase.HBaseConnection:181: Using the working dir FS for HBase: hdfs://prod-hadoop-master-01.hadoop:8020
INFO  [main] zookeeper.RecoverableZooKeeper:120: connecting to ZooKeeper ensemble=prod-hadoop-master-01.hadoop:2181,prod-hadoop-master-02.hadoop:2181,prod-hadoop-data-01.hadoop:2181
INFO  [main-SendThread] zookeeper.ClientCnxn:1279: Session establishment complete on server prod-hadoop-data-01.hadoop/172.20.3.6:2181, sessionid = 0x163882326e1003b, negotiated timeout = 60000
INFO  [main] util.ZookeeperDistributedLock:155: 6616@prod-hadoop-data-01 acquired lock at /kylin/kylin_metadata/create_htable/kylin_metadata/lock
DEBUG [main] hbase.HBaseConnection:337: Creating HTable 'kylin_metadata'
INFO  [main] client.HBaseAdmin:789: Created kylin_metadata
INFO  [main] util.ZookeeperDistributedLock:234: 6616@prod-hadoop-data-01 released lock at /kylin/kylin_metadata/create_htable/kylin_metadata/lock
INFO  [close-hbase-conn] hbase.HBaseConnection:137: Closing HBase connections...
INFO  [close-hbase-conn] zookeeper.ZooKeeper:684: Session: 0x163882326e1003b closed
A new Kylin instance is started by hdfs. To stop it, run 'kylin.sh stop'
Check the log at /hadoop/kylin/logs/kylin.log
Web UI is at http://:7070/kylin
That is the end of the steps for installing and deploying Hadoop 2.7 with Ambari 2.6. Thank you for reading!