This article explains how to deploy hadoop. The content is simple and clear and easy to learn and understand; follow along step by step to learn how to deploy hadoop.
Hadoop deployment
Hadoop introduction:
Broad sense: the ecosystem built around the apache hadoop software (hive, zookeeper, spark, hbase, ...)
Narrow sense: the apache hadoop software itself
Related official websites:
hadoop.apache.org
hive.apache.org
spark.apache.org
cdh-hadoop: http://archive.cloudera.com/cdh5/cdh/5/hadoop-2.6.0-cdh5.7.0.tar.gz
Hadoop software and versions:
1.x: not used by enterprises
2.x: the mainstream
3.x: no enterprise dares to use it yet
a. Still full of pits to step into
b. Many companies deploy their big data environment with CDH5.x (www.cloudera.com), which packages the components of the ecosystem into one system, with 2.6.0-cdh5.7.0 installed in it as the base. Note that this version is not equal to apache hadoop 2.6.0, because the hadoop in cdh5.7.0 carries additional bug fixes.
Hadoop software components:
hdfs: storage, a distributed file system
mapreduce: computation; jobs (job1, job2, ...) can be written in java, but enterprises rarely code mapreduce directly (development is difficult and the code is complex)
yarn: resource and job scheduling (cpu and memory allocation), i.e., deciding which job is assigned to which node
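To see the three pieces cooperate, here is a minimal smoke test you can run once HDFS is started (step 8 below). It is a sketch assuming the examples jar that ships under share/hadoop/mapreduce; the exact jar file name varies by release:
bin/hdfs dfs -mkdir -p /user/hadoop/input                         # hdfs: storage
bin/hdfs dfs -put etc/hadoop/core-site.xml /user/hadoop/input
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar \
  wordcount /user/hadoop/input /user/hadoop/output                # a mapreduce job (scheduled by yarn when yarn is configured and started)
bin/hdfs dfs -cat /user/hadoop/output/part-r-00000                # view the result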
-- if you need to install ssh
Ubuntu Linux:
$ sudo apt-get install ssh
$ sudo apt-get install rsync
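The walkthrough below uses CentOS rather than Ubuntu; there the equivalent would come from yum (a sketch, assuming the stock package names):
$ sudo yum install -y openssh-server openssh-clients rsync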
-
Installation section:
Environment: CentOS, pseudo-distributed installation (i.e., single-node installation)
HADOOP version: hadoop-2.6.0-cdh5.7.0.tar.gz
JDK version: jdk-8u45-linux-x64.gz
Installation principle: each piece of software runs as its own dedicated user
Linux: the root user
Mysql: the mysqladmin user
Hadoop: the hadoop user
1. Create the hadoop user and upload the hadoop software
**********************
useradd hadoop
su - hadoop
mkdir app
cd app/
# upload the hadoop package into this directory
The result is as follows:
[hadoop@hadoop app]$ pwd
/home/hadoop/app
[hadoop@hadoop app]$ ls -l
total 304288
drwxr-xr-x 15 hadoop hadoop      4096 Feb 14 23:37 hadoop-2.6.0-cdh5.7.0
-rw-r--r--  1 root   root   311585484 Feb 14 17:32 hadoop-2.6.0-cdh5.7.0.tar.gz
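How the package gets uploaded is not shown above; one hedged option from a local machine is scp (the IP below is a placeholder, not from the original session):
# run on the machine that holds the tarball; 192.168.1.100 stands in for your server's IP
scp hadoop-2.6.0-cdh5.7.0.tar.gz hadoop@192.168.1.100:/home/hadoop/app/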
**********************
2. Deploy the jdk; use the CDH version of the JDK
**********************
Create the JDK directory and upload the JDK package (use the CDH version of the JDK)
su - root
mkdir /usr/java          # upload the JDK package to this directory
mkdir /usr/share/java    # when deploying a CDH environment, the jdbc jar packages must be placed in this directory, otherwise an error will be reported
cd /usr/java
tar -xzvf jdk-8u45-linux-x64.gz    # decompress the JDK
drwxr-xr-x 8 uucp 143 4096 Apr 11  2015 jdk1.8.0_45    # note that the user and group are wrong after decompression; change them to root:root
chown -R root:root jdk1.8.0_45
drwxr-xr-x 8 root root 4096 Apr 11  2015 jdk1.8.0_45
The result is as follows:
[root@hadoop java]# pwd
/usr/java
[root@hadoop java]# ll
total 169216
drwxr-xr-x 8 root root      4096 Apr 11  2015 jdk1.8.0_45
-rw-r--r-- 1 root root 173271626 Jan 26 18:35 jdk-8u45-linux-x64.gz
**********************
3. Set the java environment variables
su - root
vi /etc/profile
export JAVA_HOME=/usr/java/jdk1.8.0_45
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib:$CLASSPATH
export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH
source /etc/profile
[root@hadoop java]# which java
/usr/java/jdk1.8.0_45/bin/java
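As a further sanity check, java -version should report the freshly installed JDK (the build numbers below are what 8u45 typically prints, quoted from memory):
[root@hadoop java]# java -version
java version "1.8.0_45"
Java(TM) SE Runtime Environment (build 1.8.0_45-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.45-b02, mixed mode)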
**********************
4. Decompress hadoop
su - hadoop
cd /home/hadoop/app
[hadoop@hadoop002 app]$ tar -xzvf hadoop-2.6.0-cdh5.7.0.tar.gz
[hadoop@hadoop002 app]$ cd hadoop-2.6.0-cdh5.7.0
[hadoop@hadoop002 hadoop-2.6.0-cdh5.7.0]$ ll
total 76
drwxr-xr-x  2 hadoop hadoop 4096 Mar 24  2016 bin                    # executable scripts
drwxr-xr-x  2 hadoop hadoop 4096 Mar 24  2016 bin-mapreduce1
drwxr-xr-x  3 hadoop hadoop 4096 Mar 24  2016 cloudera
drwxr-xr-x  6 hadoop hadoop 4096 Mar 24  2016 etc                    # configuration directory (conf)
drwxr-xr-x  5 hadoop hadoop 4096 Mar 24  2016 examples
drwxr-xr-x  3 hadoop hadoop 4096 Mar 24  2016 examples-mapreduce1
drwxr-xr-x  2 hadoop hadoop 4096 Mar 24  2016 include
drwxr-xr-x  3 hadoop hadoop 4096 Mar 24  2016 lib                    # jar package directory
drwxr-xr-x  2 hadoop hadoop 4096 Mar 24  2016 libexec
drwxr-xr-x  3 hadoop hadoop 4096 Mar 24  2016 sbin                   # start/stop scripts for the hadoop components
drwxr-xr-x  4 hadoop hadoop 4096 Mar 24  2016 share
drwxr-xr-x 17 hadoop hadoop 4096 Mar 24  2016 src
[hadoop@hadoop002 hadoop-2.6.0-cdh5.7.0]$
**********************
5. Configure hadoop
su - hadoop
cd /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/etc/hadoop    # the package was already extracted in step 4
vi core-site.xml
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>
vi hdfs-site.xml
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>
Also configure the environment variables of hadoop, otherwise an error will be reported at startup:
vi /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/etc/hadoop/hadoop-env.sh
export HADOOP_CONF_DIR=/home/hadoop/app/hadoop-2.6.0-cdh5.7.0/etc/hadoop
export JAVA_HOME=/usr/java/jdk1.8.0_45
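One caveat worth adding: with only the two properties above, HDFS data lands under hadoop.tmp.dir, which defaults to /tmp/hadoop-${user.name} and can be wiped on reboot. A common extra property for core-site.xml (the path below is an example, not from the original steps):
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/tmp</value>
    </property>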
**********************
6. Configure the ssh localhost password-free trust relationship
su - hadoop
ssh-keygen                  # keep pressing Enter
cd .ssh                     # you can see two files here
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys    # generate the authorized_keys trust file
ssh localhost date
The authenticity of host 'localhost (127.0.0.1)' can't be established.
RSA key fingerprint is b1:94:33:ec:95:89:bf:06:3b:ef:30:2f:d7:8e:d2:4c.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'localhost' (RSA) to the list of known hosts.
Wed Feb 13 22:41:17 CST 2019
chmod 600 ~/.ssh/authorized_keys    # very important: without the permission change, ssh localhost date keeps asking for a password, but the hadoop user does not have one at all
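Equivalently, the whole trust setup can be scripted without the interactive prompts (a sketch with standard ssh-keygen flags; same effect as pressing Enter through the questions):
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa           # empty passphrase, default key path
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys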
**********************
7. Format the namenode
[hadoop@hadoop002 hadoop-2.6.0-cdh5.7.0]$ bin/hdfs namenode -format
That is:
cd /home/hadoop/app/hadoop-2.6.0-cdh5.7.0
bin/hdfs namenode -format    # Q: why does entering bin/ and then running `hdfs namenode -format` say the hdfs command cannot be found? A: the current directory is not on PATH, so inside bin/ it must be invoked as ./hdfs
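On success, the format output should contain a line like the following near the end (paraphrased from memory; the directory depends on hadoop.tmp.dir):
INFO common.Storage: Storage directory /tmp/hadoop-hadoop/dfs/name has been successfully formatted.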
**********************
8. Start the hadoop services
[hadoop@hadoop002 hadoop-2.6.0-cdh5.7.0]$ sbin/start-dfs.sh
19/02/13 22:47:42 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [localhost]
localhost: starting namenode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-namenode-hadoop002.out
localhost: starting datanode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-datanode-hadoop002.out
Starting secondary namenodes [0.0.0.0]
The authenticity of host '0.0.0.0 (0.0.0.0)' can't be established.
RSA key fingerprint is b1:94:33:ec:95:89:bf:06:3b:ef:30:2f:d7:8e:d2:4c.
Are you sure you want to continue connecting (yes/no)? yes    # enter yes, because the ssh trust relationship was configured for localhost, not 0.0.0.0
0.0.0.0: Warning: Permanently added '0.0.0.0' (RSA) to the list of known hosts.
0.0.0.0: starting secondarynamenode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-secondarynamenode-hadoop002.out
19/02/13 22:49:47 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[hadoop@hadoop002 hadoop-2.6.0-cdh5.7.0]$
[hadoop@hadoop002 hadoop-2.6.0-cdh5.7.0]$ sbin/stop-dfs.sh
[hadoop@hadoop002 hadoop-2.6.0-cdh5.7.0]$ sbin/start-dfs.sh
19/02/13 22:57:23 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [localhost]
localhost: starting namenode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-namenode-hadoop002.out
localhost: starting datanode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-datanode-hadoop002.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-secondarynamenode-hadoop002.out
19/02/13 22:57:38 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[hadoop@hadoop002 hadoop-2.6.0-cdh5.7.0]$ jps    # verify the startup: besides Jps itself, the following three services must be running
15059 Jps
14948 SecondaryNameNode    # the second-in-command; periodically checkpoints the namenode's metadata
14783 DataNode             # data node; stores the actual blocks
14655 NameNode             # name node; the master that handles reads and writes
[hadoop@hadoop002 hadoop-2.6.0-cdh5.7.0]$
If you can open http://ip:50070, the installation succeeded; this is hadoop's web management interface (screenshot omitted).
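If no browser is handy, a hedged headless check of the web UI from the server itself (assuming the default 50070 port):
$ curl -s -o /dev/null -w "%{http_code}\n" http://localhost:50070
200    # 200 means the namenode web interface is serving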
9. Configure the hadoop command environment variables
**********************
[hadoop@hadoop002 hadoop-2.6.0-cdh5.7.0]$ cat ~/.bash_profile
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi
# User specific environment and startup programs
export HADOOP_PREFIX=/home/hadoop/app/hadoop-2.6.0-cdh5.7.0
export PATH=$HADOOP_PREFIX/bin:$PATH
source ~/.bash_profile
echo $HADOOP_PREFIX
/home/hadoop/app/hadoop-2.6.0-cdh5.7.0
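After sourcing, the hadoop commands should resolve from any directory, e.g.:
[hadoop@hadoop002 ~]$ which hdfs
/home/hadoop/app/hadoop-2.6.0-cdh5.7.0/bin/hdfs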
**********************
10. Operating hadoop: the hdfs dfs commands are very similar to Linux commands
[hadoop@hadoop002 hadoop-2.6.0-cdh5.7.0]$ bin/hdfs dfs -ls /
19/02/13 23:08:35 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[hadoop@hadoop002 hadoop-2.6.0-cdh5.7.0]$ ls /
bin dev home lib64 media opt root sbin srv tmp var
boot etc lib lost+found mnt proc run selinux sys usr
[hadoop@hadoop002 hadoop-2.6.0-cdh5.7.0]$ bin/hdfs dfs -mkdir /ruozedata
19/02/13 23:11:51 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[hadoop@hadoop002 hadoop-2.6.0-cdh5.7.0]$ bin/hdfs dfs -ls /
19/02/13 23:11:56 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 1 items
drwxr-xr-x   - hadoop supergroup          0 2019-02-13 23:11 /ruozedata
[hadoop@hadoop002 hadoop-2.6.0-cdh5.7.0]$ ls /    # note: /ruozedata was created in hdfs, so it does not show up in the local filesystem
bin dev home lib64 media opt root sbin srv tmp var
boot etc lib lost+found mnt proc run selinux sys usr
[hadoop@hadoop002 hadoop-2.6.0-cdh5.7.0]$
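A few more everyday hdfs dfs operations in the same spirit (the file names are placeholders, not from the original session):
bin/hdfs dfs -put /etc/hosts /ruozedata/    # upload a local file into hdfs
bin/hdfs dfs -cat /ruozedata/hosts          # print its contents
bin/hdfs dfs -get /ruozedata/hosts /tmp/    # copy it back to the local filesystem
bin/hdfs dfs -rm /ruozedata/hosts           # delete it from hdfs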
11. View help
[hadoop@hadoop002 hadoop-2.6.0-cdh5.7.0]$ bin/hdfs --help
Assignment:
1. Read the ssh blog excerpts:
http://blog.itpub.net/30089851/viewspace-1992210/
http://blog.itpub.net/30089851/viewspace-2127102/
2. Deploy hdfs pseudo-distributed on your own
3. Write a blog post about the hdfs pseudo-distributed deployment
Tips:
If su - zookeeper cannot switch to the zookeeper user
Solution:
In /etc/passwd, change the zookeeper user's login shell from /sbin/nologin to /bin/bash
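Instead of editing /etc/passwd by hand, the same fix can be applied with usermod as root (an equivalent sketch):
usermod -s /bin/bash zookeeper    # set the login shell
grep zookeeper /etc/passwd        # verify the last field is now /bin/bash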
Thank you for reading. The above is the content of "how to deploy hadoop"; after studying this article, you should have a deeper understanding of how to deploy hadoop, and the specifics still need to be verified in practice.