2025-04-06 Update From: SLTechnology News&Howtos
Shulou (Shulou.com) 06/03 Report --
Although Hadoop has moved into the mainstream 2.x era, for learning big data I still prefer to start from the older 0.20.2 release.
Below is the process of building a pseudo-distributed environment.
Hadoop download address:
http://archive.apache.org/dist/hadoop/core/hadoop-0.20.2/hadoop-0.20.2.tar.gz
Linux system version: CentOS 7
1. Configure hostname
[root@localhost ~]# vi /etc/sysconfig/network
# Created by anaconda
master1
[root@localhost ~]# hostname master1
2. Create groups and users to manage hadoop
[root@master1 ~]# groupadd hduser
[root@master1 ~]# useradd -g hduser hduser
[root@master1 ~]# passwd hduser
3. Resolve the hostname in /etc/hosts
[root@master1 ~]# vi /etc/hosts
192.168.11.131 master1
4. Configure sudoers permissions for hadoop
[root@master1 ~]# vi /etc/sudoers
hduser ALL=(ALL) NOPASSWD:ALL
5. Turn off selinux and firewall
[root@master1 ~]# vi /etc/sysconfig/selinux
SELINUX=enforcing  -->  SELINUX=disabled
[root@master1 ~]# systemctl stop firewalld
[root@master1 ~]# systemctl disable firewalld
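The SELinux edit above can also be scripted with sed instead of vi. A minimal sketch, run here against a scratch copy in /tmp so it needs no root; for the real change, point sed at /etc/sysconfig/selinux and note that it only takes effect after a reboot:

```shell
# Scratch copy standing in for /etc/sysconfig/selinux (illustration only).
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > /tmp/selinux.demo
# Flip enforcing to disabled in place.
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /tmp/selinux.demo
grep '^SELINUX=' /tmp/selinux.demo    # SELINUX=disabled
```
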
6. Decompress the package
[root@master1 ~]# su hduser
[hduser@master1 root]$ cd
[hduser@master1 ~]$ ll *tar*
-rw-r--r--. 1 root root  44575568 Jun 16 17:24 hadoop-0.20.2.tar.gz
-rw-r--r--. 1 root root 288430080 Mar 16  2016 jdk1.7.0_79.tar
[hduser@master1 ~]$ tar xf jdk1.7.0_79.tar
[hduser@master1 ~]$ tar zxf hadoop-0.20.2.tar.gz
[hduser@master1 ~]$ mv jdk1.7.0_79 jdk
[hduser@master1 ~]$ mv hadoop-0.20.2 hadoop
7. Configure the java environment
[hduser@master1 ~]$ vi .bashrc
export JAVA_HOME=/home/hduser/jdk
export JRE_HOME=$JAVA_HOME/jre
export PATH=$PATH:$JAVA_HOME/bin
export CLASSPATH=./:$JAVA_HOME/lib:$JAVA_HOME/jre/lib
[hduser@master1 ~]$ source .bashrc
[hduser@master1 ~]$ java -version
java version "1.7.0_79"
Java(TM) SE Runtime Environment (build 1.7.0_79-b15)
Java HotSpot(TM) 64-Bit Server VM (build 24.79-b02, mixed mode)
8. Configure hadoop
[hduser@master1 conf]$ pwd
/home/hduser/hadoop/conf
[hduser@master1 conf]$ vi hadoop-env.sh
export JAVA_HOME=/home/hduser/jdk
[hduser@master1 conf]$ vi core-site.xml
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master1:9000</value>
  </property>
</configuration>
[hduser@master1 conf]$ sudo mkdir -p /data/hadoop/data
[hduser@master1 conf]$ sudo chown -R hduser:hduser /data/hadoop/data
[hduser@master1 conf]$ vi hdfs-site.xml
<configuration>
  <property>
    <name>dfs.data.dir</name>
    <value>/data/hadoop/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
[hduser@master1 conf]$ vi mapred-site.xml
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>master1:9001</value>
  </property>
</configuration>
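Two more files in conf/ matter for topology, masters and slaves, but for a pseudo-distributed setup they can stay at their defaults: in the 0.20.2 tarball both ship containing only localhost, which is exactly what a single-node cluster needs (masters names the SecondaryNameNode host, slaves the DataNode/TaskTracker hosts):

```text
# conf/masters  -- SecondaryNameNode host (default as shipped)
localhost

# conf/slaves   -- DataNode/TaskTracker hosts (default as shipped)
localhost
```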
9. Do password-free authentication
[hduser@master1 conf]$ cd
[hduser@master1 ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hduser/.ssh/id_rsa):
Created directory '/home/hduser/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/hduser/.ssh/id_rsa.
Your public key has been saved in /home/hduser/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:qRJhPSF32QDs9tU3e0/mAx/EBC2MHamGv2WPvUw19/M hduser@master1
(randomart image omitted)
Just press Enter at every prompt.
[hduser@master1 ~]$ cd .ssh
[hduser@master1 .ssh]$ ls
id_rsa  id_rsa.pub
[hduser@master1 .ssh]$ cp id_rsa.pub authorized_keys
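Step 9 can also be scripted so that no prompts appear at all. A sketch using a scratch directory in place of ~/.ssh (adjust the paths for real use; authorized_keys should be mode 600 or sshd may refuse passwordless login):

```shell
# Scratch directory standing in for ~/.ssh (illustration only).
rm -rf /tmp/sshdemo && mkdir -p /tmp/sshdemo && chmod 700 /tmp/sshdemo
# -q quiet, -N '' empty passphrase, -f key path: fully non-interactive.
ssh-keygen -q -t rsa -N '' -f /tmp/sshdemo/id_rsa
# Authorize the key and tighten permissions.
cp /tmp/sshdemo/id_rsa.pub /tmp/sshdemo/authorized_keys
chmod 600 /tmp/sshdemo/authorized_keys
ls /tmp/sshdemo
```
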
10. Format the file system
[hduser@master1 .ssh]$ cd
[hduser@master1 ~]$ cd hadoop/bin
[hduser@master1 bin]$ ./hadoop namenode -format
18/06/19 04:02:12 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = master1/192.168.11.131
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 0.20.2
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
************************************************************/
18/06/19 04:02:13 INFO namenode.FSNamesystem: fsOwner=hduser,hduser
18/06/19 04:02:13 INFO namenode.FSNamesystem: supergroup=supergroup
18/06/19 04:02:13 INFO namenode.FSNamesystem: isPermissionEnabled=true
18/06/19 04:02:13 INFO common.Storage: Image file of size 96 saved in 0 seconds.
18/06/19 04:02:13 INFO common.Storage: Storage directory /tmp/hadoop-hduser/dfs/name has been successfully formatted.
18/06/19 04:02:13 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at master1/192.168.11.131
************************************************************/
11. Start the service
[hduser@master1 bin]$ ./start-all.sh
starting namenode, logging to /home/hduser/hadoop/bin/../logs/hadoop-hduser-namenode-master1.out
The authenticity of host 'localhost (::1)' can't be established.
ECDSA key fingerprint is SHA256:OXYl4X6F6g4TV7YriZaSvuBIFM840h/qTg8/B7BUil0.
ECDSA key fingerprint is MD5:b6:b6:04:2d:49:70:8b:ed:65:00:e2:05:b0:95:5b:6d.
Are you sure you want to continue connecting (yes/no)? yes
localhost: Warning: Permanently added 'localhost' (ECDSA) to the list of known hosts.
localhost: starting datanode, logging to /home/hduser/hadoop/bin/../logs/hadoop-hduser-datanode-master1.out
localhost: starting secondarynamenode, logging to /home/hduser/hadoop/bin/../logs/hadoop-hduser-secondarynamenode-master1.out
starting jobtracker, logging to /home/hduser/hadoop/bin/../logs/hadoop-hduser-jobtracker-master1.out
localhost: starting tasktracker, logging to /home/hduser/hadoop/bin/../logs/hadoop-hduser-tasktracker-master1.out
12. View the services
[hduser@master1 bin]$ jps
1867 JobTracker
1804 SecondaryNameNode
1597 NameNode
1971 TaskTracker
2011 Jps
1710 DataNode
[hduser@master1 bin] $
13. View service status in a browser
To view HDFS (NameNode) status, enter in the browser:
http://192.168.11.131:50070
To view MapReduce (JobTracker) status, enter in the browser:
http://192.168.11.131:50030