2025-02-25 Update From: SLTechnology News&Howtos
Shulou (Shulou.com) 06/01 Report --
In this article, the editor shares how to build a Hadoop pseudo-cluster. Most readers are probably not familiar with the process, so I am sharing it for your reference; I hope you learn a lot from it. Let's get started!
Preparatory work:
1. A laptop with 4 GB of RAM, running Windows 7
2. Tools: VMware Workstation
3. Virtual machines: three CentOS 6.5 (64-bit) instances, one master and two slaves
Install CentOS on the master host first.
1. System environment settings (configure the master node first)
1.1 Modify the hostname
vim /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=master
NTPSERVERARGS=iburst
1.2 Modify the hostname-to-IP mapping (hosts)
vim /etc/hosts
Add: 192.168.111.131 master
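If the two slave nodes are to be added later, their lines go into the same file. A sketch of the full hosts file; note that only the master's address appears in this article, so the two slave IPs below are hypothetical examples:

```
192.168.111.131 master
192.168.111.132 slave1   # hypothetical slave address
192.168.111.133 slave2   # hypothetical slave address
```

The same file should be kept identical on all three machines so that the nodes can resolve each other by name.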
1.3 Turn off the firewall
service iptables status      # view the firewall status
service iptables stop        # stop the firewall
chkconfig iptables --list    # view whether the firewall starts at boot
chkconfig iptables off       # disable the firewall at boot
1.4 Restart the system
# reboot
2. Install the JDK
1. Download JDK 7 from http://www.Oracle.com/technetwork/java/javase/downloads/jdk7-downloads-1880260.html
2. Upload to the virtual machine
3. Extract the JDK
# mkdir opt
# tar -zxvf jdk-7u79-linux-x64.tar.gz -C opt
4. Add Java to the environment variables
# vim /etc/profile
# add at the end of the file:
export JAVA_HOME=/home/master/opt/jdk1.7.0_79
export PATH=$PATH:$JAVA_HOME/bin
# reload the profile and verify:
source /etc/profile
java -version
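A minimal sketch of checking that the variables were actually picked up after `source /etc/profile`. The two `export` lines repeat what was added to /etc/profile above so the snippet is self-contained; the JAVA_HOME path is the one used throughout this article:

```shell
# Re-state the profile additions so this snippet stands alone.
export JAVA_HOME=/home/master/opt/jdk1.7.0_79
export PATH=$PATH:$JAVA_HOME/bin

# Print the value and confirm the bin directory is on PATH.
echo "JAVA_HOME=$JAVA_HOME"
case ":$PATH:" in
  *":$JAVA_HOME/bin:"*) echo "PATH ok" ;;
  *) echo "PATH is missing JAVA_HOME/bin" ;;
esac
```

If `java -version` fails after this check passes, the JDK was extracted to a different directory than the one JAVA_HOME points to.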
3. Configure passwordless SSH login
$ ssh-keygen -t rsa        # press Enter four times
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ chmod 600 ~/.ssh/authorized_keys    # sshd on CentOS rejects key files with loose permissions
$ cat ~/.ssh/authorized_keys          # view the key
4. Install hadoop 2.6.0
First, extract the Hadoop archive into the opt folder.
4.1 configure hadoop
4.1.1 configuring hadoop-env.sh
Set JAVA_HOME to the location you just configured:
export JAVA_HOME=/home/master/opt/jdk1.7.0_79
4.1.2 configuring core-site.xml
Add the following:
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://master:9000</value>
</property>
<property>
  <name>hadoop.tmp.dir</name>
  <value>/home/master/opt/hadoop-2.6.0/tmp</value>
</property>
<property>
  <name>io.file.buffer.size</name>
  <value>4096</value>
</property>
4.1.3 configuring hdfs-site.xml
Add the following:
<property>
  <name>dfs.replication</name>
  <value>2</value>
</property>
<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:///home/master/opt/hadoop-2.6.0/dfs/name</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:///home/master/opt/hadoop-2.6.0/dfs/data</value>
</property>
<property>
  <name>dfs.nameservices</name>
  <value>h2</value>
</property>
<property>
  <name>dfs.namenode.secondary.http-address</name>
  <value>master:50090</value>
</property>
<property>
  <name>dfs.webhdfs.enabled</name>
  <value>true</value>
</property>
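The tmp, name, and data directories referenced in core-site.xml and hdfs-site.xml can be created up front; Hadoop will usually create missing directories itself, but pre-creating them makes permission problems easier to spot. A sketch, with the paths taken from this article's configuration (HADOOP_DIR is just a convenience variable, not a Hadoop setting):

```shell
# Pre-create the storage directories named in the config files above.
HADOOP_DIR="$HOME/opt/hadoop-2.6.0"
mkdir -p "$HADOOP_DIR/tmp" "$HADOOP_DIR/dfs/name" "$HADOOP_DIR/dfs/data"

# Confirm the HDFS directories exist.
ls "$HADOOP_DIR/dfs"
```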
4.1.4 configuring mapred-site.xml
cp mapred-site.xml.template mapred-site.xml
Add the following:
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
  <final>true</final>
</property>
<property>
  <name>mapreduce.jobtracker.http.address</name>
  <value>master:50030</value>
</property>
<property>
  <name>mapreduce.jobhistory.address</name>
  <value>master:10020</value>
</property>
<property>
  <name>mapreduce.jobhistory.webapp.address</name>
  <value>master:19888</value>
</property>
<property>
  <name>mapred.job.tracker</name>
  <value>http://master:9001</value>
</property>
4.1.5 configuring yarn-site.xml
Add the following:
<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>master</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
<property>
  <name>yarn.resourcemanager.address</name>
  <value>master:8032</value>
</property>
<property>
  <name>yarn.resourcemanager.scheduler.address</name>
  <value>master:8030</value>
</property>
<property>
  <name>yarn.resourcemanager.resource-tracker.address</name>
  <value>master:8031</value>
</property>
<property>
  <name>yarn.resourcemanager.admin.address</name>
  <value>master:8033</value>
</property>
<property>
  <name>yarn.resourcemanager.webapp.address</name>
  <value>master:8088</value>
</property>
4.2 add hadoop to the environment variable
export HADOOP_HOME=/home/master/opt/hadoop-2.6.0
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
source /etc/profile
4.3 format namenode
hdfs namenode -format
4.4 start hadoop
Start HDFS first:
sbin/start-dfs.sh
Then start YARN:
sbin/start-yarn.sh
4.5 Verify that the startup succeeded
jps
2871 ResourceManager
3000 Jps
2554 NameNode
2964 NodeManager
2669 DataNode
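The check above can be scripted so a node can be verified without reading the listing by eye. A sketch that scans `jps` output for the expected daemons; the sample output from above is hard-coded here so the snippet is self-contained, and on a real node you would use `jps_output="$(jps)"` instead:

```shell
# Sample `jps` output from the verification step above.
jps_output="2871 ResourceManager
3000 Jps
2554 NameNode
2964 NodeManager
2669 DataNode"

# Check that every expected Hadoop daemon appears in the listing.
ok=1
for daemon in NameNode DataNode ResourceManager NodeManager; do
  printf '%s\n' "$jps_output" | grep -qw "$daemon" \
    || { echo "missing: $daemon"; ok=0; }
done
[ "$ok" -eq 1 ] && echo "all daemons running"
```

If a daemon is missing, its log file under $HADOOP_HOME/logs is the first place to look.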
That is the whole of "how to build a Hadoop pseudo-cluster". Thank you for reading, and I hope the content shared here is helpful to you!