When storing replicas, HDFS first keeps one copy on the local data node, then places another copy on a data node in a different (virtual) rack.
NameNode HA is provided by an active/standby pair: ZooKeeper performs automatic failover, a manual switch can be triggered from the command line, and the standby node may run on a data node. A working HDFS HA setup needs at least three machines.
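For reference, automatic failover in Hadoop 2.x HA is typically wired up with properties along the following lines. This is only a sketch: the nameservice name "mycluster", the nn1/nn2 hosts, and the ZooKeeper quorum are placeholders, not values from this walkthrough.

<!-- hdfs-site.xml (sketch): one logical nameservice with two NameNodes -->
<property><name>dfs.nameservices</name><value>mycluster</value></property>
<property><name>dfs.ha.namenodes.mycluster</name><value>nn1,nn2</value></property>
<property><name>dfs.namenode.rpc-address.mycluster.nn1</name><value>nn1-host:9000</value></property>
<property><name>dfs.namenode.rpc-address.mycluster.nn2</name><value>nn2-host:9000</value></property>
<property><name>dfs.ha.automatic-failover.enabled</name><value>true</value></property>
<!-- core-site.xml (sketch): ZooKeeper ensemble watched by the failover controllers -->
<property><name>ha.zookeeper.quorum</name><value>zk1:2181,zk2:2181,zk3:2181</value></property>

The manual switch mentioned above goes through the hdfs haadmin command, e.g. hdfs haadmin -failover nn1 nn2.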
Installation steps:
# yum install java
# wget http://mirrors.hust.edu.cn/apache/hadoop/common/hadoop-2.6.0/hadoop-2.6.0.tar.gz
# tar -zxvf hadoop-2.6.0.tar.gz
# mv hadoop-2.6.0 /usr/local/hadoop
SSH password-free login; distribute the public key:
# ssh-keygen -t rsa
# ssh-copy-id -i /root/.ssh/id_rsa.pub '-p 22 root@192.168.128.129'
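A quick way to confirm the key was distributed (same host and port as above) is to run a command over ssh and check that no password prompt appears:

# ssh -p 22 root@192.168.128.129 hostname   # should print the remote hostname without asking for a password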
# vim /usr/local/hadoop/etc/hadoop/hadoop-env.sh
export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.79.x86_64/jre   # points at the JDK directory
export HADOOP_PID_DIR=${HADOOP_HOME}/pids
export HADOOP_SSH_OPTS="-p 1234"   # only needed when sshd does not listen on port 22
# vim /usr/local/hadoop/etc/hadoop/core-site.xml   # edit the main parameters on the master node
<property>
  <name>hadoop.tmp.dir</name>
  <value>file:/hdfs/tmp</value>   <!-- temporary storage for data uploaded to HDFS; several disks can be listed to add redundancy and throughput -->
</property>
<property>
  <name>fs.default.name</name>
  <value>hdfs://192.168.128.129:9000</value>   <!-- default access address of the master (NameNode) -->
</property>
# mkdir -p /hdfs/tmp
# vim /usr/local/hadoop/etc/hadoop/hdfs-site.xml   # edit the HDFS storage parameters
<property>
  <name>dfs.replication</name>
  <value>2</value>   <!-- number of replicas per block -->
</property>
<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:/hdfs/name</value>   <!-- where the NameNode stores its data; several disks can be listed to add redundancy and throughput -->
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:/hdfs/nodedata</value>   <!-- where DataNodes store blocks; several disks can be listed -->
</property>
<property>
  <name>dfs.namenode.secondary.http-address</name>
  <value>standby-node-ip:50070</value>   <!-- HTTP address of the backup (secondary) NameNode -->
</property>
<property>
  <name>dfs.webhdfs.enabled</name>
  <value>true</value>   <!-- allow browsing the filesystem from the web UI -->
</property>
<property>
  <name>dfs.permissions</name>
  <value>false</value>   <!-- disable permission checks on file operations -->
</property>
# mkdir /hdfs/name
# mkdir /hdfs/nodedata
# vim /usr/local/hadoop/etc/hadoop/slaves   # add the IP or hostname of each data node
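The slaves file is simply one data node per line; a hypothetical example (these addresses are illustrative, only 192.168.128.129 appears elsewhere in this article):

# cat /usr/local/hadoop/etc/hadoop/slaves
192.168.128.130
192.168.128.131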
# export HADOOP_ROOT_LOGGER=DEBUG,console   # to debug the native-library error, print the log to the console
# /usr/local/hadoop/bin/hadoop fs -ls   # look mainly for the GLIBC version it requires
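Before compiling glibc from source, it can be worth checking which GLIBC symbol versions the installed libc already provides; these are standard glibc commands, and /lib64/libc.so.6 is the usual path on 64-bit CentOS:

# ldd --version                           # installed glibc version
# strings /lib64/libc.so.6 | grep GLIBC   # symbol versions the installed libc exports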
# wget http://ftp.gnu.org/gnu/glibc/glibc-2.14.tar.gz
# tar -zxvf glibc-2.14.tar.gz
# mkdir glibc-build
# cd glibc-build
# ../glibc-2.14/configure --prefix=/usr
# make && make install   # the libc5-related error at the end can be ignored
# shutdown -r 0   # reboot for the new glibc to take effect
Starting up:
# /usr/local/hadoop/bin/hadoop namenode -format   # format the NameNode
# /usr/local/hadoop/sbin/start-dfs.sh   # run on the NameNode to start HDFS; stop-dfs.sh stops it
Visit http://ip:50070 to view the cluster status in the web UI.
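To confirm the daemons actually came up, a common quick check is jps from the JDK; the process names below are the usual ones for Hadoop 2.x:

# jps   # expect NameNode (and SecondaryNameNode) on the master, DataNode on each slave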
Monitoring:
Command to get the HDFS status:
# /usr/local/hadoop/bin/hadoop dfsadmin -report
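Beyond the report, a small write/list smoke test shows whether the cluster really accepts data; the /test path and the file used here are arbitrary examples:

# /usr/local/hadoop/bin/hadoop fs -mkdir /test            # create a scratch directory
# /usr/local/hadoop/bin/hadoop fs -put /etc/hosts /test   # upload a small file
# /usr/local/hadoop/bin/hadoop fs -ls /test               # verify it is listed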
If the report shows 0 live data nodes, delete the contents of the tmp, name, and data directories on every node, then format the NameNode again and restart.
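A minimal sketch of that recovery sequence, using the directories configured above; note that it destroys everything stored in HDFS, so it is only appropriate on a fresh cluster:

# /usr/local/hadoop/sbin/stop-dfs.sh
# rm -rf /hdfs/tmp/* /hdfs/name/* /hdfs/nodedata/*   # run on every node; wipes all HDFS data
# /usr/local/hadoop/bin/hadoop namenode -format
# /usr/local/hadoop/sbin/start-dfs.sh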