
How to build a single Node with Hadoop


This article shows how to set up a single-node Hadoop installation. The steps are simple and clear, and I hope they help resolve any doubts you have. Let's work through "How to build a single Node with Hadoop" together.

# Hadoop Single-Node Setup

Environment: VirtualBox, Ubuntu 14.04 LTS

## Install JDK

```bash
# check the current Java version
java -version
# refresh the package index
sudo apt-get update
# install the default JDK
sudo apt-get install default-jdk
# confirm the installed Java version
java -version
```

## Install SSH

```bash
sudo apt-get install ssh
```

## Install rsync

```bash
sudo apt-get install rsync
```
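The JAVA_HOME path used later in `.bashrc` and `hadoop-env.sh` has to match where the JDK actually landed. A minimal check, assuming the standard Ubuntu OpenJDK package layout:

```bash
# resolve the real path of the java binary; the JDK home is this path
# minus the trailing /jre/bin/java or /bin/java component
readlink -f "$(which java)"
# list the installed JVMs; on Ubuntu 14.04 the default JDK is typically
# /usr/lib/jvm/java-7-openjdk-amd64
ls /usr/lib/jvm/
```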

## Configure SSH

```bash
# generate a DSA key pair with an empty passphrase
ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
# check that the id_dsa.pub file was created
ll ~/.ssh
# add the public key to the authorized keys file
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
```
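Hadoop's start scripts log in to localhost over SSH, so it is worth confirming that key-based login works before going further. A quick check (the first connection may ask you to accept the host key):

```bash
# should log in without prompting for a password
ssh localhost
exit
```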

## Install Hadoop

First make sure the virtual machine can reach the network: in VirtualBox, set the network adapter to network address translation (NAT). Hadoop releases are available from the official archive at https://archive.apache.org/dist/hadoop/common

```bash
# if you choose 2.6.0, download it with wget
wget https://archive.apache.org/dist/hadoop/common/hadoop-2.6.0/hadoop-2.6.0.tar.gz
# extract the archive (optionally verify the download first against its
# .md5 checksum; see the command history at the end of the article)
sudo tar -zxvf hadoop-2.6.0.tar.gz
# move it to the target path
sudo mv hadoop-2.6.0 /usr/local/hadoop
```

Set the environment variables:

```bash
sudo gedit ~/.bashrc
```

```bash
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64            # Java path
export HADOOP_HOME=/usr/local/hadoop                          # Hadoop path
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native   # native library directory
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"     # native library options
export JAVA_LIBRARY_PATH=$HADOOP_HOME/lib/native:$JAVA_LIBRARY_PATH
```

Restart the shell, or reload the variables with:

```bash
source ~/.bashrc
```

Next, modify the Hadoop configuration files, starting with hadoop-env.sh:

```bash
cd /usr/local/hadoop/etc/hadoop/
sudo gedit hadoop-env.sh
```

```bash
# The java implementation to use.
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64
```
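With the paths in place, the `hadoop` command should now resolve. A quick sanity check before editing the remaining configuration files:

```bash
# confirm the JDK path exists and that Hadoop runs
ls $JAVA_HOME
hadoop version
```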

Modify core-site.xml:

```bash
sudo gedit core-site.xml
```

```xml
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
```

Modify yarn-site.xml:

```bash
sudo gedit yarn-site.xml
```

```xml
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
</configuration>
```

Copy the template, then modify mapred-site.xml:

```bash
sudo scp mapred-site.xml.template mapred-site.xml
sudo gedit mapred-site.xml
```

```xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
```

Modify hdfs-site.xml:

```bash
sudo gedit hdfs-site.xml
```

```xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/usr/local/hadoop/hadoop_data/hdfs/namenode</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/usr/local/hadoop/hadoop_data/hdfs/datanode</value>
  </property>
</configuration>
```

(Note: with only one DataNode, a replication factor of 3 cannot actually be satisfied; on a single node it is common to set dfs.replication to 1.)
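Once the files are saved, you can confirm that Hadoop actually picks the new values up. A small check, assuming the environment variables above are in effect:

```bash
# ask Hadoop which values it resolves from the configuration files
hdfs getconf -confKey fs.default.name
hdfs getconf -confKey dfs.replication
```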

Create the corresponding directories, change their owner, format the NameNode, and start Hadoop:

```bash
sudo mkdir -p /usr/local/hadoop/hadoop_data/hdfs/namenode   # NameNode storage directory
sudo mkdir -p /usr/local/hadoop/hadoop_data/hdfs/datanode   # DataNode storage directory
sudo chown -R hduser:hduser /usr/local/hadoop               # change the directory owner
# format the NameNode
hadoop namenode -format
# launch Hadoop
start-dfs.sh
start-yarn.sh
```
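After the start scripts finish, `jps` (shipped with the JDK) lists the running Java daemons. On a healthy single-node setup you would expect output along these lines (process IDs will differ):

```bash
jps
# expected daemons:
#   NameNode
#   DataNode
#   SecondaryNameNode
#   ResourceManager
#   NodeManager
```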

Enter http://localhost:8088 in a browser to open the Hadoop ResourceManager web interface; click "Nodes" to view the current node.
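The same information is available from the command line, which is handy on a headless VM:

```bash
# list the NodeManagers registered with the ResourceManager
yarn node -list
```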

Enter http://localhost:50070 in a browser to open the NameNode (HDFS) web interface; click "Datanodes" to see the DataNode that is currently running.
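As an end-to-end check, you can create a directory in HDFS and run one of the bundled example jobs. A minimal sketch; the examples jar path below assumes the Hadoop 2.6.0 binary layout, so adjust the version in the file name if yours differs:

```bash
# create a home directory in HDFS and list the filesystem root
hdfs dfs -mkdir -p /user/hduser
hdfs dfs -ls /
# run the bundled pi estimator (2 map tasks, 10 samples each) on YARN
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar pi 2 10
```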

## Reference: all commands used

```bash
 1  cd /usr/local/
 2  ll
 3  rm -rf hadoop/
 4  sudo rm -rf hadoop/
 5  ll
 6  update-alternatives --display java
 7  java -version
 8  sudo apt-get update
 9  sudo apt-get install default-jdk
10  java -version
11  update-alternatives --display java      # check the Java installation path
12  sudo apt-get install ssh
13  sudo apt-get install rsync
14  ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
15  ll ~/.ssh
16  cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
17  wget https://archive.apache.org/dist/hadoop/common/hadoop-2.6.0/hadoop-2.6.0.tar.gz
18  ll
19  sudo tar -zxvf hadoop-2.6.0.tar.gz
20  sudo mv hadoop-2.6.0 /usr/local/hadoop
21  ll /usr/local/hadoop
22  ll
23  cd /usr/local
24  ls
25  cd hadoop/
26  ll
27  cd /
28  cd
29  wget https://archive.apache.org/dist/hadoop/common/hadoop-2.6.0/hadoop-2.6.0.tar.gz.md5
30  wget https://archive.apache.org/dist/hadoop/common/hadoop-2.6.0/hadoop-2.6.0.tar.gz.mds
31  ll
32  ./hadoop-2.6.0.tar.gz.mds
33  sudo ./hadoop-2.6.0.tar.gz.mds
34  cd /usr/local/hadoop/
35  ll
36  cd ~
37  ll
38  wget https://archive.apache.org/dist/hadoop/common/hadoop-2.6.0/hadoop-2.6.0.tar.gz.md5
39  ll
40  md5sum -c hadoop-2.6.0.tar.gz.md5       # check whether the downloaded file is complete
41  md5sum -c hadoop-2.6.0.tar.gz.mds
42  md5sum -c hadoop-2.6.0.tar.gz.md5
43  rm -f hadoop-2.6.0.tar.gz
44  rm -f hadoop-2.6.0.tar.gz.1
45  rm -f hadoop-2.6.0.tar.gz.mds
46  ll
47  wget https://archive.apache.org/dist/hadoop/common/hadoop-2.6.0/hadoop-2.6.0.tar.gz
48  sudo tar -zxvf hadoop-2.6.0.tar.gz
49  sudo mv hadoop-2.6.0 /usr/local/hadoop
50  ll /usr/local/
51  cd hadoop
52  ll /usr/local/hadoop/
53  md5sum -c hadoop-2.6.0.tar.gz.md5
54  sudo gedit ~/.bashrc
55  source ~/.bashrc
56  cd /usr/local/hadoop/etc/hadoop/
57  ll
58  sudo gedit hadoop-env.sh
59  sudo gedit core-site.xml
60  sudo gedit yarn-site.xml
61  sudo scp mapred-site.xml.template mapred-site.xml
62  sudo gedit mapred-site.xml
63  sudo gedit hdfs-site.xml
64  sudo mkdir -p /usr/local/hadoop/hadoop_data/hdfs/namenode
65  sudo mkdir -p /usr/local/hadoop/hadoop_data/hdfs/datanode
66  sudo chown -R hduser:hduser /usr/local/hadoop
67  hadoop namenode -format
68  start-dfs.sh
69  start-yarn.sh
70  jps
71  history
```

That is the whole content of "How to build a single Node with Hadoop". Thank you for reading! I hope it has helped you.
