This article explains how to install Hadoop under CentOS. The method described is simple, fast and practical, so follow along to learn how to install Hadoop under CentOS.
Hadoop is an open-source software framework from Apache, implemented in Java. It is a platform for developing and running large-scale data applications, and it allows distributed processing of large datasets across clusters of computers using a simple programming model.
Install Java
Before installing hadoop, make sure that Java is installed on your system. Use this command to check the version of Java that is installed.
java -version
java version "1.7.0_75"
Java(TM) SE Runtime Environment (build 1.7.0_75-b13)
Java HotSpot(TM) 64-Bit Server VM (build 24.75-b04, mixed mode)
To install or update Java, refer to the step-by-step instructions below.
The first step is to download the latest version of java from Oracle's official website.
cd /opt/
wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/7u79-b15/jdk-7u79-linux-x64.tar.gz"
tar xzf jdk-7u79-linux-x64.tar.gz
Next, register the newly installed Java with the alternatives system and make it the default version. Use the following commands to do this.
cd /opt/jdk1.7.0_79/
alternatives --install /usr/bin/java java /opt/jdk1.7.0_79/bin/java 2
alternatives --config java

There are 3 programs which provide 'java'.

  Selection    Command
-----------------------------------------------
*  1           /opt/jdk1.7.0_60/bin/java
 + 2           /opt/jdk1.7.0_72/bin/java
   3           /opt/jdk1.7.0_79/bin/java

Enter to keep the current selection[+], or type selection number: 3 [Press Enter]
Now you may also need to use the alternatives command to set the javac and jar command paths.
alternatives --install /usr/bin/jar jar /opt/jdk1.7.0_79/bin/jar 2
alternatives --install /usr/bin/javac javac /opt/jdk1.7.0_79/bin/javac 2
alternatives --set jar /opt/jdk1.7.0_79/bin/jar
alternatives --set javac /opt/jdk1.7.0_79/bin/javac
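Once the alternatives are in place, you can confirm which versions are now active; the exact output will depend on your installation. This check is not part of the original walkthrough, just a quick sanity test.

java -version
javac -version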
The next step is to configure the environment variables. Use the following command to set these variables correctly.
Set the JAVA_HOME variable:
export JAVA_HOME=/opt/jdk1.7.0_79
Set the JRE_HOME variable:
export JRE_HOME=/opt/jdk1.7.0_79/jre
Set the PATH variable:
export PATH=$PATH:/opt/jdk1.7.0_79/bin:/opt/jdk1.7.0_79/jre/bin
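Note that these export commands only affect the current shell session. If you want the Java variables set for every login, one common approach (not part of the original article; the file path is just a convention, adjust it to your setup) is to put the same lines into a script under /etc/profile.d/, for example:

# /etc/profile.d/java.sh -- example location, assumed for illustration
export JAVA_HOME=/opt/jdk1.7.0_79
export JRE_HOME=/opt/jdk1.7.0_79/jre
export PATH=$PATH:/opt/jdk1.7.0_79/bin:/opt/jdk1.7.0_79/jre/bin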
Install Apache Hadoop

After setting up the Java environment, start installing Apache Hadoop.
The first step is to create a system user account for hadoop installation.
useradd hadoop
passwd hadoop
Now you need to configure the ssh key for the user hadoop. Use the following command to enable password-free ssh login.
su - hadoop
ssh-keygen -t rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 0600 ~/.ssh/authorized_keys
exit
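It is worth confirming that key-based login works before continuing; this check is not part of the original walkthrough, and the first connection may ask you to accept the host key.

su - hadoop
ssh localhost      # should open a shell without asking for a password
exit               # leave the ssh session
exit               # return to the root shell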
Now download the latest available version of hadoop from the official website hadoop.apache.org.
cd ~
wget http://apache.claz.org/hadoop/common/hadoop-2.6.0/hadoop-2.6.0.tar.gz
tar xzf hadoop-2.6.0.tar.gz
mv hadoop-2.6.0 hadoop
The next step is to set the environment variables that hadoop uses.
Edit ~/.bashrc and add the following lines at the end of the file.
export HADOOP_HOME=/home/hadoop/hadoop
export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
Apply changes in the current runtime environment.
source ~/.bashrc
Edit $HADOOP_HOME/etc/hadoop/hadoop-env.sh and set the JAVA_HOME environment variable.
export JAVA_HOME=/opt/jdk1.7.0_79/
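At this point a quick sanity check, run as the hadoop user, confirms that the hadoop command is on the PATH and can find Java. This step is just a suggested verification, not part of the original article.

hadoop version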
Now, let's start by configuring a basic hadoop single-node cluster.
First edit the hadoop configuration file and make the following changes.
cd /home/hadoop/hadoop/etc/hadoop
Let's edit core-site.xml.
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
Then edit the hdfs-site.xml:
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.name.dir</name>
    <value>file:///home/hadoop/hadoopdata/hdfs/namenode</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>file:///home/hadoop/hadoopdata/hdfs/datanode</value>
  </property>
</configuration>
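A fresh Hadoop 2.6.0 extract usually ships only mapred-site.xml.template in this directory. If mapred-site.xml does not exist yet, you can create it from the template first (run from the same etc/hadoop directory):

cp mapred-site.xml.template mapred-site.xml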
And edit mapred-site.xml:
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
Finally, edit yarn-site.xml:
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
Now format the namenode using the following command:
hdfs namenode -format
To start all hadoop services, use the following command:
cd /home/hadoop/hadoop/sbin/
start-dfs.sh
start-yarn.sh
To check that all services are started properly, use the jps command:
jps
You should see output similar to the following (process IDs will differ; with YARN, the ResourceManager and NodeManager daemons replace the old JobTracker and TaskTracker):

26049 SecondaryNameNode
25929 DataNode
26399 Jps
26129 ResourceManager
26249 NodeManager
25807 NameNode
You can now access the Hadoop YARN ResourceManager web interface at http://your-ip-address:8088/ in your browser.
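If you prefer checking from the command line, or also want to see the HDFS NameNode web UI (which listens on port 50070 by default in Hadoop 2.x), a quick test looks like this:

curl -I http://localhost:8088/       # YARN ResourceManager UI
curl -I http://localhost:50070/      # HDFS NameNode UI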
At this point, you should have a deeper understanding of how to install Hadoop under CentOS. The best way to consolidate it is to try the steps yourself in practice.