How to Install Apache Hadoop on CentOS
This article introduces how to install Apache Hadoop on CentOS. Many people run into difficulties during a real installation, so this guide walks through the process step by step; read it carefully and you should come away with a working setup.
The Apache Hadoop software library is a framework that allows the distributed processing of large datasets across computer clusters using a simple programming model. Apache™ Hadoop® is reliable, scalable, open-source software for distributed computing.
The project includes the following modules:
Hadoop Common: the common utilities that support the other Hadoop modules.
Hadoop Distributed File System (HDFS™): a distributed file system that provides high-throughput access to application data.
Hadoop YARN: a framework for job scheduling and cluster resource management.
Hadoop MapReduce: a YARN-based system for the parallel processing of large datasets.
This article will walk you step by step through installing Hadoop on CentOS and configuring a single-node Hadoop cluster.
Install Java
Before installing Hadoop, make sure Java is installed on your system. Use the following command to check which version of Java is installed.
java -version
java version "1.7.0_75"
Java(TM) SE Runtime Environment (build 1.7.0_75-b13)
Java HotSpot(TM) 64-Bit Server VM (build 24.75-b04, mixed mode)
To install or update Java, refer to the step-by-step instructions below.
The first step is to download the latest version of Java from the official Oracle website.
cd /opt/
wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/7u79-b15/jdk-7u79-linux-x64.tar.gz"
tar xzf jdk-7u79-linux-x64.tar.gz
Next, configure the system to use the newly installed version of Java via the alternatives mechanism. Use the following commands to do this.
cd /opt/jdk1.7.0_79/
alternatives --install /usr/bin/java java /opt/jdk1.7.0_79/bin/java 2
alternatives --config java

There are 3 programs which provide 'java'.

  Selection    Command
-----------------------------------------------
*  1           /opt/jdk1.7.0_60/bin/java
 + 2           /opt/jdk1.7.0_72/bin/java
   3           /opt/jdk1.7.0_79/bin/java

Enter to keep the current selection[+], or type selection number: 3 [Press Enter]
Now you may also need to use the alternatives command to set the javac and jar command paths.
alternatives --install /usr/bin/jar jar /opt/jdk1.7.0_79/bin/jar 2
alternatives --install /usr/bin/javac javac /opt/jdk1.7.0_79/bin/javac 2
alternatives --set jar /opt/jdk1.7.0_79/bin/jar
alternatives --set javac /opt/jdk1.7.0_79/bin/javac
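To confirm that the alternatives now point at the new JDK, a quick check is to query the versions (the strings printed will match whatever JDK you installed):

java -version
javac -version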
The next step is to configure the environment variables. Use the following command to set these variables correctly.
Set the JAVA_HOME variable:
export JAVA_HOME=/opt/jdk1.7.0_79
Set the JRE_HOME variable:
export JRE_HOME=/opt/jdk1.7.0_79/jre
Set the PATH variable:
export PATH=$PATH:/opt/jdk1.7.0_79/bin:/opt/jdk1.7.0_79/jre/bin
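These export commands only affect the current shell session. A common way to make them persistent, sketched here under the assumption that your system sources scripts in /etc/profile.d/ at login (the file name java.sh is arbitrary), is to collect them in a profile script:

# /etc/profile.d/java.sh -- file name is an example; sourced at login
export JAVA_HOME=/opt/jdk1.7.0_79
export JRE_HOME=/opt/jdk1.7.0_79/jre
export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin

After a fresh login, echo $JAVA_HOME should print /opt/jdk1.7.0_79.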
Install Apache Hadoop
After setting up the Java environment, start installing Apache Hadoop.
The first step is to create a system user account to use for the Hadoop installation.
useradd hadoop
passwd hadoop
Now you need to configure SSH keys for the hadoop user. Use the following commands to enable passwordless SSH login.
su - hadoop
ssh-keygen -t rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 0600 ~/.ssh/authorized_keys
exit
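The Hadoop start scripts rely on this passwordless login, so it is worth a quick check before continuing (a minimal verification, run as the hadoop user):

su - hadoop
ssh localhost 'echo SSH to localhost works'
exit

The echo line should print without any password prompt; on the very first connection you may be asked to confirm the host key.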
Now download the latest stable release of Hadoop from the official website, hadoop.apache.org. This guide uses Hadoop 2.6.0.
cd ~
wget http://apache.claz.org/hadoop/common/hadoop-2.6.0/hadoop-2.6.0.tar.gz
tar xzf hadoop-2.6.0.tar.gz
mv hadoop-2.6.0 hadoop
The next step is to set the environment variables that Hadoop uses.
Edit ~/.bashrc and add the following values at the end of the file.
export HADOOP_HOME=/home/hadoop/hadoop
export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
Apply the changes to the current running environment:

source ~/.bashrc
Edit $HADOOP_HOME/etc/hadoop/hadoop-env.sh and set the JAVA_HOME environment variable.
export JAVA_HOME=/opt/jdk1.7.0_79/
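At this point the installation itself can be sanity-checked. Since ~/.bashrc put $HADOOP_HOME/bin on the PATH, the hadoop command should resolve and report the version unpacked above (a quick check; the exact output lines vary by build):

hadoop version

It should report Hadoop 2.6.0 if the steps above were followed.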
Now, let's configure a basic single-node Hadoop cluster.
First, go to the Hadoop configuration directory and edit the files as follows.
cd /home/hadoop/hadoop/etc/hadoop
Let's edit core-site.xml.
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
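A side note: fs.default.name is the old name for this property; Hadoop 2.x still accepts it but treats it as deprecated in favor of fs.defaultFS. If you prefer the current name, the equivalent entry is:

<property>
  <name>fs.defaultFS</name>
  <value>hdfs://localhost:9000</value>
</property>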
Then edit hdfs-site.xml:
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.name.dir</name>
    <value>file:///home/hadoop/hadoopdata/hdfs/namenode</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>file:///home/hadoop/hadoopdata/hdfs/datanode</value>
  </property>
</configuration>
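The dfs.name.dir and dfs.data.dir values above point under the hadoop user's home directory. Formatting the NameNode and starting the DataNode will normally create these directories, but creating them up front (as the hadoop user) is a harmless way to rule out permission problems:

mkdir -p /home/hadoop/hadoopdata/hdfs/namenode
mkdir -p /home/hadoop/hadoopdata/hdfs/datanode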
And edit mapred-site.xml:
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
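If mapred-site.xml was not present in the configuration directory, that is expected: the Hadoop 2.6 tarball ships only mapred-site.xml.template. Copy the template and then apply the edit shown above:

cd /home/hadoop/hadoop/etc/hadoop
cp mapred-site.xml.template mapred-site.xml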
Finally, edit yarn-site.xml:
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
Now format the NameNode using the following command:
hdfs namenode -format
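A quick way to confirm the format worked, assuming the paths configured earlier, is to check that the NameNode storage directory was populated:

ls /home/hadoop/hadoopdata/hdfs/namenode/current

A successful format creates this current directory and writes metadata files such as VERSION into it.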
To start all Hadoop services, use the following commands:
cd /home/hadoop/hadoop/sbin/
start-dfs.sh
start-yarn.sh
To check that all services started properly, use the jps command:
jps
You should see output similar to the following (process IDs will vary):

26049 SecondaryNameNode
25929 DataNode
26399 Jps
26129 ResourceManager
26249 NodeManager
25807 NameNode
You can now access the Hadoop web interface in your browser at http://your-ip-address:8088/.
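Port 8088 serves the YARN ResourceManager web interface; the NameNode also serves one, by default at http://your-ip-address:50070/. When you later want to shut the cluster down, the stop scripts mirror the start scripts used above:

cd /home/hadoop/hadoop/sbin/
stop-yarn.sh
stop-dfs.sh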
This is the end of "How to Install Apache Hadoop on CentOS". Thank you for reading, and I hope you were able to get something out of it!