This article explains how to build a Hadoop 2.7 + Spark 1.4 environment. The steps are meant to be easy to follow and clearly laid out; I hope they help you work through the setup.
I. Pseudo-distributed Hadoop setup
The official website actually has a more detailed walkthrough; if your English is good, you can follow the official documentation directly.
1. Install JDK1.7
I will skip the details here. The official site says 1.6 is fine, but I hit an exception with OpenJDK 1.6; I did not try Oracle JDK 1.6 and went straight to JDK 1.7.
Configure environment variables
vi /etc/profile
export JAVA_HOME=/usr/local/jdk1.7.0_79
export CLASSPATH=.:$JAVA_HOME/lib/tools.jar
export PATH=$PATH:$JAVA_HOME/bin
After adding them, execute the following command to make the configuration take effect:
source /etc/profile
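As a quick sanity check (my addition, not in the original), you can confirm the JDK variables took effect, assuming the paths above:
# should print /usr/local/jdk1.7.0_79
$ echo $JAVA_HOME
# should report java version "1.7.0_79"
$ java -version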
2. Install ssh and rsync (taking Ubuntu as an example)
$ sudo apt-get install ssh
$ sudo apt-get install rsync
3. Download the Hadoop binary package from a mirror (for me the domestic mirror was even slower than the US one, unbearably so; note that the 2.7 binary package is 64-bit).
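For example (my addition; use whichever mirror is fastest for you), the 2.7.0 binary package can be fetched from the Apache archive and unpacked under /usr/local, matching the paths used later in this article:
# any mirror that carries hadoop-2.7.0.tar.gz works; the Apache archive keeps old releases
$ wget https://archive.apache.org/dist/hadoop/common/hadoop-2.7.0/hadoop-2.7.0.tar.gz
$ sudo tar -xzf hadoop-2.7.0.tar.gz -C /usr/local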
Check whether it is 32 or 64-bit.
cd hadoop-2.7.0/lib/native
file libhadoop.so.1.0.0
libhadoop.so.1.0.0: ELF 64-bit LSB shared object, AMD x86-64, version 1 (SYSV), not stripped
4. Configure environment variables
In the Hadoop configuration file, specify the Java path:
etc/hadoop/hadoop-env.sh
export JAVA_HOME=/usr/local/jdk1.7.0_79
System environment variables (added to /etc/profile):
export HADOOP_HOME=/usr/local/hadoop-2.7.0
export PATH=$PATH:$HADOOP_HOME/bin
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
If the last two are missing, the following warning will appear:
You have loaded library /usr/hadoop/hadoop-2.7.0/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c', or link it with '-z noexecstack'.
After adding them, execute the following command to make the configuration take effect:
source /etc/profile
Run the following command to check whether it works:
hadoop version
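Optionally (my addition), Hadoop can also report whether the 64-bit native library from step 3 is actually being loaded:
# prints which native libraries (hadoop, zlib, snappy, lz4, bzip2, openssl) are available
$ hadoop checknative -a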
5. Modify the Hadoop configuration files
etc/hadoop/core-site.xml:
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
etc/hadoop/hdfs-site.xml:
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
6. Set up passwordless SSH access for Hadoop
$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
$ export HADOOP_PREFIX=/usr/local/hadoop-2.7.0
7. Format the NameNode and start Hadoop
$ bin/hdfs namenode -format
$ sbin/start-dfs.sh
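To confirm the HDFS daemons actually came up (a check I added, not from the original), the JDK's jps tool is handy:
# a healthy pseudo-distributed HDFS shows NameNode, DataNode and SecondaryNameNode
$ jps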
Open http://localhost:50070/ in a browser to check whether it started successfully.
HDFS configuration: the username should be the same as the current OS user name, otherwise permission problems may occur.
$ bin/hdfs dfs -mkdir /user
$ bin/hdfs dfs -mkdir /user/<username>
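As a quick HDFS smoke test (my addition; the paths are only an example), you can copy the Hadoop configuration directory in and list it back, run from the hadoop-2.7.0 directory:
# upload some local files into the new user directory, then list them
$ bin/hdfs dfs -put etc/hadoop /user/<username>/input
$ bin/hdfs dfs -ls /user/<username>/input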
8. YARN configuration
etc/hadoop/mapred-site.xml:
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
etc/hadoop/yarn-site.xml:
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
Start YARN:
$ sbin/start-yarn.sh
Open http://localhost:8088/ to check whether it is running.
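To exercise YARN end to end (my addition; the jar's version suffix must match your download), you can run one of the bundled MapReduce examples and watch it appear in the web UI above:
# estimates pi with 2 map tasks and 5 samples per map
$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.0.jar pi 2 5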
At this point, the single-node pseudo-distributed Hadoop installation and configuration is complete.
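For completeness (not covered in the original article), the matching scripts to shut everything down again are:
# stop YARN first, then HDFS
$ sbin/stop-yarn.sh
$ sbin/stop-dfs.sh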
II. Spark installation and configuration
Installing Spark is relatively simple.
1. First, download Spark from the official download page.
Because I already have Hadoop installed, I chose the second download option.
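As an illustration (my addition; the exact package name is an assumption, so check the Spark 1.4 download page), the "user-provided Hadoop" build can be fetched and unpacked like this:
# assumed file name for the Hadoop-free build of Spark 1.4.0; verify it on the download page
$ wget https://archive.apache.org/dist/spark/spark-1.4.0/spark-1.4.0-bin-without-hadoop.tgz
$ tar -xzf spark-1.4.0-bin-without-hadoop.tgz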
2. After downloading and decompressing, enter the directory:
cd conf
cp spark-env.sh.template spark-env.sh
cp spark-defaults.conf.template spark-defaults.conf
vi spark-env.sh
At the end of spark-env.sh, add:
export HADOOP_HOME=/usr/local/hadoop-2.7.0
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export SPARK_DIST_CLASSPATH=$(hadoop classpath)
The last line requires the Hadoop environment variables set earlier, so that the hadoop command can be found.
The first two settings do not appear in the official configuration guide, but without them I kept getting an error when running the example: the HDFS jar packages could not be found.
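A quick way (my addition) to see what that last line injects, and to confirm the hadoop command resolves in the shell that will launch Spark:
# hadoop must be on PATH for $(hadoop classpath) to expand; the output is the list of Hadoop jar directories
$ which hadoop
$ hadoop classpath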
3. Go back to the top of the decompressed directory and run the example:
./bin/run-example SparkPi 10
If the example runs successfully, the configuration is complete.
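As a further check that Spark can really reach the HDFS instance from part I (my addition; the path assumes you uploaded files under /user/<username>/input earlier), read a file from HDFS in the interactive shell:
$ ./bin/spark-shell
# inside the shell, count the lines of a file stored in HDFS
scala> sc.textFile("hdfs://localhost:9000/user/<username>/input/core-site.xml").count()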
That is all of "how to build the hadoop2.7+Spark1.4 environment". Thank you for reading; I hope the content shared here has been helpful.