2025-02-27 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/01 Report--
This article introduces the installation steps for a Spark 2.0 cluster environment. Many people run into trouble with these steps in practice, so let the editor walk you through how to handle them. I hope you read carefully and come away with something useful!
Spark 2.0 has been announced on the Apache website and is available for download. Today I'm going to walk you through installing Spark 2.0.
Spark 2.0 requires Hadoop 2.7.2+, Scala 2.11, and Java 7 or later. If your versions are older than these, please upgrade first.
1. Install Hadoop 2.7.2
Download address: http://hadoop.apache.org/releases.html
After downloading, decompress with tar -zxvf.
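Concretely, the download-and-extract step might look like the following. The mirror URL and the /usr/local install location are assumptions; any Apache mirror and target directory will do:

```shell
# Fetch the Hadoop 2.7.2 release (mirror URL is an assumption; pick one from the releases page)
wget http://archive.apache.org/dist/hadoop/common/hadoop-2.7.2/hadoop-2.7.2.tar.gz

# Unpack into /usr/local (install location is an assumption; adjust to taste)
tar -zxvf hadoop-2.7.2.tar.gz -C /usr/local
```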
I am using a pseudo-distributed virtual machine setup here.
If you are not familiar with hadoop, you can refer to http://www.linuxidc.com/Linux/2016-02/128729.htm?1456669335754 for installation.
Tip: when testing Hadoop, do not rely on jps alone to check the processes; it is not reliable. The correct approach is to access ports 50070 and 8088 through the web interface.
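You can check both web UIs from the shell as well. A quick sketch, assuming the daemons run on localhost with the Hadoop 2.x default ports (50070 for the NameNode, 8088 for the ResourceManager):

```shell
# Print the HTTP status code for each UI; 200 means the daemon is serving pages
curl -s -o /dev/null -w "NameNode UI: %{http_code}\n" http://localhost:50070
curl -s -o /dev/null -w "ResourceManager UI: %{http_code}\n" http://localhost:8088
```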
2. Install Scala 2.11
Download address is http://www.scala-lang.org/download/
Decompress after download
Configure the environment variables: vi /etc/profile
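The /etc/profile additions for Scala might look like this; the install path below is an assumption, so match it to wherever you decompressed the archive:

```shell
# Append to /etc/profile, then reload with: source /etc/profile
export SCALA_HOME=/usr/local/scala-2.11.8   # install path is an assumption
export PATH=$PATH:$SCALA_HOME/bin
```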
These operations are simple and most readers will already know them, so I won't describe them in detail here; anything you are unsure about can be looked up on Baidu.
To test Scala, run scala -version.
3. Install Spark 2.0
Download address is http://spark.apache.org/news/spark-2-0-0-released.html
After downloading, decompress with tar -zxvf.
Once decompressed, begin the configuration.
First, set Spark's environment variables in /etc/profile.
Then configure Spark's conf/slaves file,
adding the hostnames of the slave nodes.
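A sketch of both configuration steps; the install path and the worker hostnames ("worker1", "worker2") are placeholders, not values from this setup:

```shell
# /etc/profile additions for Spark (install path is an assumption)
export SPARK_HOME=/usr/local/spark-2.0.0-bin-hadoop2.7
export PATH=$PATH:$SPARK_HOME/bin

# conf/slaves: one worker hostname per line (hostnames are placeholders)
cp $SPARK_HOME/conf/slaves.template $SPARK_HOME/conf/slaves
cat >> $SPARK_HOME/conf/slaves <<'EOF'
worker1
worker2
EOF
```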
Then start it:
sbin/start-all.sh
After a successful start, you can open the web interface mentioned above; note that it is served at the master's IP on port 8080.
Thus, the spark environment is successfully installed.
Then you can run a demo to test it, for example:
bin/run-example org.apache.spark.examples.SparkPi
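Note that the class lives in Spark's examples package. A sketch of running it from the Spark install directory and filtering for the result line (the "Pi is roughly" prefix is the usual output of this example, stated here as an assumption):

```shell
# Run SparkPi and keep only the line with the computed value
bin/run-example org.apache.spark.examples.SparkPi 2>/dev/null | grep "Pi is roughly"
```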
If the approximate value of pi is computed successfully, the construction of the Spark environment is complete.
This is the end of "Installation steps for a Spark 2.0 cluster environment". Thank you for reading. If you want to learn more about the industry, you can follow this site; the editor will keep producing high-quality practical articles for you!