How to Install Spark on a Linux System

This article explains how to install Spark on a Linux system. Many people are not very familiar with the process, so the steps are shared here for reference; I hope you find it useful.
Spark is a fast, general-purpose computing engine designed for large-scale data processing.
Introduction to Spark: the simplest description of Spark may sound like an encyclopedia entry: Spark is a general-purpose distributed data processing engine.
That sentence is abstract, so let's unpack it word by word.
General-purpose: Spark can do many things. Machine learning, stream processing, interactive analysis, ETL, batch processing, graph computing, and more can all be done with Spark; for almost anything you need to do with data, Spark is worth a try.
Distributed: Spark's processing capacity is built on many machines. It can connect to distributed storage systems and scale out (simply put, the more machines, the more powerful it is).
Engine: Spark itself does not store data. Like a mechanical engine, it converts fuel (for Spark, the data) into the form the user needs, such as a desired result or conclusion. But as the saying goes, even a clever cook cannot make a meal without rice: without data, Spark can do nothing.
Specific steps to install Spark on a Linux system. Installation conventions:
Software upload directory: /opt/soft
Software installation directory: /opt
Environment variable profile: /etc/profile.d/hadoop-etc.sh
Environment dependencies: ZooKeeper and Scala must already be installed
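Before starting, it may help to sanity-check the two dependencies. This is a minimal sketch, assuming Scala is already on the PATH and ZooKeeper was installed somewhere under /opt (both assumptions, not stated in the original steps):
scala -version                  # should print the installed Scala version
ls /opt | grep -i zookeeper     # assumed ZooKeeper install location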
1) decompress the Spark installation package
tar -zxvf /opt/soft/spark-2.0.2-bin-hadoop2.7.tgz -C /opt/
2) rename
mv /opt/spark-2.0.2-bin-hadoop2.7 /opt/spark
3) copy and rename the configuration file
cd /opt/spark/conf
cp spark-env.sh.template spark-env.sh
4) modify spark-env.sh configuration file
vi spark-env.sh
export JAVA_HOME=/opt/jdk                      # JDK installation directory
export SCALA_HOME=/opt/scala                   # Scala installation directory
export SPARK_MASTER_IP=lky01                   # hostname of the master node
export SPARK_MASTER_PORT=7077                  # port the master listens on
export SPARK_WORKER_CORES=1                    # CPU cores each worker may use
export SPARK_WORKER_INSTANCES=1                # worker instances per node
export SPARK_WORKER_MEMORY=1g                  # memory each worker may use
export HADOOP_CONF_DIR=/opt/hadoop/etc/hadoop  # Hadoop configuration directory
5) copy and rename the slaves.template file
cp slaves.template slaves
6) modify slaves configuration file
Comment out the existing localhost entry and add two lines for the worker nodes:
lky02
lky03
7) copy the MySQL driver jar package mysql-connector-java-5.1.39-bin.jar to the /opt/spark/jars directory
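A minimal sketch of this step, assuming the jar was uploaded to the /opt/soft directory named in the conventions above:
cp /opt/soft/mysql-connector-java-5.1.39-bin.jar /opt/spark/jars/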
8) copy spark to other machines
scp -r /opt/spark root@lky02:/opt
scp -r /opt/spark root@lky03:/opt
9) copy environment variables to other machines
scp /etc/profile.d/hadoop-etc.sh root@lky02:/etc/profile.d/
scp /etc/profile.d/hadoop-etc.sh root@lky03:/etc/profile.d/
10) make the configuration effective: source /etc/profile.d/hadoop-etc.sh
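The article never shows the contents of hadoop-etc.sh. A minimal sketch of the Spark-related lines it would need, assuming the install paths above (the file may also carry entries from the earlier Hadoop and ZooKeeper installs):
export SPARK_HOME=/opt/spark                     # assumed install path per the conventions above
export PATH=$PATH:$SPARK_HOME/bin:$SPARK_HOME/sbin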
Modify the startup scripts
To avoid conflicts with Hadoop's start-all.sh and stop-all.sh scripts, rename Spark's versions:
cd /opt/spark/sbin/
mv start-all.sh start-spark-all.sh
mv stop-all.sh stop-spark-all.sh
11) start spark
sbin/start-spark-all.sh
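To confirm the cluster started, you can list the Java processes on each node with jps; on a standalone Spark cluster the master node should show a Master process and each worker node a Worker process:
jps
# expected on lky01: Master
# expected on lky02 and lky03: Worker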
12) verify access
Access the Spark master web UI: http://ip:8080
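As a final smoke test (not part of the original steps), you can attach a spark-shell to the new master, using the SPARK_MASTER_IP and SPARK_MASTER_PORT configured above, and run a trivial job:
/opt/spark/bin/spark-shell --master spark://lky01:7077
# at the scala> prompt: sc.parallelize(1 to 100).sum()   // should return 5050.0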
That covers how to install Spark on a Linux system. Thank you for reading; I hope this article has been helpful.