==> Spark cluster architecture
=> Installation and deployment of Spark
Spark can be installed and deployed in four modes: Standalone, YARN, Mesos, and Amazon EC2. This article focuses on the Standalone mode.
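For reference, switching between these cluster managers later mostly comes down to the --master URL passed to spark-submit. A quick sketch (the application class and jar names are placeholders, not from this deployment):

bin/spark-submit --master local[*] --class org.example.App app.jar                # local mode, for testing
bin/spark-submit --master spark://bigdata1:7077 --class org.example.App app.jar   # Standalone
bin/spark-submit --master yarn --class org.example.App app.jar                    # YARN (requires HADOOP_CONF_DIR)
bin/spark-submit --master mesos://host:5050 --class org.example.App app.jar       # Mesos ("host" is a placeholder)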
-> Environment preparation for deployment (not covered in detail here):
- four Linux hosts (virtual machines)
- modify the hostname of each host
- configure password-free SSH login between the hosts (see the sketch after this list)
- install the JDK
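The password-free login step can be done with standard OpenSSH tools. A minimal sketch, assuming a single deploy user on all four hosts and hostnames already resolvable via /etc/hosts:

ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa      # generate a key pair with no passphrase
for host in bigdata1 bigdata2 bigdata3 bigdata4; do
    ssh-copy-id "$host"                       # append the public key on each node
done
ssh bigdata2 hostname                         # should print "bigdata2" without prompting for a password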
-> Spark Standalone pseudo-distributed deployment
wget http://mirrors.hust.edu.cn/apache/spark/spark-2.2.1/spark-2.2.1-bin-hadoop2.7.tgz
tar zxf spark-2.2.1-bin-hadoop2.7.tgz -C /app
cd /app/spark-2.2.1-bin-hadoop2.7/conf
cp spark-env.sh.template spark-env.sh
cp slaves.template slaves
- vim spark-env.sh
export JAVA_HOME=/app/java/jdk1.8.0_102
export SPARK_MASTER_HOST=bigdata0
export SPARK_MASTER_PORT=7077
- vim slaves
bigdata0
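To check that the pseudo-distributed setup works, the cluster can be started and probed like this (a minimal sketch, run from /app/spark-2.2.1-bin-hadoop2.7):

sbin/start-all.sh                              # starts the Master and one Worker on bigdata0
jps                                            # should list a Master and a Worker process
bin/spark-shell --master spark://bigdata0:7077 # the web UI at http://bigdata0:8080 should show the application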
-> Spark Standalone fully distributed deployment
- Environment architecture:
Master: bigdata1
Worker: bigdata2, bigdata3, bigdata4
- Master node deployment:
wget http://mirrors.hust.edu.cn/apache/spark/spark-2.2.1/spark-2.2.1-bin-hadoop2.7.tgz
tar zxf spark-2.2.1-bin-hadoop2.7.tgz -C /app
cd /app/spark-2.2.1-bin-hadoop2.7/conf
cp spark-env.sh.template spark-env.sh
cp slaves.template slaves
- vim spark-env.sh
export JAVA_HOME=/app/java/jdk1.8.0_102
export SPARK_MASTER_HOST=bigdata1
export SPARK_MASTER_PORT=7077
- vim slaves
bigdata2
bigdata3
bigdata4
- scp the Spark installation directory from the master node to the other slave nodes:
scp -r spark-2.2.1-bin-hadoop2.7/ bigdata2:/app &
scp -r spark-2.2.1-bin-hadoop2.7/ bigdata3:/app &
scp -r spark-2.2.1-bin-hadoop2.7/ bigdata4:/app &
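The same distribution step can be written as a loop; the trailing "&" above runs the three copies in parallel, which a "wait" then collects. A sketch, run from /app on bigdata1:

for host in bigdata2 bigdata3 bigdata4; do
    scp -r spark-2.2.1-bin-hadoop2.7/ "$host":/app &   # copy to each worker in the background
done
wait   # block until all three transfers finish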
- Start the cluster:
sbin/start-all.sh
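A quick way to verify the fully distributed cluster, sketched under the assumption that it is run from the Spark home directory on bigdata1 (the examples jar path matches the stock 2.2.1 distribution):

sbin/start-all.sh                 # Master on bigdata1, Workers on bigdata2-4 via conf/slaves
jps                               # bigdata1 shows Master; each of bigdata2-4 shows Worker
# The web UI at http://bigdata1:8080 should list three ALIVE workers.
bin/spark-submit --master spark://bigdata1:7077 \
  --class org.apache.spark.examples.SparkPi \
  examples/jars/spark-examples_2.11-2.2.1.jar 100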
=> Implementation of Spark HA
There are two ways to implement Spark HA:
Single-point failure recovery based on the file system: there is still only one Master, so this can only be used for development and testing.
- Features: Spark writes its running state to a local recovery directory. If the Master dies, the restarted Master reads the previous state back from the recovery directory.
- Configuration: modify the spark-env.sh file on top of the standalone setup; the file content is as follows:
vim spark-env.sh
export JAVA_HOME=/app/java/jdk1.8.0_102
export SPARK_MASTER_HOST=bigdata1
export SPARK_MASTER_PORT=7077
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=FILESYSTEM -Dspark.deploy.recoveryDirectory=/data/spark_recovery"
- Parameter explanation:
- spark.deploy.recoveryMode
=> default value: NONE
=> single-point failure recovery based on the file system: FILESYSTEM
=> Standby Masters based on ZooKeeper: ZOOKEEPER
- spark.deploy.recoveryDirectory: the directory in which recovery state is stored
- Test: bin/spark-shell --master spark://bigdata1:7077
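One way to watch the file-system recovery at work is to kill and restart the Master while an application is attached; a sketch, assuming the recovery directory configured above:

bin/spark-shell --master spark://bigdata1:7077   # keep this shell running
ls /data/spark_recovery                          # Worker/Application state is serialized here
sbin/stop-master.sh && sbin/start-master.sh      # simulate a Master crash and restart
# The new Master re-reads /data/spark_recovery and the shell re-registers with
# it; only jobs submitted while the Master was down are affected.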
-> Standby Masters based on ZooKeeper
- Features:
ZooKeeper provides a leader election mechanism that guarantees that, although the cluster contains multiple Masters, only one of them is Active while the others remain Standby. When the Active Master fails, a Standby Master is elected to take over. Because the cluster state, including Workers, Drivers and Applications, has been persisted to ZooKeeper, the switchover only affects the submission of new Jobs and has no effect on Jobs that are already running.
- Configuration: modify the spark-env.sh file on top of the standalone setup; the file content is as follows:
vim spark-env.sh
export JAVA_HOME=/app/java/jdk1.8.0_102
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=bigdata2:2181,bigdata3:2181,bigdata4:2181 -Dspark.deploy.zookeeper.dir=/spark"
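After starting the Masters, each one's role can be confirmed from its web UI (http://<host>:8080 shows "Status: ALIVE" on the elected Master and "Status: STANDBY" on the others). The election state can also be inspected directly in ZooKeeper; a sketch, assuming the znode layout produced by spark.deploy.zookeeper.dir=/spark:

zkCli.sh -server bigdata2:2181
ls /spark/leader_election    # ephemeral znodes of the Masters competing for leadership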
- Test:
On bigdata1: sbin/start-all.sh
On bigdata2: sbin/start-master.sh
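A failover drill, sketched under the assumption that bigdata1 started as the Active Master and bigdata2 as Standby; note that the client is given both Master addresses:

bin/spark-shell --master spark://bigdata1:7077,bigdata2:7077   # attach via the multi-Master URL
sbin/stop-master.sh       # run on bigdata1: kill the Active Master
# After the ZooKeeper session times out, bigdata2's web UI flips from STANDBY
# to ALIVE and the running shell reconnects; running jobs continue unaffected.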