Spark cluster deployment
I. Preparatory work
Prepare three machines for the cluster, with the following hostnames and IP addresses:
master 192.168.2.240
slave1 192.168.2.241
slave2 192.168.2.242
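Each node must be able to resolve these hostnames. Assuming no DNS entries exist for them, a minimal sketch is to add the mappings to /etc/hosts on all three machines:
vi /etc/hosts
192.168.2.240 master
192.168.2.241 slave1
192.168.2.242 slave2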
Download software
Scala: https://downloads.lightbend.com/scala/2.12.3/scala-2.12.3.tgz
Spark: http://mirrors.hust.edu.cn/apache/spark/spark-2.2.0/spark-2.2.0-bin-hadoop2.6.tgz
JDK: http://download.oracle.com/otn-pub/java/jdk/8u151-b12/e758a0de34e24606bca991d704f6dcbf/jdk-8u151-linux-x64.tar.gz
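The packages can be fetched on master with wget, for example (a sketch; the Oracle JDK link historically required accepting the license, which the Cookie header below works around):
wget https://downloads.lightbend.com/scala/2.12.3/scala-2.12.3.tgz
wget http://mirrors.hust.edu.cn/apache/spark/spark-2.2.0/spark-2.2.0-bin-hadoop2.6.tgz
wget --header "Cookie: oraclelicense=accept-securebackup-cookie" http://download.oracle.com/otn-pub/java/jdk/8u151-b12/e758a0de34e24606bca991d704f6dcbf/jdk-8u151-linux-x64.tar.gz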
II. Environment configuration
2.1. Configure SSH password-free login
Execute the following commands on the master host:
ssh-keygen -t rsa # create the public/private key pair
ssh-copy-id slave1 # copy the public key to slave1 and slave2; the first run asks for the password
ssh-copy-id slave2
After this, you can log in to slave1 and slave2 from master without entering a password.
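A quick check that the key-based login works:
ssh slave1 hostname # should print slave1 without prompting for a password
ssh slave2 hostname # should print slave2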
2.2. Install JDK
Extract the JDK installation package:
tar -zxf jdk-8u151-linux-x64.tar.gz -C /usr/local/
ln -sv /usr/local/jdk1.8.0_151 /usr/local/jdk
vi /etc/profile.d/jdk.sh
export JAVA_HOME=/usr/local/jdk
export PATH=$PATH:$JAVA_HOME/bin
chmod 755 /etc/profile.d/jdk.sh
. /etc/profile.d/jdk.sh
Check the Java version
java -version
java version "1.8.0_151"
Java(TM) SE Runtime Environment (build 1.8.0_151-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.151-b12, mixed mode)
2.3. Install Scala
Extract the installation package
tar -zxf scala-2.12.3.tgz -C /usr/local
vi /etc/profile.d/scala.sh
export SCALA_HOME=/usr/local/scala-2.12.3
export PATH=$PATH:$SCALA_HOME/bin
chmod 755 /etc/profile.d/scala.sh
. /etc/profile.d/scala.sh
scala -version
Scala code runner version 2.12.3 -- Copyright 2002-2017, LAMP/EPFL and Lightbend, Inc.
The Scala environment configuration is complete.
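Spark workers need Java and Scala as well, so the same setup must exist on slave1 and slave2. One way, reusing scp as the rest of this guide does (a sketch assuming identical paths on every node):
for host in slave1 slave2; do
  scp -rp /usr/local/jdk1.8.0_151 /usr/local/scala-2.12.3 $host:/usr/local/
  scp /etc/profile.d/jdk.sh /etc/profile.d/scala.sh $host:/etc/profile.d/
  ssh $host 'ln -sv /usr/local/jdk1.8.0_151 /usr/local/jdk'
done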
III. Deploy the Spark cluster
Extract the installation package
tar -zxf spark-2.2.0-bin-hadoop2.6.tgz -C /opt
cd /opt
mv spark-2.2.0-bin-hadoop2.6 spark-2.2.0
Configure the Spark environment
cd /opt/spark-2.2.0/conf/
cp spark-env.sh.template spark-env.sh
Edit spark-env.sh and add the following:
export JAVA_HOME=/usr/local/jdk
export SCALA_HOME=/usr/local/scala-2.12.3/
export HADOOP_HOME=/opt/cloudera/parcels/CDH-5.8.0-1.cdh5.8.0.p0.42/lib/hadoop/
export HADOOP_CONF_DIR=/opt/cloudera/parcels/CDH-5.8.0-1.cdh5.8.0.p0.42/lib/hadoop/etc/hadoop/
export SPARK_MASTER_IP=master
export SPARK_LOCAL_DIRS=/opt/spark-2.2.0
export SPARK_WORKER_MEMORY=512m
export SPARK_WORKER_CORES=2
export SPARK_WORKER_INSTANCES=1
Variable descriptions:
JAVA_HOME: Java installation directory
SCALA_HOME: Scala installation directory
HADOOP_HOME: Hadoop installation directory
HADOOP_CONF_DIR: configuration file directory of the Hadoop cluster
SPARK_MASTER_IP: address of the master node of the Spark cluster
SPARK_WORKER_MEMORY: maximum amount of memory each worker node can allocate to executors
SPARK_WORKER_CORES: number of CPU cores per worker node
SPARK_WORKER_INSTANCES: number of worker instances started on each machine
Configure the slave hosts
cp slaves.template slaves
Add the slave hostnames to the slaves file:
slave1
slave2
Distribute the configured spark-2.2.0 folder to all slave hosts
scp -rp spark-2.2.0 slave1:/opt
scp -rp spark-2.2.0 slave2:/opt
Start the Spark cluster
/opt/spark-2.2.0/sbin/start-all.sh
Verify that Spark started successfully with the jps command.
On master there should be a Master process:
8591 Master
On each slave there should be a Worker process:
1694 Worker
The Spark web management page is available at http://master:8080/
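As a final check, a job can be submitted to the cluster with spark-submit; a minimal sketch using the SparkPi example that ships with Spark 2.2.0 (built against Scala 2.11, hence the jar name):
/opt/spark-2.2.0/bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master spark://master:7077 \
  /opt/spark-2.2.0/examples/jars/spark-examples_2.11-2.2.0.jar 100
A line such as "Pi is roughly 3.14..." near the end of the output confirms that jobs run on the cluster.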