This article shows how to run spark-shell on Hadoop YARN. The steps are easy to follow and clearly laid out; I hope they help clear up any doubts you have, so follow along below to learn how to run spark-shell on Hadoop YARN.
1. Spark architecture diagram
![Spark architecture diagram](https://cache.yisu.com/upload/information/20210522/355/683134.png)

2. Scala download and installation
a. Official archive: http://www.scala-lang.org/files/archive/
b. Pick a version, copy the link, and download it with wget:
   wget http://www.scala-lang.org/files/archive/scala-2.11.6.tgz
c. Extract the archive and move Scala to the /usr/local directory:
   tar xvf scala-2.11.6.tgz
   sudo mv scala-2.11.6 /usr/local/scala
d. Set the environment variables:
   sudo gedit ~/.bashrc
   export SCALA_HOME=/usr/local/scala
   export PATH=$PATH:$SCALA_HOME/bin
   source ~/.bashrc    # make the configuration take effect
e. Start Scala:
   hduser@master:~$ scala

3. Spark installation
a. Official download page: http://spark.apache.org/downloads.html
b. Select version 1.4 with "Pre-built for Hadoop 2.6 and later" and copy the download link.
c. Download it with the wget command:
   wget http://d3kbcqa49mib13.cloudfront.net/spark-1.4.0-bin-hadoop2.6.tgz
d. Extract the archive and move it to /usr/local/spark/.
e. Edit the environment variables:
f. sudo gedit ~/.bashrc
   export SPARK_HOME=/usr/local/spark
   export PATH=$PATH:$SPARK_HOME/bin
g. source ~/.bashrc    # make the configuration take effect

4. Launch the spark-shell interactive shell
   hduser@master:~$ spark-shell

5. Start hadoop

6. Run spark-shell locally (a worked Scala example appears after step 9)
a. spark-shell --master local[4]
b. Read a local file:
   val textFile = sc.textFile("file:/usr/local/spark/README.md")
   textFile.count

7. Run spark-shell on Hadoop YARN (an HDFS example appears after step 9)
   SPARK_JAR=/usr/local/spark/lib/spark-assembly-1.4.0-hadoop2.6.0.jar HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop MASTER=yarn-client /usr/local/spark/bin/spark-shell
   SPARK_JAR=/usr/local/spark/lib/spark-assembly-1.4.0-hadoop2.6.0.jar    # path of the Spark assembly jar
   HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop                           # hadoop configuration file directory
   MASTER=yarn-client                                                     # set the run mode to yarn-client
   /usr/local/spark/bin/spark-shell                                       # full path of the spark-shell to run

8. Build the Spark Standalone Cluster execution environment
a. Copy the template configuration file:
   cp /usr/local/spark/conf/spark-env.sh.template /usr/local/spark/conf/spark-env.sh
b. Set up spark-env.sh:
   sudo gedit /usr/local/spark/conf/spark-env.sh
c. Add the following settings to spark-env.sh:
   export SPARK_MASTER_IP=master        # IP of the master
   export SPARK_WORKER_CORES=1          # CPU cores used by each worker
   export SPARK_WORKER_MEMORY=600m      # memory used by each worker
   export SPARK_WORKER_INSTANCES=1      # number of worker instances
   Be careful with your memory: running hadoop+spark across multiple virtual machines consumes a lot of memory, and 8 GB may not be enough.
d. Use ssh to connect to data1 and data2 and create the spark directory on each:
   sudo mkdir /usr/local/spark
   sudo chown hduser:hduser /usr/local/spark    # do this on both data1 and data2
e. Copy the spark directory from master to data1 and data2:
   sudo scp -r /usr/local/spark hduser@data1:/usr/local
   sudo scp -r /usr/local/spark hduser@data2:/usr/local
f. Edit the slaves file:
   sudo gedit /usr/local/spark/conf/slaves
   data1
   data2

9. Run spark-shell on the Spark Standalone Cluster
a. Start the Spark Standalone Cluster:
   /usr/local/spark/sbin/start-all.sh
b. Run spark-shell:
   spark-shell --master spark://master:7077
c. View the Spark Standalone Web UI at http://master:8080/
d. Stop the Spark Standalone Cluster:
   /usr/local/spark/sbin/stop-all.sh
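To make step 6 concrete, here is a small Scala sketch of what you can type once spark-shell is running in local mode. It only assumes that Spark was unpacked to /usr/local/spark as in step 3 (adjust the path if yours differs); sc is the SparkContext that spark-shell creates for you.

   // Typed inside spark-shell; `sc` is the SparkContext the shell already provides.
   // Assumes Spark is installed under /usr/local/spark as in step 3.
   val textFile = sc.textFile("file:/usr/local/spark/README.md")

   // Number of lines in the file.
   textFile.count

   // A simple word count: split each line into words, pair each word with 1,
   // then sum the counts per word.
   val wordCounts = textFile
     .flatMap(line => line.split(" "))
     .map(word => (word, 1))
     .reduceByKey(_ + _)

   // Print the ten most frequent words.
   wordCounts.sortBy(_._2, ascending = false).take(10).foreach(println)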
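When the shell is started in yarn-client mode as in step 7, the same API applies, but input typically comes from HDFS and the work runs in YARN containers on the data nodes. Below is a minimal sketch under two assumptions that are not part of the original article: the NameNode address is hdfs://master:9000 (check fs.defaultFS in your core-site.xml), and a text file has first been copied to the hypothetical HDFS path /user/hduser/test/README.md.

   // Put a file on HDFS first (from the Linux shell, not spark-shell):
   //   hadoop fs -mkdir -p /user/hduser/test
   //   hadoop fs -put /usr/local/spark/README.md /user/hduser/test
   // Then, inside the yarn-client spark-shell:
   val hdfsFile = sc.textFile("hdfs://master:9000/user/hduser/test/README.md")

   // The count is computed by executors running in YARN containers on the data nodes.
   hdfsFile.count

   // Cache the RDD if you will query it more than once.
   hdfsFile.cache()
   hdfsFile.filter(line => line.contains("Spark")).count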
10. Command reference (shell history)
   scala
   jps
   wget http://d3kbcqa49mib13.cloudfront.net/spark-1.4.0-bin-hadoop2.6.tgz
   ping www.baidu.com
   ssh data3
   ssh data2
   ssh data1
   jps
   jps
   spark-shell
   spark-shell --master local[4]
   SPARK_JAR=/usr/local/spark/lib/spark-assembly-1.4.0-hadoop2.6.0.jar HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop MASTER=yarn-client /usr/local/spark/bin/spark-shell
   ssh data2
   ssh data1
   cd /usr/local/hadoop/etc/hadoop/
   ll
   sudo gedit masters
   sudo gedit /etc/hosts
   sudo gedit hdfs-site.xml
   sudo rm -rf /usr/local/hadoop/hadoop_data/hdfs
   mkdir -p /usr/local/hadoop/hadoop_data/hdfs/namenode
   sudo chown -R hduser:hduser /usr/local/hadoop
   hadoop namenode -format
   start-all.sh
   jps
   spark-shell
   SPARK_JAR=/usr/local/spark/lib/spark-assembly-1.4.0-hadoop2.6.0.jar HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop MASTER=yarn-client /usr/local/spark/bin/spark-shell
   ssh data1
   ssh data2
   ssh data1
   start-all.sh
   jps
   cp /usr/local/spark/conf/spark-env.sh.template /usr/local/spark/conf/spark-env.sh
   sudo gedit /usr/local/spark/conf/spark-env.sh
   sudo scp -r /usr/local/spark hduser@data1:/usr/local
   sudo scp -r /usr/local/spark hduser@data2:/usr/local
   sudo gedit /usr/local/spark/conf/slaves
   /usr/local/spark/sbin/start-all.sh
   spark-shell --master spark://master:7077
   /usr/local/spark/sbin/stop-all.sh
   jps
   stop-all.sh
   history

The above is the full content of "how to run spark-shell on hadoop YARN". Thank you for reading! I hope this content has been helpful to you; if you want to learn more, welcome to follow the industry information channel.