Install JDK 1.7 or later (Hadoop 2.7.0 no longer supports JDK 1.6, and Spark 1.5.0 has also dropped JDK 1.6 support).
Install Scala 2.10.4.
Install Hadoop 2.x (at minimum, a running HDFS).
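Before configuring Spark, a quick sanity check of the installed versions from the shell (the expected outputs are illustrative):

java -version      # expect 1.7.x or later
scala -version     # expect Scala 2.10.4
hadoop version     # expect a Hadoop 2.x release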
spark-env.sh
export JAVA_HOME=                 # left blank in the original; set to your JDK install path
export SCALA_HOME=                # left blank in the original; set to your Scala install path
export HADOOP_CONF_DIR=/opt/modules/hadoop-2.2.0/etc/hadoop   # must be set to run on YARN
export SPARK_MASTER_IP=server1
export SPARK_MASTER_PORT=8888
export SPARK_MASTER_WEBUI_PORT=8080
export SPARK_WORKER_CORES=        # left blank in the original; set per machine
export SPARK_WORKER_INSTANCES=1
export SPARK_WORKER_MEMORY=26g
export SPARK_WORKER_PORT=7078
export SPARK_WORKER_WEBUI_PORT=8081
export SPARK_JAVA_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps"
The slaves file lists the worker nodes, one host per line:
xx.xx.xx.2
xx.xx.xx.3
xx.xx.xx.4
xx.xx.xx.5
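Both configuration files are usually created from the templates shipped in Spark's conf/ directory (the install path here is hypothetical):

cd /opt/modules/spark-1.5.0    # hypothetical Spark install directory
cp conf/spark-env.sh.template conf/spark-env.sh
cp conf/slaves.template conf/slaves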
When running spark-submit, default properties are read from the spark-defaults.conf file.
spark-defaults.conf
spark.master=spark://hadoop-spark.dargon.org:7077
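A few other commonly set properties can live in the same file (a sketch; the values are placeholders, but the property names are standard Spark settings):

spark.executor.memory   4g
spark.eventLog.enabled  true
spark.eventLog.dir      hdfs://server1:8020/spark-logs   # hypothetical HDFS path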
Start the cluster
start-master.sh
start-slaves.sh
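Both scripts live in Spark's sbin/ directory and are run on the master node. A quick way to confirm the daemons came up (ports follow the spark-env.sh above):

jps                                          # should list Master here, and Worker on each slave host
# Master web UI: http://server1:8080        (SPARK_MASTER_WEBUI_PORT)
# Worker web UI: http://<worker-host>:8081  (SPARK_WORKER_WEBUI_PORT)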
The spark-shell command actually executes the spark-submit command.
spark-submit --help lists the available options.
The deploy-mode of the driver program (the SparkContext) is either client (local) or cluster.
The default is client: the SparkContext runs on the machine that submits the job. If changed to cluster, the SparkContext runs inside the cluster.
For Spark on YARN in cluster mode, the SparkContext runs inside the Application Master.
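As a sketch of the two modes (the example class and jar path are illustrative, and yarn-cluster is the Spark 1.x master string for YARN cluster mode):

# client mode (default): the driver runs on the submitting machine
spark-submit --master spark://hadoop-spark.dargon.org:7077 \
  --deploy-mode client \
  --class org.apache.spark.examples.SparkPi \
  lib/spark-examples-1.5.0-hadoop2.2.0.jar 100

# yarn-cluster mode: the driver runs inside the Application Master
spark-submit --master yarn-cluster \
  --class org.apache.spark.examples.SparkPi \
  lib/spark-examples-1.5.0-hadoop2.2.0.jar 100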
spark-shell quick-start guide:
http://spark.apache.org/docs/latest/quick-start.html
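To follow the quick start against this cluster, point the shell at the standalone master (host name taken from spark-defaults.conf above):

spark-shell --master spark://hadoop-spark.dargon.org:7077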