Many newcomers are unclear about how to operate and maintain the Spark Thrift Server. To help with that, this article explains the steps in detail; anyone who needs it can follow along and hopefully take something away.
Spark thriftserver operation and maintenance:
Run the following as root on the spark_master_machine host.
Start thriftserver
/root/cdh/spark/spark-1.4.1-bin-hadoop2.6/sbin/start-thriftserver.sh \
--hiveconf hive.server2.thrift.port=10000 \
--hiveconf hive.server2.thrift.bind.host=spark_master_machine \
--master spark://spark_master_machine:7077 \
--executor-memory 24g --executor-cores 8 --total-executor-cores 136 \
--driver-memory 10g \
--driver-java-options -XX:MaxPermSize=2g
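To verify the server is accepting connections, you can point the beeline client bundled with Spark at the Thrift port (the host, port, and user below simply mirror the flags above; adjust them for your setup):
/root/cdh/spark/spark-1.4.1-bin-hadoop2.6/bin/beeline -u jdbc:hive2://spark_master_machine:10000 -n root
A successful connection drops you at a jdbc:hive2://... prompt, from which you can run SQL such as show tables;.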
Stop thriftserver
/root/cdh/spark/spark-1.4.1-bin-hadoop2.6/sbin/stop-thriftserver.sh
Note:
1. To give the Spark Thrift Server driver more cores, set spark.driver.cores in spark-defaults.conf.
Running ./sbin/start-thriftserver.sh --help shows which parameters are accepted, and there is no --driver-cores option among them, so the only way to control how many cores the thriftserver driver uses is to set it in spark-defaults.conf. (Do not submit Spark jobs from this machine; submit them from the script machine.)
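As the note says, the list of accepted flags comes from the launcher's help output; run it from the Spark home directory:
cd /root/cdh/spark/spark-1.4.1-bin-hadoop2.6
./sbin/start-thriftserver.sh --help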
#
The spark-defaults.conf configuration is as follows:
spark.master spark://spark_master_machine:7077
spark.eventLog.enabled true
spark.eventLog.dir hdfs://namenodewithoutport/user/root/kk
spark.driver.cores 10
#
2. Specify --total-executor-cores to limit how many executors are created.
With --executor-cores 6 --executor-memory 16g and no total cap, any worker that still has 6 idle cores and 16g of free memory launches a second executor, so each worker ends up using its full Spark allocation (32g here) and the application ends up with 34 executors. Adding --total-executor-cores 102 caps it at 102 / 6 = 17 executors, as worked through below.
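To make the arithmetic explicit (the 17-worker, 12-core/32g-per-worker shape is inferred from the numbers in the note, not stated in the original):
# Without a cap:
#   per worker: 12 cores / 6 cores-per-executor = 2 executors
#   memory used per worker: 2 * 16g = 32g (the whole allocation)
#   total: 17 workers * 2 = 34 executors
# With --total-executor-cores 102:
#   102 / 6 = 17 executors, i.e. one per worker,
#   leaving 6 cores and 16g free on each worker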
3. (To keep beeline from failing with MaxPermSize (PermGen) errors, add the configuration --driver-java-options -XX:MaxPermSize=2g. Note that this only raises MaxPermSize for the Thrift Server's driver process; the executors of this application still run with the default -XX:MaxPermSize=128m.)
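If the executors' PermGen also needs raising, the usual knob is the spark.executor.extraJavaOptions property in spark-defaults.conf; a minimal sketch (the 512m value is an illustrative assumption, not from the original):
# illustrative value; size PermGen to your workload
spark.executor.extraJavaOptions -XX:MaxPermSize=512m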