Shulou (Shulou.com), SLTechnology News & Howtos > Internet Technology — 06/01 report, updated 2025-03-04
In this issue, the editor compares Spark on K8s with the Spark on K8s Operator. The article is rich in content and analyzes the two approaches from a practical point of view; we hope you get something out of it.
There are currently two main ways to run Spark applications on Kubernetes:

Spark on K8s, supported natively by Spark
Spark on K8s Operator, based on a Kubernetes operator

The former is the Spark community's implementation: a Kubernetes client was introduced into Spark so that Kubernetes can serve as its resource-management framework. The latter is an operator (developed by GoogleCloudPlatform, not an official Kubernetes project) that manages Spark applications through Kubernetes custom resources.
Comparison:

| | Spark on K8s | Spark on K8s Operator |
|---|---|---|
| Community support | Spark community | GoogleCloudPlatform (unofficial) |
| Version requirements | Spark >= 2.3, Kubernetes >= 1.6 | Spark > 2.3, Kubernetes >= 1.13 |
| Installation | Install per the official docs; requires pod create/list/edit/delete permissions in K8s, and you must build the image from source yourself, which is tedious | Requires a K8s admin to install incubator/sparkoperator, plus pod create/list/edit/delete permissions |
| Usage | Submit directly with spark-submit (see code 1 below); supports client and cluster modes | Submit a YAML configuration file (see code 2 below); supports client and cluster modes; for the full set of parameters, refer to the spark-operator configuration documentation |
| Advantages | Matches the familiar spark-submit workflow, so it is easier for existing Spark users | Easier for users who prefer to submit jobs as K8s configuration files |
| After a run | Driver resources are not released automatically | In spark-submit mode, driver resources are not released automatically |

How Spark on K8s works: whether the job is submitted in client or cluster mode, the entry point inherits from SparkApplication. A client-mode submission uses the subclass JavaMainApplication, which runs the user class via reflection; for a K8s job the clusterManager is KubernetesClusterManager, so at this level it is no different from submitting to YARN. For a cluster-mode K8s job, the entry point is KubernetesClientApplication: the client side creates a headless service (clusterIP: None) through which executors communicate with the driver over RPC (for example, task-submission interactions), and also creates a configMap whose name ends in -driver-conf-map.
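To make the headless service concrete, the object the client creates looks roughly like this. This is only an illustrative sketch: the name, selector label, and port numbers below are assumptions based on typical Spark-on-K8s deployments, not values taken from the article.

```yaml
# Sketch of the headless driver service (names/labels/ports are illustrative).
apiVersion: v1
kind: Service
metadata:
  name: spark-pi-driver-svc      # hypothetical name; Spark derives it from the app name
  namespace: dev
spec:
  clusterIP: None                # headless: DNS resolves directly to the driver pod IP
  selector:
    spark-role: driver           # assumed label identifying the driver pod
  ports:
    - name: driver-rpc-port      # executors connect back to the driver over RPC
      port: 7078
      targetPort: 7078
    - name: blockmanager
      port: 7079
      targetPort: 7079
```

Because the service is headless, executors resolve the service DNS name straight to the driver pod's IP instead of going through a virtual cluster IP, which is what makes the driver reachable for the RPC interactions described above.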
That configMap is mounted as a volume when the Spark driver pod is created, and its contents are passed to the driver via --properties-file when the driver process starts; this is how configuration items such as spark.driver.host are transferred to the driver. A configMap whose name ends in -hadoop-config is created at the same time. But how can a single K8s image distinguish between running the executor and running the driver? The answer lies in the Dockerfile (configured according to the differences between Hadoop and Kerberos environments) and the entrypoint script, where the shell distinguishes the driver case from the executor case.

How the Spark on K8s Operator works: it uses the K8s CRD controller mechanism to define a custom resource, and via the operator SDK it watches for the corresponding create, update, and delete events. When a CRD creation event is observed, it creates the pods according to the configuration in the corresponding YAML file and submits the Spark job; for the specific implementation, please refer to the spark-on-k8s-operator design documentation. Submission in cluster and client mode works on the same principle as Spark on K8s, because the operator reuses Spark's official image.

Code 1 (direct spark-submit):

```shell
bin/spark-submit \
  --master k8s://https://192.168.202.231:6443 \
  --deploy-mode cluster \
  --name spark-pi \
  --class org.apache.spark.examples.SparkPi \
  --conf spark.executor.instances=2 \
  --conf "spark.kubernetes.namespace=dev" \
  --conf "spark.kubernetes.authenticate.driver.serviceAccountName=lijiahong" \
  --conf "spark.kubernetes.container.image=harbor.k8s-test.uc.host.dxy/dev/spark-py:cdh-2.6.0-5.13.1" \
  --conf "spark.kubernetes.container.image.pullSecrets=regsecret" \
  --conf "spark.kubernetes.file.upload.path=hdfs:///tmp" \
  --conf "spark.kubernetes.container.image.pullPolicy=Always" \
  hdfs:///tmp/spark-examples_2.12-3.0.0.jar
```

Code 2 (SparkApplication YAML for the operator):

```yaml
apiVersion: "sparkoperator.k8s.io/v1beta2"
kind: SparkApplication
metadata:
  name: spark-pi
  namespace: dev
spec:
  type: Scala
  mode: cluster
  image: "gcr.io/spark-operator/spark:v3.0.0"
  imagePullPolicy: Always
  mainClass: org.apache.spark.examples.SparkPi
  mainApplicationFile: "local:///opt/spark/examples/jars/spark-examples_2.12-3.0.0.jar"
  sparkVersion: "3.0.0"
  restartPolicy:
    type: Never
  volumes:
    - name: "test-volume"
      hostPath:
        path: "/tmp"
        type: Directory
  driver:
    cores: 1
    coreLimit: "1200m"
    memory: "512m"
    labels:
      version: 3.0.0
    serviceAccount: lijiahong
    volumeMounts:
      - name: "test-volume"
        mountPath: "/tmp"
  executor:
    cores: 1
    instances: 1
    memory: "512m"
    labels:
      version: 3.0.0
    volumeMounts:
      - name: "test-volume"
        mountPath: "/tmp"
```

This concludes the editor's comparison between Spark on K8s and the Spark on K8s Operator. If you have similar doubts, the analysis above may help; to learn more, you are welcome to follow the industry information channel.
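As a footnote to the entrypoint discussion above: the driver/executor branching inside the single Spark image can be pictured with the sketch below. This is a simplified, hypothetical script, not the verbatim entrypoint.sh shipped in Spark's image; only the two main-class names are the real Spark classes that the actual script ends up launching.

```shell
#!/usr/bin/env bash
# Simplified sketch of how one container image can serve both roles:
# the pod's first argument (set by the submission machinery) selects
# which JVM main class the container should run.
select_command() {
  case "$1" in
    driver)
      # The driver pod re-invokes spark-submit, feeding it the
      # --properties-file materialized from the *-driver-conf-map volume.
      echo "org.apache.spark.deploy.SparkSubmit"
      ;;
    executor)
      # Executor pods run the coarse-grained executor backend and RPC
      # back to the driver through the headless service.
      echo "org.apache.spark.executor.CoarseGrainedExecutorBackend"
      ;;
    *)
      # Anything else is treated as an arbitrary command in the container.
      echo "$1"
      ;;
  esac
}

select_command "${1:-driver}"
```

The point of the sketch is simply that role selection happens at container start-up via the entrypoint, not by baking separate driver and executor images.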