How to configure Spark Executor in a Kubernetes environment
This article introduces how to configure Spark Executor in a Kubernetes environment. Many people run into this problem in practice, so let's walk through how to handle it. I hope you read it carefully and come away with something useful!
When Spark runs a task, the driver needs to reach many ports on each Executor; these ports are random and are addressed by host name. This makes direct access between a Kubernetes environment and a big data environment difficult. With the following configuration, a big data cluster can reach Spark Executors running inside Kubernetes.
1. When a Spark Executor runs, it listens on several random ports. To run in a K8S environment, these ports need to be fixed, and they must fall within the NodePort range assigned by the K8S cluster: 30000-32767.
# Port for the driver to listen on; used to communicate with the executors and the standalone Master (default: random)
spark.driver.port 30920
# Port for the driver's file server to listen on (default: random)
spark.fileserver.port 30921
# Port for the driver's HTTP broadcast server to listen on (default: random)
spark.broadcast.port 30922
# Port for the driver's HTTP class server to listen on (default: random)
spark.replClassServer.port 30923
# Port for the block manager to listen on; these exist on both the driver and the executors (default: random)
spark.blockManager.port 30924
# Port for the executor to listen on; used to communicate with the driver (default: random)
spark.executor.port 30925
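For reference, here is a minimal sketch of how these fixed ports could be passed at submission time; the master URL, main class, and application JAR below are placeholders, not values from the original article:

spark-submit \
  --master spark://hadoop-master01:7077 \
  --class com.example.MyApp \
  --conf spark.driver.port=30920 \
  --conf spark.fileserver.port=30921 \
  --conf spark.broadcast.port=30922 \
  --conf spark.replClassServer.port=30923 \
  --conf spark.blockManager.port=30924 \
  --conf spark.executor.port=30925 \
  my-app.jar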
2. Create a StatefulSet for the Spark Executor so that each Pod gets a stable DNS name of the form: $(podname).$(headless service name).$(namespace).svc.cluster.local
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-executor-statefulset
  namespace: [namespace]
  labels:
    app: my-executor-statefulset
spec:
  serviceName: my-executor
  replicas: 1
  selector:
    matchLabels:
      app: my-executor-pod
      version: [version]
  template:
    metadata:
      labels:
        app: my-executor-pod
        version: [version]
    spec:
      containers:
      - name: my-executor-pod
        image: 192.168.0.12/[image]:[version]
        imagePullPolicy: Always
        ports:
        - containerPort: 5011
      hostAliases:
      - ip: 192.168.0.10
        hostnames:
        - hadoop-master01
      - ip: 192.168.0.11
        hostnames:
        - hadoop-slave02
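Note that the StatefulSet above references serviceName: my-executor. The original article does not show that headless Service, so here is a minimal sketch of what it is assumed to look like; clusterIP: None is what makes it headless and enables the per-Pod DNS names:

apiVersion: v1
kind: Service
metadata:
  name: my-executor
  namespace: [namespace]
spec:
  clusterIP: None
  selector:
    app: my-executor-pod
  ports:
  - port: 5011
    name: tcp-port
    protocol: TCP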
3. Create a NodePort-type Service for the Spark Executor, exposing the fixed ports configured in step 1.
apiVersion: v1
kind: Service
metadata:
  name: my-executor-svc
  namespace: [namespace]
  labels:
    app: my-executor-pod
spec:
  ports:
  - port: 5011
    name: tcp-port
    protocol: TCP
  - port: 4040
    name: spark-http-port
    protocol: TCP
    nodePort: 30028
  - port: 30920
    name: spark-driver-port
    protocol: TCP
    nodePort: 30920
  - port: 30921
    name: spark-fileserver-port
    protocol: TCP
    nodePort: 30921
  - port: 30922
    name: spark-broadcast-port
    protocol: TCP
    nodePort: 30922
  - port: 30923
    name: spark-replclassserver-port
    protocol: TCP
    nodePort: 30923
  - port: 30924
    name: spark-blockmanager-port
    protocol: TCP
    nodePort: 30924
  - port: 30925
    name: spark-executor-port
    protocol: TCP
    nodePort: 30925
  selector:
    app: my-executor-pod
  type: NodePort
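Once both manifests are written, they can be applied and checked in the usual way; the file names below are placeholders:

kubectl apply -f my-executor-statefulset.yaml
kubectl apply -f my-executor-svc.yaml
# Verify that the Pod is running and the NodePort mappings are in place
kubectl -n [namespace] get pods -l app=my-executor-pod
kubectl -n [namespace] get svc my-executor-svc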
4. On every machine in the big data environment, add a hosts entry mapping the StatefulSet DNS name $(podname).$(headless service name).$(namespace).svc.cluster.local to the IP of any node in the K8S cluster.
192.168.0.12 my-executor-statefulset-0.my-executor.test2.svc.cluster.local
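To confirm connectivity from the big data side, one quick check (hypothetical, not part of the original article) is to resolve the name and probe one of the fixed NodePorts from a Hadoop node:

# From hadoop-master01 or hadoop-slave02 (assumes ICMP is allowed and netcat is installed)
ping -c 1 my-executor-statefulset-0.my-executor.test2.svc.cluster.local
nc -zv my-executor-statefulset-0.my-executor.test2.svc.cluster.local 30924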