2025-04-05 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/03 Report--
Runtime architecture diagram (image not reproduced here)
Download and compile
2.1 Download the source code and extract it
Download address:
tar -zxvf v2.3.2.tar.gz
2.2 Compile
cd spark-2.3.2
build/mvn install -DskipTests
build/mvn compile -Pkubernetes -pl resource-managers/kubernetes/core -am -DskipTests
build/mvn install -Pkubernetes -pl resource-managers/kubernetes/core -am -DskipTests
[root@compile spark-2.3.2]# ls assembly/target/scala-2.11/jars/ -la | grep spark-kub*
-rw-r--r-- 1 root root 381120 Sep 26 09:56 spark-kubernetes_2.11-2.3.2.jar
dev/make-distribution.sh --tgz -Phadoop-2.7 -Pkubernetes
Build a tar that supports the R language and Hive:
./dev/make-distribution.sh --name inspur-spark --pip --r --tgz -Psparkr -Phadoop-2.7 -Phive -Phive-thriftserver -Pkubernetes
Error:
Cannot find 'R_HOME'. Please specify 'R_HOME' or make sure R is properly installed.
We are only testing Spark running on Kubernetes this time, so we do not need to solve this problem for now.
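If the R profile is wanted later, the build host needs R installed and an R_HOME that `make-distribution.sh` can find. A minimal pre-flight check, assuming a standard R installation that provides `Rscript` (it only reports, it changes nothing):

```shell
# Report whether R is present before retrying a build with --r / -Psparkr.
if command -v Rscript >/dev/null 2>&1; then
  MSG="R found, R_HOME=$(Rscript -e 'cat(R.home())')"
else
  MSG="R not installed; keep building without --r / -Psparkr"
fi
echo "$MSG"
```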
Construct the Docker image:
./bin/docker-image-tool.sh -r bigdata.registry.com:5000 -t 2.3.2 build
./bin/docker-image-tool.sh -r bigdata.registry.com:5000 -t 2.3.2 push
When building the image, the package source dl-cdn.alpinelinux.org may be unreachable; switch to the Alibaba Cloud mirror instead.
Modify ./resource-managers/kubernetes/docker/src/main/dockerfiles/spark/Dockerfile:
RUN set -ex && \
    sed -i 's/dl-cdn.alpinelinux.org/mirrors.aliyun.com/' /etc/apk/repositories && \
    apk upgrade --no-cache && \
    apk add --no-cache bash tini libc6-compat linux-pam && \
    mkdir -p /opt/spark && \
    mkdir -p /opt/spark/work-dir && \
    touch /opt/spark/RELEASE && \
    rm /bin/sh && \
    ln -sv /bin/bash /bin/sh && \
    echo "auth required pam_wheel.so use_uid" >> /etc/pam.d/su && \
    chgrp root /etc/passwd && chmod ug+rw /etc/passwd
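To confirm the mirror substitution actually landed in the built image, one can read /etc/apk/repositories inside it. A sketch, assuming the image tag produced by the build step above; it falls back to a note when the image is not present locally:

```shell
# Verify the apk mirror inside the built image (tag assumed from the build step).
IMG=bigdata.registry.com:5000/spark:2.3.2
if command -v docker >/dev/null 2>&1 && docker image inspect "$IMG" >/dev/null 2>&1; then
  docker run --rm --entrypoint cat "$IMG" /etc/apk/repositories
else
  echo "image $IMG not present locally; run this check on the build host"
fi
```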
Because the project insight was created in the local private Harbor registry, tag and push the image with:
docker tag bigdata.registry.com:5000/spark:2.3.2 bigdata.registry.com:5000/insight/spark:2.3.2
docker push bigdata.registry.com:5000/insight/spark:2.3.2
Upload examples.jar to the httpd service:
[root@compile spark-2.3.2]# ll dist/examples/jars/spark-examples_2.11-2.3.2.jar
-rw-r--r-- 1 root root 1997551 Sep 26 09:56 dist/examples/jars/spark-examples_2.11-2.3.2.jar
[root@compile spark-2.3.2]# cp dist/examples/jars/spark-examples_2.11-2.3.2.jar /opt/mnt/www/html/spark/
[root@compile spark-2.3.2]# ll /opt/mnt/www/html/spark/
-rw-r--r-- 1 root root 1997551 Sep 26 10:26 spark-examples_2.11-2.3.2.jar
Prepare the Kubernetes environment, i.e. create and authorize a service account:
kubectl create serviceaccount spark -n spark
kubectl create clusterrolebinding spark-role --clusterrole=edit --serviceaccount=spark:spark --namespace=spark
In --serviceaccount=spark:spark, the first spark is the namespace and the second is the serviceaccount.
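Before submitting, the binding can be sanity-checked with `kubectl auth can-i`. The sketch below only prints the commands (so it can be reviewed on a machine without cluster access); run them on a node with a working kubeconfig:

```shell
# Print (not run) permission checks for the new service account.
NS=spark
SA=spark
echo "kubectl auth can-i create pods --as=system:serviceaccount:${NS}:${SA} -n ${NS}"
echo "kubectl auth can-i delete pods --as=system:serviceaccount:${NS}:${SA} -n ${NS}"
```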
Test:
bin/spark-submit \
  --master k8s://http://10.221.129.20:8080 \
  --deploy-mode cluster \
  --name spark-pi \
  --class org.apache.spark.examples.SparkPi \
  --conf spark.executor.instances=1 \
  --conf spark.kubernetes.container.image=bigdata.registry.com:5000/insight/spark:2.3.2 \
  --conf spark.kubernetes.namespace=spark \
  --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
  http://10.221.129.22/spark/spark-examples_2.11-2.3.2.jar
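The k8s:// master URL is just the API server address with a k8s:// prefix; it can be derived from the current kubeconfig when one is available. A sketch that falls back to the API server address used in this article:

```shell
# Derive the k8s:// master URL from the kubeconfig when kubectl is configured;
# otherwise fall back to the address used above.
if command -v kubectl >/dev/null 2>&1; then
  SERVER=$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}' 2>/dev/null)
fi
SERVER=${SERVER:-http://10.221.129.20:8080}
echo "k8s://${SERVER}"
```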
Run log:
2018-09-26 10:27:54 WARN Utils:66 - Kubernetes master URL uses HTTP instead of HTTPS. Ignoring.
2018-09-26 10:28:27 INFO LoggingPodStatusWatcherImpl:54-State changed, new state:
Pod name: spark-pi-7b0ffe8a4023370a872acdd679f024b1-driver
Namespace: default
Labels: spark-app-selector-> spark-74d52904a3794e8986895a12322c5cd9, spark-role-> driver
Pod uid: d9bce33c-c133-11e8-b988-fa163e609d06
Creation time: 2018-09-26T02:28:27Z
Service account name: default
Volumes: spark-init-properties, download-jars-volume, download-files-volume, default-token-7mnhw
Node name: N/A
Start time: N/A
Container images: N/A
Phase: Pending
Status: []
2018-09-26 10:28:27 INFO LoggingPodStatusWatcherImpl:54-State changed, new state:
Pod name: spark-pi-7b0ffe8a4023370a872acdd679f024b1-driver
Namespace: default
Labels: spark-app-selector-> spark-74d52904a3794e8986895a12322c5cd9, spark-role-> driver
Pod uid: d9bce33c-c133-11e8-b988-fa163e609d06
Creation time: 2018-09-26T02:28:27Z
Service account name: default
Volumes: spark-init-properties, download-jars-volume, download-files-volume, default-token-7mnhw
Node name: master2
Start time: N/A
Container images: N/A
Phase: Pending
Status: []
2018-09-26 10:28:27 INFO LoggingPodStatusWatcherImpl:54-State changed, new state:
Pod name: spark-pi-7b0ffe8a4023370a872acdd679f024b1-driver
Namespace: default
Labels: spark-app-selector-> spark-74d52904a3794e8986895a12322c5cd9, spark-role-> driver
Pod uid: d9bce33c-c133-11e8-b988-fa163e609d06
Creation time: 2018-09-26T02:28:27Z
Service account name: default
Volumes: spark-init-properties, download-jars-volume, download-files-volume, default-token-7mnhw
Node name: master2
Start time: 2018-09-26T02:28:27Z
Container images: bigdata.registry.com:5000/insight/spark:2.3.2
Phase: Pending
Status: [ContainerStatus (containerID=null, image=bigdata.registry.com:5000/insight/spark:2.3.2, imageID=, lastState=ContainerState (running=null, terminated=null, waiting=null, additionalProperties= {}), name=spark-kubernetes-driver, ready=false, restartCount=0, state=ContainerState (running=null, terminated=null, waiting=ContainerStateWaiting (message=null, reason=PodInitializing, additionalProperties= {}), additionalProperties= {}), additionalProperties= {})]
2018-09-26 10:28:28 INFO Client:54-Waiting for application spark-pi to finish...
2018-09-26 10:28:51 INFO LoggingPodStatusWatcherImpl:54-State changed, new state:
Pod name: spark-pi-7b0ffe8a4023370a872acdd679f024b1-driver
Namespace: default
Labels: spark-app-selector-> spark-74d52904a3794e8986895a12322c5cd9, spark-role-> driver
Pod uid: d9bce33c-c133-11e8-b988-fa163e609d06
Creation time: 2018-09-26T02:28:27Z
Service account name: default
Volumes: spark-init-properties, download-jars-volume, download-files-volume, default-token-7mnhw
Node name: master2
Start time: 2018-09-26T02:28:27Z
Container images: bigdata.registry.com:5000/insight/spark:2.3.2
Phase: Pending
Status: [ContainerStatus (containerID=null, image=bigdata.registry.com:5000/insight/spark:2.3.2, imageID=, lastState=ContainerState (running=null, terminated=null, waiting=null, additionalProperties= {}), name=spark-kubernetes-driver, ready=false, restartCount=0, state=ContainerState (running=null, terminated=null, waiting=ContainerStateWaiting (message=null, reason=PodInitializing, additionalProperties= {}), additionalProperties= {}), additionalProperties= {})]
2018-09-26 10:28:56 INFO LoggingPodStatusWatcherImpl:54-State changed, new state:
Pod name: spark-pi-7b0ffe8a4023370a872acdd679f024b1-driver
Namespace: default
Labels: spark-app-selector-> spark-74d52904a3794e8986895a12322c5cd9, spark-role-> driver
Pod uid: d9bce33c-c133-11e8-b988-fa163e609d06
Creation time: 2018-09-26T02:28:27Z
Service account name: default
Volumes: spark-init-properties, download-jars-volume, download-files-volume, default-token-7mnhw
Node name: master2
Start time: 2018-09-26T02:28:27Z
Container images: bigdata.registry.com:5000/insight/spark:2.3.2
Phase: Pending
Status: [ContainerStatus (containerID=null, image=bigdata.registry.com:5000/insight/spark:2.3.2, imageID=, lastState=ContainerState (running=null, terminated=null, waiting=null, additionalProperties= {}), name=spark-kubernetes-driver, ready=false, restartCount=0, state=ContainerState (running=null, terminated=null, waiting=ContainerStateWaiting (message=null, reason=PodInitializing, additionalProperties= {}), additionalProperties= {}), additionalProperties= {})]
2018-09-26 10:28:57 INFO LoggingPodStatusWatcherImpl:54-State changed, new state:
Pod name: spark-pi-7b0ffe8a4023370a872acdd679f024b1-driver
Namespace: default
Labels: spark-app-selector-> spark-74d52904a3794e8986895a12322c5cd9, spark-role-> driver
Pod uid: d9bce33c-c133-11e8-b988-fa163e609d06
Creation time: 2018-09-26T02:28:27Z
Service account name: default
Volumes: spark-init-properties, download-jars-volume, download-files-volume, default-token-7mnhw
Node name: master2
Start time: 2018-09-26T02:28:27Z
Container images: bigdata.registry.com:5000/insight/spark:2.3.2
Phase: Running
Status: [ContainerStatus (containerID=docker://3abe8f7ac19d2f52ed3ba84e32e076268ae0dfde83ff0a75b2359924d3bac412, image=bigdata.registry.com:5000/insight/spark:2.3.2, imageID=docker-pullable://bigdata.registry.com:5000/insight/spark@sha256:0bfd1a27778f97a1ec620446b599d9f1fda882e8c3945a04ce8435356a40efe8, lastState=ContainerState (running=null, terminated=null, waiting=null, additionalProperties= {}), name=spark-kubernetes-driver, ready=true, restartCount=0, state=ContainerState (running=ContainerStateRunning (startedAt=Time (time=2018-09-26T02:28:57Z, additionalProperties= {}), additionalProperties= {}), terminated=null, waiting=null, additionalProperties= {}), additionalProperties= {})]
2018-09-26 10:29:05 INFO LoggingPodStatusWatcherImpl:54-State changed, new state:
Pod name: spark-pi-7b0ffe8a4023370a872acdd679f024b1-driver
Namespace: default
Labels: spark-app-selector-> spark-74d52904a3794e8986895a12322c5cd9, spark-role-> driver
Pod uid: d9bce33c-c133-11e8-b988-fa163e609d06
Creation time: 2018-09-26T02:28:27Z
Service account name: default
Volumes: spark-init-properties, download-jars-volume, download-files-volume, default-token-7mnhw
Node name: master2
Start time: 2018-09-26T02:28:27Z
Container images: bigdata.registry.com:5000/insight/spark:2.3.2
Phase: Failed
Status: [ContainerStatus (containerID=docker://3abe8f7ac19d2f52ed3ba84e32e076268ae0dfde83ff0a75b2359924d3bac412, image=bigdata.registry.com:5000/insight/spark:2.3.2, imageID=docker-pullable://bigdata.registry.com:5000/insight/spark@sha256:0bfd1a27778f97a1ec620446b599d9f1fda882e8c3945a04ce8435356a40efe8, lastState=ContainerState (running=null, terminated=null, waiting=null, additionalProperties= {}), name=spark-kubernetes-driver, ready=false, restartCount=0, state=ContainerState (running=null, terminated=ContainerStateTerminated (containerID=docker://3abe8f7ac19d2f52ed3ba84e32e076268ae0dfde83ff0a75b2359924d3bac412, exitCode=1, finishedAt=Time (time=2018-09-26T02:29:04Z, additionalProperties= {}), message=null, reason=Error, signal=null, startedAt=Time (time=2018-09-26T02:28:57Z, additionalProperties= {}) AdditionalProperties= {}), waiting=null, additionalProperties= {}), additionalProperties= {})]
2018-09-26 10:29:05 INFO LoggingPodStatusWatcherImpl:54-Container final statuses:
Container name: spark-kubernetes-driver
Container image: bigdata.registry.com:5000/insight/spark:2.3.2
Container state: Terminated
Exit code: 1
2018-09-26 10:29:05 INFO Client:54-Application spark-pi finished.
2018-09-26 10:29:05 INFO ShutdownHookManager:54-Shutdown hook called
2018-09-26 10:29:05 INFO ShutdownHookManager:54-Deleting directory / tmp/spark-53c85221-619e-41c6-8b94-80b950852b7e
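The run ends with the driver terminating with exit code 1. Note that although the submission set spark.kubernetes.namespace=spark, the status blocks above show Namespace: default and Service account name: default, which suggests the namespace/serviceAccount settings did not take effect and may be related to the failure. The usual next diagnostics are the driver pod's logs and events; the sketch below prints the commands using the pod name from the log (printed for review, since the pod may already be gone):

```shell
# First diagnostics for the failed driver (pod name taken from the log above).
POD=spark-pi-7b0ffe8a4023370a872acdd679f024b1-driver
NS=default   # the log shows the pod ran in 'default', not 'spark'
echo "kubectl logs ${POD} -n ${NS}"
echo "kubectl describe pod ${POD} -n ${NS}"
```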
Programmatic submission:
val args = Array(
  // 10.110.25.114 / 10.221.129.20
  "--master", "k8s://http://10.221.129.20:8080",
  "--deploy-mode", "cluster",
  "--name", "spark-pi",
  "--class", "org.apache.spark.examples.SparkPi",
  "--conf", "spark.kubernetes.container.image=bigdata.registry.com:5000/insight/spark:2.3.2",
  "--conf", "spark.kubernetes.container.image.pullPolicy=Always",
  "--conf", "spark.kubernetes.namespace=spark",
  "--conf", "spark.executor.instances=1",
  "--conf", "spark.kubernetes.authenticate.driver.serviceAccountName=spark",
  "http://10.221.129.22/spark/spark-examples_2.11-2.3.2.jar",
  "1000")
for (arg
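For reference, the Scala argument array above corresponds one-to-one to a CLI invocation (same endpoints as earlier, plus the extra pullPolicy setting). It is kept in a variable here so it can be reviewed before running:

```shell
# CLI rendering of the Scala args array above; echo it, run it when ready.
CMD="bin/spark-submit \
  --master k8s://http://10.221.129.20:8080 \
  --deploy-mode cluster \
  --name spark-pi \
  --class org.apache.spark.examples.SparkPi \
  --conf spark.kubernetes.container.image=bigdata.registry.com:5000/insight/spark:2.3.2 \
  --conf spark.kubernetes.container.image.pullPolicy=Always \
  --conf spark.kubernetes.namespace=spark \
  --conf spark.executor.instances=1 \
  --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
  http://10.221.129.22/spark/spark-examples_2.11-2.3.2.jar 1000"
echo "$CMD"
```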