In this article, I would like to share how to install and deploy Spinnaker with monitoring in a production environment. Most people do not know much about it, so I am sharing this article for your reference; I hope you will learn a lot from reading it. Let's get into it together.
1. Architecture analysis
Halyard + Kubernetes + Redis + MySQL 5.7 + S3
Redis: Gate, Orca, Clouddriver, Rosco, Igor, Fiat, Kayenta
S3: Front50, Kayenta
Data persistence
By default, Orca and Clouddriver store their data in Redis; we will convert them to SQL database storage.
By default, Front50 persists its data in S3; we will convert it to SQL database storage as well.
Use a Redis cluster external to the k8s cluster.
2. Preparatory work
A Redis cluster with 6 nodes (3 masters + 3 slaves).
A MySQL 5.7 database.
Deploy Minio for S3 storage.
Download the Halyard container image.
Download the container images required by the Spinnaker cluster (from Alibaba Cloud).
Download the BOM files required for the custom installation. (A quick verification sketch follows this list.)
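Before moving on, it helps to confirm that the backing services are reachable from the node that will run Halyard. Below is a minimal sketch, assuming the 192.168.1.200 endpoints and the minio.idevops.site domain used later in this article; adjust the addresses to your environment.

## Quick reachability checks (sketch; addresses are assumptions)
redis-cli -h 192.168.1.200 -p 6379 ping                                                # expect: PONG
mysql -h 192.168.1.200 -P 3306 -uroot -p -e 'SELECT VERSION();'                        # expect: 5.7.x
curl -s -o /dev/null -w "%{http_code}\n" http://minio.idevops.site/minio/health/live   # expect: 200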
2.1 Start the Halyard container
You can also install Halyard from a binary. It is best to run Halyard on a node that has a configured kubectl client, because the k8s cluster account information is needed later.
docker pull registry.cn-beijing.aliyuncs.com/spinnaker-cd/halyard:1.32.0

mkdir /root/.hal

docker run -itd --name halyard \
  -v /root/.hal:/home/spinnaker/.hal \
  -v /root/.kube:/home/spinnaker/.kube \
  registry.cn-beijing.aliyuncs.com/spinnaker-cd/halyard:1.32.0

## Enter the container as root to modify the configuration file
docker exec -it -u root halyard bash

## Set spinnaker.config.input.gcs.enabled = false
vi /opt/halyard/config/halyard.yml
spinnaker:
  artifacts:
    debian: https://dl.bintray.com/spinnaker-releases/debians
    docker: gcr.io/spinnaker-marketplace
  config:
    input:
      gcs:
        enabled: false
      writerEnabled: false
      bucket: halconfig

## Restart for the change to take effect: hal shutdown stops the daemon;
## then exit the container and start it again
hal shutdown
docker start halyard
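As a quick sanity check (my own sketch, not part of the original steps), confirm the daemon came back up and the GCS input really is disabled:

docker exec -it halyard bash
hal --version                                     # the CLI responds once the daemon is up
grep -A2 'gcs:' /opt/halyard/config/halyard.yml   # expect: enabled: false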
2.2 Download the required images
All images have been automatically synchronized to the Alibaba Cloud image repository (registry.cn-beijing.aliyuncs.com/spinnaker-cd/) via GitHub Actions, so you can pull them from there directly. For convenience, you can simply run the script below to download all images for the current version.
The BOM files and the image download script are in the package available at https://github.com/zeyangli/spinnaker-cd-install/actions.
## Upload to the server (the node running the halyard container)
scp 1.22.1-Image-Script.zip root@master.zy.com:/root
unzip 1.22.1-Image-Script.zip
cd 1.22.1
[root@master 1.22.1]# ls -a
.  ..  .boms  GetImages.sh  tagfile.txt
## .boms         needs to be placed in the .hal directory
## GetImages.sh  image download script
## tagfile.txt   image tags
sh -x GetImages.sh
chmod -R 777 .hal/
## Wait for the image download to complete (the script assumes passwordless ssh to the nodes)
tagfile.txt
## tagfile
[root@master 1.22.1]# cat tagfile.txt
echo:2.14.0-202008170018
clouddriver:6.11.0-20200818115831
deck:3.3.0-20200818132306
fiat:1.13.0-202008170018
front50:0.25.1-20200831095512
gate:1.18.1-20200825122721
igor:1.12.0-20200817200018
kayenta:0.17.0-202008170018
orca:2.16.0-202008170018
rosco:0.21.1-20200827112228
GetImages.sh
#!/bin/bash

S_REGISTRY="gcr.io/spinnaker-marketplace"                    # registry name referenced by the BOM
T_REGISTRY="registry.cn-beijing.aliyuncs.com/spinnaker-cd"   # Aliyun mirror to pull from (tunable)
NODES="node01.zy.com node02.zy.com"

## Download images on every node, then retag them to the name referenced by the BOM
GetImages(){
  echo -e "\033[43;34m=====GetImg=====\033[0m"
  IMAGES=$(cat tagfile.txt)
  for image in ${IMAGES}; do
    for node in ${NODES}; do
      echo -e "\033[32m${node} --> pull --> ${image}\033[0m"
      ssh ${node} "docker pull ${T_REGISTRY}/${image}"
      echo -e "\033[32m${node} --> tag --> ${image}\033[0m"
      ssh ${node} "docker tag ${T_REGISTRY}/${image} ${S_REGISTRY}/${image}"
    done
  done
  for node in ${NODES}; do
    echo -e "\033[43;34m=====${node}=====Image Information=====\033[0m"
    ssh ${node} "docker images | grep 'spinnaker-marketplace'"
  done
}

GetImages
2.3 Prepare the BOM files
[root@master 1.22.1]# mv .boms/ ~/.hal/
[root@master 1.22.1]# cd ~/.hal/
[root@master .hal]# cd .boms/
[root@master .boms]# ls
bom  clouddriver  deck  echo  fiat  front50  gate  igor  kayenta  orca  rosco
[root@master .boms]# tree .        # output abbreviated
.
├── bom
│   ├── 1.19.4.yml
│   └── 1.22.1.yml
├── clouddriver
│   ├── 6.7.3-20200401190525/clouddriver.yml
│   ├── 6.11.0-20200818115831/clouddriver.yml
│   └── clouddriver.yml
├── deck
│   ├── 3.0.2-20200324040016/settings.js
│   ├── 3.3.0-20200818132306/settings.js
│   └── settings.js
├── echo / fiat / front50 / gate / igor / kayenta / orca
│   └── one sub-directory per version (the old and new tags from tagfile.txt),
│       each holding that service's .yml, plus a default .yml at the top level
└── rosco
    ├── 0.18.1-20200401121252/
    ├── 0.21.1-20200827112228/
    │   ├── images.yml
    │   ├── packer/   # alicloud, aws, azure, docker, gce, huaweicloud, oci templates,
    │   │             # install_packages.sh and Windows provisioning scripts
    │   └── rosco.yml
    ├── images.yml
    ├── packer/       # same template set
    ├── README.md
    └── rosco.yml

37 directories, 91 files
3. Halyard configuration management
docker exec -it halyard bash
Halyard initialization configuration
Add the image registry (Harbor) and the k8s cluster account
Enable feature flags (pipeline-templates, artifacts, managed-pipeline-templates-v2-ui)
Configure Jenkins CI integration
Configure GitHub/GitLab integration
3.1 Halyard initialization configuration
## Set the Spinnaker version; --version specifies the version
hal config version edit --version local:1.22.1

## Set the time zone
hal config edit --timezone Asia/Shanghai

## Set storage to S3 (not actually used later, but it must be configured to work around a bug)
hal config storage edit --type s3 --no-validate

## Access method: set the domain names of deck and gate
hal config security ui edit --override-base-url http://spinnaker.idevops.site
hal config security api edit --override-base-url http://spin-gate.idevops.site
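Halyard prints the current value of a setting when the corresponding command is run without edit; a small sketch for double-checking what was just configured:

## Verify the settings above (sketch)
hal config version        # expect: local:1.22.1
hal config storage        # expect: s3
hal config security ui    # expect the deck base URL
hal config security api   # expect the gate base URL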
3.2 Add the image registry (Harbor) and k8s cluster account
hal config provider docker-registry enable --no-validate

hal config provider docker-registry account add my-harbor-registry \
  --address http://192.168.1.200:8088 \
  --username admin \
  --password Harbor12345

hal config provider kubernetes enable

hal config provider kubernetes account add default \
  --docker-registries my-harbor-registry \
  --context $(kubectl config current-context) \
  --service-account true \
  --omit-namespaces=kube-system,kube-public \
  --provider-version v2 \
  --no-validate

## Deployment method: distributed deployment in the spinnaker namespace
hal config deploy edit \
  --account-name default \
  --type distributed \
  --location spinnaker
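To verify both accounts were registered, Halyard can list them; a sketch (output shape varies slightly by version):

hal config provider docker-registry account list   # expect: my-harbor-registry
hal config provider kubernetes account list        # expect: default
hal config deploy                                  # expect: type distributed, location spinnaker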
3.3 Enable feature flags
## Enable some major features (others can be added later)
hal config features edit --pipeline-templates true
hal config features edit --artifacts true
hal config features edit --managed-pipeline-templates-v2-ui true
3.4 Configure Jenkins CI integration
## Configure Jenkins; the Jenkins server requires an account and password
hal config ci jenkins enable

hal config ci jenkins master add my-jenkins-master-01 \
  --address http://jenkins.idevops.site \
  --username admin \
  --password admin

## Enable csrf
hal config ci jenkins master edit my-jenkins-master-01 --csrf true
3.5 Configure GitHub/GitLab integration
# GitHub
## Reference: https://spinnaker.io/setup/artifacts/github/
## Create a token at https://github.com/settings/tokens
hal config artifact github enable
hal config artifact github account add my-github-account \
  --token 02eb8aa1c2cd67af305d1f606 \
  --username zey

# GitLab
## https://spinnaker.io/setup/artifacts/gitlab/
## Create a personal token (admin)
hal config artifact gitlab enable
hal config artifact gitlab account add my-gitlab-account \
  --token qqHX8T4VTpozbnX
4. Use an external Redis cluster
## service-settings
mkdir ~/.hal/default/service-settings/
vi ~/.hal/default/service-settings/redis.yml

overrideBaseUrl: redis://192.168.1.200:6379
skipLifeCycleManagement: true

## profiles
## /root/.hal/default/profiles
[root@master profiles]# vi gate-local.yml
redis:
  configuration:
    secure: true
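Since the Spinnaker pods must reach this external Redis from inside the cluster, a one-off test pod is a cheap check; the following sketch assumes the 192.168.1.200:6379 endpoint above:

## Run a throwaway redis-cli inside the cluster and ping the external Redis
kubectl run redis-ping --rm -it --restart=Never --image=redis:5 \
  -- redis-cli -h 192.168.1.200 -p 6379 ping      # expect: PONG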
5. Using the SQL database
5.1 Clouddriver Service
Create a database
CREATE DATABASE `clouddriver` DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;

GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, EXECUTE, SHOW VIEW
  ON `clouddriver`.* TO 'clouddriver_service'@'%' IDENTIFIED BY 'clouddriver@spinnaker.com';

GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, INDEX, ALTER, LOCK TABLES, EXECUTE, SHOW VIEW
  ON `clouddriver`.* TO 'clouddriver_migrate'@'%' IDENTIFIED BY 'clouddriver@spinnaker.com';
Modify the configuration file
## /root/.hal/default/profiles
bash-5.0$ cat clouddriver-local.yml
sql:
  enabled: true
  # read-only boolean toggles `SELECT` or `DELETE` health checks for all pools.
  # Especially relevant for clouddriver-ro and clouddriver-ro-deck which can
  # target a SQL read replica in their default pools.
  read-only: false
  taskRepository:
    enabled: true
  cache:
    enabled: true
    # These parameters were determined to be optimal via benchmark comparisons
    # in the Netflix production environment with Aurora. Setting these too low
    # or high may negatively impact performance. These values may be sub-optimal
    # in some environments.
    readBatchSize: 500
    writeBatchSize: 300
  scheduler:
    enabled: true
  # Enable clouddriver-caching's clean up agent to periodically purge old
  # clusters and accounts. Set to true when using the Kubernetes provider.
  unknown-agent-cleanup-agent:
    enabled: false
  connectionPools:
    default:
      # additional connection pool parameters are available here,
      # for more detail and to view defaults, see:
      # https://github.com/spinnaker/kork/blob/master/kork-sql/src/main/kotlin/com/netflix/spinnaker/kork/sql/config/ConnectionPoolProperties.kt
      default: true
      jdbcUrl: jdbc:mysql://192.168.1.200:3306/clouddriver
      user: clouddriver_service
      password: clouddriver@spinnaker.com
    # The following tasks connection pool is optional. At Netflix, clouddriver
    # instances pointed to Aurora read replicas have a tasks pool pointed at the
    # master. Instances where the default pool is pointed to the master omit a
    # separate tasks pool.
    tasks:
      user: clouddriver_service
      jdbcUrl: jdbc:mysql://192.168.1.200:3306/clouddriver
      password: clouddriver@spinnaker.com
  migration:
    user: clouddriver_migrate
    jdbcUrl: jdbc:mysql://192.168.1.200:3306/clouddriver
    password: clouddriver@spinnaker.com

redis:
  enabled: false
  cache:
    enabled: false
  scheduler:
    enabled: false
  taskRepository:
    enabled: false
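Before deploying, it is worth confirming that the service account can actually log in with the grants above; a minimal sketch:

## Verify the clouddriver_service account (sketch)
mysql -h 192.168.1.200 -P 3306 -u clouddriver_service -p'clouddriver@spinnaker.com' \
  -e 'USE clouddriver; SHOW TABLES;'
## An empty table list is normal at this point: the migration user
## populates the schema when Clouddriver starts for the first time.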
5.2 Front50 Service
Create a database
CREATE DATABASE `front50` DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;

GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, EXECUTE, SHOW VIEW
  ON `front50`.* TO 'front50_service'@'%' IDENTIFIED BY 'front50@spinnaker.com';

GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, INDEX, ALTER, LOCK TABLES, EXECUTE, SHOW VIEW
  ON `front50`.* TO 'front50_migrate'@'%' IDENTIFIED BY 'front50@spinnaker.com';
Modify the configuration file
## /root/.hal/default/profiles
bash-5.0$ cat front50-local.yml
spinnaker:
  s3:
    enabled: false

sql:
  enabled: true
  connectionPools:
    default:
      # additional connection pool parameters are available here,
      # for more detail and to view defaults, see:
      # https://github.com/spinnaker/kork/blob/master/kork-sql/src/main/kotlin/com/netflix/spinnaker/kork/sql/config/ConnectionPoolProperties.kt
      default: true
      jdbcUrl: jdbc:mysql://192.168.1.200:3306/front50
      user: front50_service
      password: front50@spinnaker.com
  migration:
    user: front50_migrate
    jdbcUrl: jdbc:mysql://192.168.1.200:3306/front50
    password: front50@spinnaker.com
5.3 Orca Service
Create a database
set tx_isolation = 'REPEATABLE-READ';

CREATE SCHEMA `orca` DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;

GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, EXECUTE, SHOW VIEW
  ON `orca`.* TO 'orca_service'@'%' IDENTIFIED BY 'orca@spinnaker.com';

GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, INDEX, ALTER, LOCK TABLES, EXECUTE, SHOW VIEW
  ON `orca`.* TO 'orca_migrate'@'%' IDENTIFIED BY 'orca@spinnaker.com';
Modify the configuration file
## /root/.hal/default/profiles
bash-5.0$ cat orca-local.yml
sql:
  enabled: true
  connectionPool:
    jdbcUrl: jdbc:mysql://192.168.1.200:3306/orca
    user: orca_service
    password: orca@spinnaker.com
    connectionTimeout: 5000
    maxLifetime: 30000
    # MariaDB-specific:
    maxPoolSize: 50
  migration:
    jdbcUrl: jdbc:mysql://192.168.1.200:3306/orca
    user: orca_migrate
    password: orca@spinnaker.com

# Ensure we're only using SQL for accessing execution state
executionRepository:
  sql:
    enabled: true
  redis:
    enabled: false

# Reporting on active execution metrics will be handled by SQL
monitor:
  activeExecutions:
    redis: false

# Use SQL for Orca's work queue
# Settings are from Netflix and may require adjustment for your environment
# Only validated with AWS Aurora MySQL 5.7
# Please PR if you have success with other databases
keiko:
  queue:
    sql:
      enabled: true
    redis:
      enabled: false

queue:
  zombieCheck:
    enabled: true
  pendingExecutionService:
    sql:
      enabled: true
    redis:
      enabled: false
6. Deployment
hal deploy apply --no-validate
Create an Ingress for access:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: spinnaker-service
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: spinnaker.idevops.site
    http:
      paths:
      - path: /
        backend:
          serviceName: spin-deck
          servicePort: 9000
  - host: spin-gate.idevops.site
    http:
      paths:
      - path: /
        backend:
          serviceName: spin-gate
          servicePort: 8084
  - host: spin-front50.idevops.site
    http:
      paths:
      - path: /
        backend:
          serviceName: spin-front50
          servicePort: 8080
  - host: spin-fiat.idevops.site
    http:
      paths:
      - path: /
        backend:
          serviceName: spin-fiat
          servicePort: 7003
kubectl create -f ingress.yml
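If DNS for the *.idevops.site names is not set up yet, the Ingress can still be verified by sending the Host header straight to the ingress controller; a sketch, where <ingress-ip> is a placeholder for your nginx-ingress address:

kubectl get ingress spinnaker-service -n spinnaker
## Expect HTTP 200 from deck through the ingress controller
curl -s -o /dev/null -w "%{http_code}\n" -H "Host: spinnaker.idevops.site" http://<ingress-ip>/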
7. Other settings
7.1 Authentication and Authorization
Authentication: LDAP, OAuth2
Authorization: LDAP, file
Enable authentication via LDAP or OAuth2 (choose one of the two; LDAP is recommended).
## Enable LDAP authentication
hal config security authn ldap edit \
  --user-search-base 'ou=devops,dc=zy,dc=com' \
  --url 'ldap://192.168.1.200:389' \
  --user-search-filter 'cn={0}' \
  --manager-dn 'cn=admin,dc=zy,dc=com' \
  --manager-password '12345678'

hal config security authn ldap enable

## --user-search-base    the part of the tree in which to search for users
## --url                 the LDAP server
## --user-search-filter  the filter used to search for the user DN
## --manager-dn          the LDAP manager user
## --manager-password    the password of the LDAP manager user

# GitHub
## First log in to GitHub and create an OAuth App.
## Official reference: https://spinnaker.io/setup/security/authentication/oauth/github/
hal config security authn oauth2 edit --provider github \
  --client-id 66826xxxxxxxxe0ecdbd7 \
  --client-secret d834851134e80a9xxxxxxe371613f05bc26

hal config security authn oauth2 enable
Authorization management
Roles can be defined through LDAP or through a static file. Choose one of the two.
LDAP-defined roles: for example, I have a group yunweizu of type groupOfUniqueNames in LDAP. Every user associated with this group gets the role yunweizu, and subsequent permissions are granted to yunweizu.
File-defined roles: write a static YAML file that defines each user and their corresponding roles.
## Use a YAML file
## Configure the user devops with the role yunweizu and user2 with the role demo, as follows:
users:
- username: devops
  roles:
  - yunweizu
- username: user2
  roles:
  - demo

hal config security authz enable
hal config security authz file edit --file-path=$HOME/.hal/userrole.yaml
hal config security authz edit --type file

## Authorization (based on LDAP group authorization)
hal config security authz ldap edit \
  --url 'ldap://192.168.1.200:389/dc=zy,dc=com' \
  --manager-dn 'cn=admin,dc=zy,dc=com' \
  --manager-password '12345678' \
  --user-dn-pattern 'cn={0}' \
  --group-search-base 'ou=devops' \
  --group-search-filter 'uniqueMember={0}' \
  --group-role-attributes 'cn' \
  --user-search-filter 'cn={0}'

hal config security authz edit --type ldap
hal config security authz enable
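If the LDAP group mapping does not take effect, it helps to run the same group search by hand. A sketch with ldapsearch, reusing the manager credentials above (the devops user DN is an assumption for illustration):

## List groups under ou=devops whose uniqueMember contains the user (sketch)
ldapsearch -x -H ldap://192.168.1.200:389 \
  -D 'cn=admin,dc=zy,dc=com' -w '12345678' \
  -b 'ou=devops,dc=zy,dc=com' \
  '(uniqueMember=cn=devops,ou=devops,dc=zy,dc=com)' cn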
After enabling authorization, you can control which users can access cluster accounts, image registries, and applications.
## Users with the yunweizu and group02 roles can use the default cluster account
hal config provider kubernetes account edit default \
  --add-read-permission yunweizu,group02 \
  --add-write-permission yunweizu

## Users with the yunweizu role can use the my-harbor-registry account
hal config provider docker-registry account edit my-harbor-registry \
  --read-permissions yunweizu \
  --write-permissions yunweizu
Enable pipeline permissions
## ~/.hal/default/profiles/orca-local.yml
tasks:
  useManagedServiceAccounts: true

## ~/.hal/default/profiles/settings-local.js
window.spinnakerSettings.feature.managedServiceAccounts = true;
Define Super Admin
vi ~/.hal/default/profiles/fiat-local.yml
bash-5.0$ cat fiat-local.yml
fiat:
  admin:
    roles:
    - devops-admin   ## the designated admin group
7.2 Mail Notification
~/.hal/default/profiles/echo-local.yml
[root@master profiles]# cat echo-local.yml
mail:
  enabled: true
  from: 250642@qq.com
spring:
  mail:
    host: smtp.qq.com
    username: 25642@qq.com
    password: ubxijwaah
    protocol: smtp
    default-encoding: utf-8
    properties:
      mail:
        display:
          sendname: SpinnakerAdmin
        smtp:
          port: 465
          auth: true
          starttls:
            enable: true
            required: true
          ssl:
            enable: true
        transport:
          protocol: smtp
        debug: true
~/.hal/default/profiles/settings-local.js
window.spinnakerSettings.notifications.email.enabled = true;
Update the configuration:
hal deploy apply --no-validate
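QQ mail uses implicit SSL on port 465, which is easy to misconfigure; before blaming echo, a quick sketch to check the SMTP endpoint itself:

## Expect a "220 ..." banner from smtp.qq.com after the TLS handshake
openssl s_client -connect smtp.qq.com:465 -quiet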
7.3 Canary analysis
Configure storage
hal config canary enable

## AWS S3: create a bucket named spinnaker-canary in Minio with read and write permissions
hal config canary aws enable

hal config canary aws account add my-canary \
  --bucket spinnaker-canary \
  --endpoint http://minio.idevops.site \
  --access-key-id AKIAIOSFODNN7EXAMPLE \
  --secret-access-key wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

hal config canary edit --default-storage-account my-canary
hal config canary aws edit --s3-enabled true
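The bucket must exist before Kayenta can write to it. A sketch for creating and checking it with the AWS CLI pointed at Minio, reusing the keys above:

export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
aws --endpoint-url http://minio.idevops.site s3 mb s3://spinnaker-canary   # create the bucket
aws --endpoint-url http://minio.idevops.site s3 ls                         # confirm it is listed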
Prometheus integration
## prometheus
hal config canary prometheus enable

## Basic authentication is configured here; omit the username and password options if there is no authentication.
hal config canary prometheus account add my-prometheus \
  --base-url http://prometheus.idevops.site \
  --username admin \
  --password admin

hal config canary edit --default-metrics-account my-prometheus
hal config canary edit --default-metrics-store prometheus

hal deploy apply --no-validate
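Kayenta talks to Prometheus over its HTTP API, so the account details can be confirmed before running a canary analysis; a small sketch:

## The query API should return JSON containing "status":"success" (sketch)
curl -s -u admin:admin 'http://prometheus.idevops.site/api/v1/query?query=up' | head -c 200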
7.4 Monitoring Spinnaker
hal config metric-stores prometheus enable
hal deploy apply --no-validate

[root@master monitor]# kubectl get pod -n spinnaker
NAME                               READY   STATUS    RESTARTS   AGE
spin-clouddriver-7cd94f5b9-cn22r   2/2     Running   2          4h5m
spin-deck-684854fbd7-cb7wh         1/1     Running   1          4h5m
spin-echo-746b45ff98-kcz5m         2/2     Running   2          4h5m
spin-front50-66b4f9966-l6r4h       2/2     Running   2          4h5m
spin-gate-6788588dfc-q8cpt         2/2     Running   2          4h5m
spin-igor-6f6fbbbb75-4b4jd         2/2     Running   2          4h5m
spin-kayenta-64fddf7db9-j4pqg      2/2     Running   2          4h5m
spin-orca-d5c488b48-5q8sp          2/2     Running   2          4h5m
spin-rosco-5f4bcb754c-9kgl9        2/2     Running   2          4h5m

## Each pod now carries a monitoring-daemon sidecar container, visible via describe
kubectl describe pod spin-gate-6788588dfc-q8cpt -n spinnaker
Once the services are running, Prometheus can scrape metrics from each pod at podIP:8008/prometheus_metrics; to pick the pods up automatically, add the following service discovery configuration.
## Prometheus needs the following scrape configuration
- job_name: 'spinnaker-services'
  kubernetes_sd_configs:
  - role: pod
  metrics_path: "/prometheus_metrics"
  relabel_configs:
  - source_labels: [__meta_kubernetes_pod_label_app]
    action: keep
    regex: 'spin'
  - source_labels: [__meta_kubernetes_pod_container_name]
    action: keep
    regex: 'monitoring-daemon'

## For prometheus-operator, configure as follows; ignore this if you discover targets another way.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: spinnaker-all-metrics
  labels:
    app: spin
    # this label is here to match the prometheus operator serviceMonitorSelector attribute
    # prometheus.prometheusSpec.serviceMonitorSelector
    # https://github.com/helm/charts/tree/master/stable/prometheus-operator
    release: prometheus-operator
spec:
  selector:
    matchLabels:
      app: spin
  namespaceSelector:
    any: true
  endpoints:
  # "port" is string only. "targetPort" is integer or string.
  - targetPort: 8008
    interval: 10s
    path: "/prometheus_metrics"
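You can confirm the sidecars actually expose metrics before wiring up Prometheus. A sketch that curls the first spin pod's monitoring endpoint (run it from a host with access to the pod network; the app=spin label matches what Halyard applies to its distributed deployment):

POD_IP=$(kubectl get pod -n spinnaker -l app=spin -o jsonpath='{.items[0].status.podIP}')
curl -s http://${POD_IP}:8008/prometheus_metrics | head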
Open the Prometheus targets page and you should see the spinnaker-services job with its scraped endpoints.
To display the data in Grafana, Spinnaker officially provides dashboard templates: https://github.com/spinnaker/spinnaker-monitoring/tree/master/spinnaker-monitoring-third-party/third_party/prometheus
Open the Grafana console and import the JSON templates. Since there are quite a few of them, create a dedicated folder to manage them.
That is all the content of the article "How to install and deploy monitoring in a Spinnaker production environment". Thank you for reading! I believe you now have a good understanding of the topic and hope the shared content helps you. If you want to learn more, welcome to follow the industry information channel!