2025-04-02 Update From: SLTechnology News&Howtos

Shulou (Shulou.com) 06/03 Report--

Ansible practice: deployment of distributed logging system

This section contains:

Background

Architecture diagram of distributed log system

Create and use roles

JDK 7 role

JDK 8 role

Zookeeper role

Kafka role

Elasticsearch role

MySQL role

Nginx role

Redis role

Hadoop role

Spark role

1. Background

The product group is developing a distributed log system that uses many components. Deploying each piece of software manually is tedious and time-consuming, so we turned to Ansible playbooks with roles, which greatly improved deployment efficiency.

2. Architecture diagram of distributed log system

3. Create and use roles

Create a separate role for each piece of software or each cluster.

[root@node1 ~]# mkdir -pv ansible_playbooks/roles/{db_server,web_server,redis_server,zk_server,kafka_server,es_server,tomcat_server,flume_agent,hadoop,spark,hbase,hive,jdk7,jdk8}/{tasks,files,templates,meta,handlers,vars}

3.1 JDK7 role

[root@node1 jdk7]# pwd
/root/ansible_playbooks/roles/jdk7
[root@node1 jdk7]# ls
files  handlers  meta  tasks  templates  vars

1. Upload software package

Upload jdk-7u80-linux-x64.gz to the files directory.

2. Write tasks

[root@node1 jdk7]# vim tasks/main.yml
- name: mkdir necessary catalog
  file: path=/usr/java state=directory mode=0755
- name: copy and unzip jdk
  unarchive: src={{ jdk_package_name }} dest=/usr/java/
- name: set env
  lineinfile: dest={{ env_file }} insertafter="{{ item.position }}" line="{{ item.value }}" state=present
  with_items:
    - { position: EOF, value: "\n" }
    - { position: EOF, value: "export JAVA_HOME=/usr/java/{{ jdk_version }}" }
    - { position: EOF, value: "export PATH=$JAVA_HOME/bin:$PATH" }
    - { position: EOF, value: "export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar" }
- name: enforce env
  shell: source {{ env_file }}

3. Write vars

[root@node1 jdk7]# vim vars/main.yml
jdk_package_name: jdk-7u80-linux-x64.gz
env_file: /etc/profile
jdk_version: jdk1.7.0_80

4. Working with roles

In the roles sibling directory, create a jdk.yml file that defines your playbook.

[root@node1 ansible_playbooks]# vim jdk.yml
- hosts: jdk
  remote_user: root
  roles:
    - jdk7

Run playbook to install JDK7:

[root@node1 ansible_playbooks]# ansible-playbook jdk.yml

To use the jdk7 role, modify the variables in vars/main.yml and the hosts defined in /etc/ansible/hosts according to your environment.

3.2 JDK8 role

[root@node1 jdk8]# pwd
/root/ansible_playbooks/roles/jdk8
[root@node1 jdk8]# ls
files  handlers  meta  tasks  templates  vars

1. Upload installation package

Upload jdk-8u73-linux-x64.gz to the files directory.

2. Write tasks

[root@node1 jdk8]# vim tasks/main.yml
- name: mkdir necessary catalog
  file: path=/usr/java state=directory mode=0755
- name: copy and unzip jdk
  unarchive: src={{ jdk_package_name }} dest=/usr/java/
- name: set env
  lineinfile: dest={{ env_file }} insertafter="{{ item.position }}" line="{{ item.value }}" state=present
  with_items:
    - { position: EOF, value: "\n" }
    - { position: EOF, value: "export JAVA_HOME=/usr/java/{{ jdk_version }}" }
    - { position: EOF, value: "export PATH=$JAVA_HOME/bin:$PATH" }
    - { position: EOF, value: "export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar" }
- name: enforce env
  shell: source {{ env_file }}

3. Write vars

[root@node1 jdk8]# vim vars/main.yml
jdk_package_name: jdk-8u73-linux-x64.gz
env_file: /etc/profile
jdk_version: jdk1.8.0_73

4. Working with roles

In the roles sibling directory, create a jdk.yml file that defines your playbook.

[root@node1 ansible_playbooks]# vim jdk.yml
- hosts: jdk
  remote_user: root
  roles:
    - jdk8

Run playbook to install JDK8:

[root@node1 ansible_playbooks]# ansible-playbook jdk.yml

To use the jdk8 role, modify the variables in vars/main.yml and the hosts defined in /etc/ansible/hosts according to your environment.

3.3 Zookeeper role

Before deploying the Zookeeper cluster, configure /etc/hosts on each cluster node with the hostname-to-IP mapping of every node in the cluster.
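For example, pairing the zk_servers IPs with the hostnames used later in this section (the exact pairing is an assumption for illustration), every node's /etc/hosts would contain:

```
172.16.206.27 hadoop27
172.16.206.28 hadoop28
172.16.206.29 hadoop29
```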

[root@node1 zk_server]# pwd
/root/ansible_playbooks/roles/zk_server
[root@node1 zk_server]# ls
files  handlers  meta  tasks  templates  vars

1. Upload installation package

Upload zookeeper-3.4.6.tar.gz and clean_zklog.sh to the files directory; clean_zklog.sh is a script that cleans up Zookeeper logs.

2. Write tasks

Zookeeper tasks
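The original post shows the Zookeeper tasks only as a screenshot; the real file is in the GitHub repo. A minimal sketch of what roles/zk_server/tasks/main.yml plausibly contains, with install paths assumed for illustration:

```yaml
# Hypothetical sketch; paths and start command are assumptions.
- name: copy and unzip zookeeper
  unarchive: src=zookeeper-3.4.6.tar.gz dest=/usr/local/
- name: create data directory
  file: path=/usr/local/zookeeper-3.4.6/data state=directory
- name: install configuration file zoo.cfg
  template: src=zoo.cfg.j2 dest=/usr/local/zookeeper-3.4.6/conf/zoo.cfg
- name: write myid file (per-host value from /etc/ansible/hosts)
  shell: echo {{ myid }} > /usr/local/zookeeper-3.4.6/data/myid
- name: install log clean script
  copy: src=clean_zklog.sh dest=/usr/local/zookeeper-3.4.6/clean_zklog.sh mode=0755
- name: start zookeeper
  shell: /usr/local/zookeeper-3.4.6/bin/zkServer.sh start
```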

3. Write templates

Copy the default configuration file shipped inside the zookeeper-3.4.6.tar.gz package to the ../roles/zk_server/templates/ directory, rename it to zoo.cfg.j2, and modify its contents.

[root@node1 ansible_playbooks]# vim roles/zk_server/templates/zoo.cfg.j2

The configuration files are too long to include here; see https://github.com/jkzhao/ansible-godseye. Their contents have been explained in previous blog posts.
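For orientation, the server entries in zoo.cfg.j2 presumably reference the three hostname variables defined in the vars step; a hypothetical excerpt (ports are the Zookeeper defaults):

```
dataDir=/usr/local/zookeeper-3.4.6/data
clientPort=2181
server.1={{ server1_hostname }}:2888:3888
server.2={{ server2_hostname }}:2888:3888
server.3={{ server3_hostname }}:2888:3888
```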

4. Write vars

[root@node1 zk_server]# vim vars/main.yml
server1_hostname: hadoop27
server2_hostname: hadoop28
server3_hostname: hadoop29

In addition, the tasks use a variable {{myid}} whose value differs from host to host, so it is defined in the /etc/ansible/hosts file:

[zk_servers]
172.16.206.27 myid=1
172.16.206.28 myid=2
172.16.206.29 myid=3

5. Set up host group

/etc/ansible/hosts file:

[zk_servers]
172.16.206.27 myid=1
172.16.206.28 myid=2
172.16.206.29 myid=3

6. Working with roles

In the roles sibling directory, create a zk.yml file that defines your playbook.

[root@node1 ansible_playbooks]# vim zk.yml
- hosts: zk_servers
  remote_user: root
  roles:
    - zk_server

Run playbook to install the Zookeeper cluster:

[root@node1 ansible_playbooks]# ansible-playbook zk.yml

To use the zk_server role, modify the variables in vars/main.yml and the hosts defined in /etc/ansible/hosts according to your environment.

3.4 Kafka role

[root@node1 kafka_server]# pwd
/root/ansible_playbooks/roles/kafka_server
[root@node1 kafka_server]# ls
files  handlers  meta  tasks  templates  vars

1. Upload installation package

Upload kafka_2.11-0.9.0.1.tar.gz, kafka-manager-1.3.0.6.zip, and clean_kafkalog.sh to the files directory; clean_kafkalog.sh is a script that cleans up Kafka logs.

2. Write tasks

Kafka tasks
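The Kafka tasks also appear only as a screenshot in the original; the real file is in the GitHub repo. A rough sketch of roles/kafka_server/tasks/main.yml, with install paths and the manager-host condition assumed for illustration:

```yaml
# Hypothetical sketch; paths and conditions are assumptions.
- name: copy and unzip kafka
  unarchive: src=kafka_2.11-0.9.0.1.tar.gz dest=/usr/local/
- name: install configuration file server.properties
  template: src=server.properties.j2 dest=/usr/local/kafka_2.11-0.9.0.1/config/server.properties
- name: install log clean script
  copy: src=clean_kafkalog.sh dest=/usr/local/kafka_2.11-0.9.0.1/clean_kafkalog.sh mode=0755
- name: copy and unzip kafka-manager (only on the manager host)
  unarchive: src=kafka-manager-1.3.0.6.zip dest=/usr/local/
  when: ansible_default_ipv4.address == kafka_manager_ip
- name: start kafka
  shell: /usr/local/kafka_2.11-0.9.0.1/bin/kafka-server-start.sh -daemon /usr/local/kafka_2.11-0.9.0.1/config/server.properties
```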

3. Write templates

[root@node1 kafka_server]# vim templates/server.properties.j2

The configuration files are too long to include here; see https://github.com/jkzhao/ansible-godseye. Their contents have been explained in previous blog posts.

4. Write vars

[root@node1 kafka_server]# vim vars/main.yml
zk_cluster: 172.16.7.151:2181,172.16.7.152:2181,172.16.7.153:2181
kafka_manager_ip: 172.16.7.151

In addition, the template file uses a variable {{broker_id}} whose value differs from host to host, so it is defined in the /etc/ansible/hosts file:

[kafka_servers]
172.16.206.17 broker_id=0
172.16.206.31 broker_id=1
172.16.206.32 broker_id=2

5. Set up host group

/etc/ansible/hosts file:

[kafka_servers]
172.16.206.17 broker_id=0
172.16.206.31 broker_id=1
172.16.206.32 broker_id=2

6. Working with roles

In the roles sibling directory, create a kafka.yml file that defines your playbook.

[root@node1 ansible_playbooks]# vim kafka.yml
- hosts: kafka_servers
  remote_user: root
  roles:
    - kafka_server

Run playbook to install the kafka cluster:

[root@node1 ansible_playbooks]# ansible-playbook kafka.yml

To use the kafka_server role, modify the variables in vars/main.yml and the hosts defined in /etc/ansible/hosts according to your environment.

3.5 Elasticsearch role

[root@node1 es_server]# pwd
/root/ansible_playbooks/roles/es_server
[root@node1 es_server]# ls
files  handlers  meta  tasks  templates  vars

1. Upload installation package

Upload elasticsearch-2.3.3.tar.gz and elasticsearch-analysis-ik-1.9.3.zip to the files directory.

2. Write tasks

Elasticsearch tasks
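As with the other roles, the Elasticsearch tasks are shown only as a screenshot in the original; the real file is in the GitHub repo. A rough sketch of roles/es_server/tasks/main.yml, with the install path and run-as user assumed for illustration (Elasticsearch 2.x refuses to run as root):

```yaml
# Hypothetical sketch; paths and the "es" user are assumptions.
- name: copy and unzip elasticsearch
  unarchive: src=elasticsearch-2.3.3.tar.gz dest=/usr/local/
- name: install configuration file elasticsearch.yml
  template: src=elasticsearch.yml.j2 dest=/usr/local/elasticsearch-2.3.3/config/elasticsearch.yml
- name: install configuration file elasticsearch.in.sh
  template: src=elasticsearch.in.sh.j2 dest=/usr/local/elasticsearch-2.3.3/bin/elasticsearch.in.sh
- name: install ik analysis plugin
  unarchive: src=elasticsearch-analysis-ik-1.9.3.zip dest=/usr/local/elasticsearch-2.3.3/plugins/
- name: start elasticsearch as a non-root user
  shell: /usr/local/elasticsearch-2.3.3/bin/elasticsearch -d
  become: true
  become_method: su
  become_user: es
```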

3. Write templates

Put the templates elasticsearch.in.sh.j2 and elasticsearch.yml.j2 in the templates directory.

Note that variable names used in templates cannot contain a dot: a name like {{node.name}} is illegal, because Ansible interprets the dot as attribute access. Use underscores instead.
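The same restriction applies to per-host variables in the inventory; values here are taken from this section's host group for illustration:

```
# /etc/ansible/hosts — per-host variable names must not contain dots:
# 172.16.7.151 node.master=true   <- illegal
172.16.7.151 node_master=true
```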

The configuration files are too long to include here; see https://github.com/jkzhao/ansible-godseye. Their contents have been explained in previous blog posts.

4. Write vars

[root@node1 es_server]# vim vars/main.yml
ES_MEM: 2g
cluster_name: wisedu
master_ip: 172.16.7.151

In addition, the template file uses a variable {{node_master}} whose value differs from host to host, so it is defined in the /etc/ansible/hosts file:

[es_servers]
172.16.7.151 node_master=true
172.16.7.152 node_master=false
172.16.7.153 node_master=false

5. Set up host group

/etc/ansible/hosts file:

[es_servers]
172.16.7.151 node_master=true
172.16.7.152 node_master=false
172.16.7.153 node_master=false

6. Working with roles

In the roles sibling directory, create an es.yml file that defines your playbook.

[root@node1 ansible_playbooks]# vim es.yml
- hosts: es_servers
  remote_user: root
  roles:
    - es_server

Run playbook to install the Elasticsearch cluster:

[root@node1 ansible_playbooks]# ansible-playbook es.yml

To use the es_server role, modify the variables in vars/main.yml and the hosts defined in /etc/ansible/hosts according to your environment.

3.6 MySQL role

[root@node1 db_server]# pwd
/root/ansible_playbooks/roles/db_server
[root@node1 db_server]# ls
files  handlers  meta  tasks  templates  vars

1. Upload installation package

Put the finished rpm package mysql-5.6.27-1.x86_64.rpm into the / root/ansible_playbooks/roles/db_server/files/ directory.

[Note]: this rpm package was built in-house; packaging software as rpm speeds up deployment. See the earlier post "Quick RPM package making" for how to build it.

2. Write tasks

Mysql tasks
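The MySQL tasks are shown only as a screenshot in the original; the real file is in the GitHub repo. A rough sketch of roles/db_server/tasks/main.yml, where the staging path and service name are assumptions:

```yaml
# Hypothetical sketch; /tmp staging path and "mysql" service name are assumptions.
- name: copy mysql rpm package
  copy: src=mysql-5.6.27-1.x86_64.rpm dest=/tmp/mysql-5.6.27-1.x86_64.rpm
- name: install mysql from local rpm
  yum: name=/tmp/mysql-5.6.27-1.x86_64.rpm state=present
- name: start mysql
  service: name=mysql state=started enabled=yes
```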

3. Set up host group

# vim /etc/ansible/hosts
[db_servers]
172.16.7.152

4. Working with roles

In the roles sibling directory, create a db.yml file that defines your playbook.

[root@node1 ansible_playbooks]# vim db.yml
- hosts: db_servers
  remote_user: root
  roles:
    - db_server

Run playbook to install MySQL:

[root@node1 ansible_playbooks]# ansible-playbook db.yml

To use the db_server role, modify the host defined in /etc/ansible/hosts according to your environment.

3.7 Nginx role

[root@node1 web_server]# pwd
/root/ansible_playbooks/roles/web_server
[root@node1 web_server]# ls
files  handlers  meta  tasks  templates  vars

1. Upload installation package

Put the finished rpm package openresty-for-godseye-1.9.7.3-1.x86_64.rpm into the / root/ansible_playbooks/roles/web_server/files/ directory.

[Note]: building the rpm package in advance avoids compiling nginx during installation and speeds up deployment. This package bundles many files related to our system.

2. Write tasks

Nginx tasks
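The Nginx tasks are shown only as a screenshot in the original; the real file is in the GitHub repo. A rough sketch of roles/web_server/tasks/main.yml, with the staging path and OpenResty install layout assumed for illustration:

```yaml
# Hypothetical sketch; /tmp staging path and OpenResty paths are assumptions.
- name: copy openresty rpm package
  copy: src=openresty-for-godseye-1.9.7.3-1.x86_64.rpm dest=/tmp/
- name: install openresty from local rpm
  yum: name=/tmp/openresty-for-godseye-1.9.7.3-1.x86_64.rpm state=present
- name: install configuration file nginx.conf
  template: src=nginx.conf.j2 dest=/usr/local/openresty/nginx/conf/nginx.conf
- name: start nginx
  shell: /usr/local/openresty/nginx/sbin/nginx
```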

3. Write templates

Put the template nginx.conf.j2 in the templates directory.

The configuration files are too long to include here; see https://github.com/jkzhao/ansible-godseye. Their contents have been explained in previous blog posts.

4. Write vars

[root@node1 web_server]# vim vars/main.yml
elasticsearch_cluster: server 172.16.7.151:9200;server 172.16.7.152:9200;server 172.16.7.153:9200
kafka_server1: 172.16.7.151
kafka_server2: 172.16.7.152
kafka_server3: 172.16.7.153

Note: testing showed that variable values here cannot contain commas.

5. Set up host group

/etc/ansible/hosts file:

# vim /etc/ansible/hosts
[nginx_servers]
172.16.7.153

6. Working with roles

In the roles sibling directory, create a nginx.yml file that defines your playbook.

[root@node1 ansible_playbooks]# vim nginx.yml
- hosts: nginx_servers
  remote_user: root
  roles:
    - web_server

Run playbook to install Nginx:

[root@node1 ansible_playbooks]# ansible-playbook nginx.yml

To use the web_server role, modify the variables in vars/main.yml and the hosts defined in /etc/ansible/hosts according to your environment.

3.8 Redis role

[root@node1 redis_server]# pwd
/root/ansible_playbooks/roles/redis_server
[root@node1 redis_server]# ls
files  handlers  meta  tasks  templates  vars

1. Upload installation package

Put the finished rpm package redis-3.2.2-1.x86_64.rpm into the / root/ansible_playbooks/roles/redis_server/files/ directory.

2. Write tasks

Redis tasks
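The Redis tasks are shown only as a screenshot in the original; the real file is in the GitHub repo. A rough sketch of roles/redis_server/tasks/main.yml, where the staging path and service name are assumptions:

```yaml
# Hypothetical sketch; /tmp staging path and "redis" service name are assumptions.
- name: copy redis rpm package
  copy: src=redis-3.2.2-1.x86_64.rpm dest=/tmp/
- name: install redis from local rpm
  yum: name=/tmp/redis-3.2.2-1.x86_64.rpm state=present
- name: start redis
  service: name=redis state=started enabled=yes
```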

3. Set up host group

/etc/ansible/hosts file:

# vim /etc/ansible/hosts
[redis_servers]
172.16.7.152

4. Working with roles

In the roles sibling directory, create a redis.yml file that defines your playbook.

[root@node1 ansible_playbooks]# vim redis.yml
- hosts: redis_servers
  remote_user: root
  roles:
    - redis_server

Run playbook to install redis:

[root@node1 ansible_playbooks]# ansible-playbook redis.yml

To use the redis_server role, modify the variables in vars/main.yml and the hosts defined in /etc/ansible/hosts according to your environment.

3.9 Hadoop role

This is a fully distributed cluster deployment with highly available NameNode and ResourceManager.

Beforehand, configure the /etc/hosts file on each cluster node, synchronize node time, and set up passwordless SSH login from the cluster master nodes to the other nodes.

[root@node1 hadoop]# pwd
/root/ansible_playbooks/roles/hadoop
[root@node1 hadoop]# ls
files  handlers  meta  tasks  templates  vars

1. Upload installation package

Put hadoop-2.7.2.tar.gz in the / root/ansible_playbooks/roles/hadoop/files/ directory.

2. Write tasks

- name: install dependency package
  yum: name={{ item }} state=present
  with_items:
    - openssh
    - rsync
- name: create hadoop user
  user: name=hadoop password={{ password }}
  vars:
    # created with:
    # python -c 'import crypt; print crypt.crypt("This is my Password", "$1$SomeSalt$")'
    # >>> import crypt
    # >>> crypt.crypt('wisedu123', '$1$bigrandomsalt$')
    # '$1$bigrando$wzfZ2ifoHJPvaMuAelsBq0'
    password: $1$bigrando$wzfZ2ifoHJPvaMuAelsBq0
- name: copy and unzip hadoop
  # unarchive module owner and group only take effect on directories.
  unarchive: src=hadoop-2.7.2.tar.gz dest=/usr/local/
- name: create hadoop soft link
  file: src=/usr/local/hadoop-2.7.2 dest=/usr/local/hadoop state=link
- name: create hadoop logs directory
  file: dest=/usr/local/hadoop/logs mode=0775 state=directory
- name: change hadoop soft link owner and group
  # recurse=yes changes all files in the directory.
  file: path=/usr/local/hadoop owner=hadoop group=hadoop recurse=yes
- name: change hadoop-2.7.2 directory owner and group
  file: path=/usr/local/hadoop-2.7.2 owner=hadoop group=hadoop recurse=yes
- name: set hadoop env
  lineinfile: dest={{ env_file }} insertafter="{{ item.position }}" line="{{ item.value }}" state=present
  with_items:
    - { position: EOF, value: "\n" }
    - { position: EOF, value: "# Hadoop environment" }
    - { position: EOF, value: "export HADOOP_HOME=/usr/local/hadoop" }
    - { position: EOF, value: "export PATH=$PATH:${HADOOP_HOME}/bin:${HADOOP_HOME}/sbin" }
- name: enforce env
  shell: source {{ env_file }}
- name: install configuration file hadoop-env.sh.j2 for hadoop
  template: src=hadoop-env.sh.j2 dest=/usr/local/hadoop/etc/hadoop/hadoop-env.sh owner=hadoop group=hadoop
- name: install configuration file core-site.xml.j2 for hadoop
  template: src=core-site.xml.j2 dest=/usr/local/hadoop/etc/hadoop/core-site.xml owner=hadoop group=hadoop
- name: install configuration file hdfs-site.xml.j2 for hadoop
  template: src=hdfs-site.xml.j2 dest=/usr/local/hadoop/etc/hadoop/hdfs-site.xml owner=hadoop group=hadoop
- name: install configuration file mapred-site.xml.j2 for hadoop
  template: src=mapred-site.xml.j2 dest=/usr/local/hadoop/etc/hadoop/mapred-site.xml owner=hadoop group=hadoop
- name: install configuration file yarn-site.xml.j2 for hadoop
  template: src=yarn-site.xml.j2 dest=/usr/local/hadoop/etc/hadoop/yarn-site.xml owner=hadoop group=hadoop
- name: install configuration file slaves.j2 for hadoop
  template: src=slaves.j2 dest=/usr/local/hadoop/etc/hadoop/slaves owner=hadoop group=hadoop
- name: install configuration file hadoop-daemon.sh.j2 for hadoop
  template: src=hadoop-daemon.sh.j2 dest=/usr/local/hadoop/sbin/hadoop-daemon.sh owner=hadoop group=hadoop
- name: install configuration file yarn-daemon.sh.j2 for hadoop
  template: src=yarn-daemon.sh.j2 dest=/usr/local/hadoop/sbin/yarn-daemon.sh owner=hadoop group=hadoop
# make sure zookeeper is started, and then start hadoop.
# start journalnode
- name: start journalnode
  shell: /usr/local/hadoop/sbin/hadoop-daemon.sh start journalnode
  become: true
  become_method: su
  become_user: hadoop
  when: datanode == "true"
# format namenode
- name: format active namenode hdfs
  shell: /usr/local/hadoop/bin/hdfs namenode -format
  become: true
  become_method: su
  become_user: hadoop
  when: namenode_active == "true"
- name: start active namenode hdfs
  shell: /usr/local/hadoop/sbin/hadoop-daemon.sh start namenode
  become: true
  become_method: su
  become_user: hadoop
  when: namenode_active == "true"
- name: format standby namenode hdfs
  shell: /usr/local/hadoop/bin/hdfs namenode -bootstrapStandby
  become: true
  become_method: su
  become_user: hadoop
  when: namenode_standby == "true"
- name: stop active namenode hdfs
  shell: /usr/local/hadoop/sbin/hadoop-daemon.sh stop namenode
  become: true
  become_method: su
  become_user: hadoop
  when: namenode_active == "true"
# format ZKFC
- name: format ZKFC
  shell: /usr/local/hadoop/bin/hdfs zkfc -formatZK
  become: true
  become_method: su
  become_user: hadoop
  when: namenode_active == "true"
# start hadoop cluster
- name: start namenode
  shell: /usr/local/hadoop/sbin/start-dfs.sh
  become: true
  become_method: su
  become_user: hadoop
  when: namenode_active == "true"
- name: start yarn
  shell: /usr/local/hadoop/sbin/start-yarn.sh
  become: true
  become_method: su
  become_user: hadoop
  when: namenode_active == "true"
- name: start standby rm
  shell: /usr/local/hadoop/sbin/yarn-daemon.sh start resourcemanager
  become: true
  become_method: su
  become_user: hadoop
  when: namenode_standby == "true"

3. Write templates

Put the templates core-site.xml.j2, hadoop-daemon.sh.j2, hadoop-env.sh.j2, hdfs-site.xml.j2, mapred-site.xml.j2, slaves.j2, yarn-daemon.sh.j2, yarn-site.xml.j2 in the templates directory.

The configuration files are too long to include here; see https://github.com/jkzhao/ansible-godseye. Their contents have been explained in previous blog posts.

4. Write vars

[root@node1 hadoop]# vim vars/main.yml
env_file: /etc/profile
# hadoop-env.sh.j2 file variables.
JAVA_HOME: /usr/java/jdk1.8.0_73
# core-site.xml.j2 file variables.
ZK_NODE1: node1:2181
ZK_NODE2: node2:2181
ZK_NODE3: node3:2181
# hdfs-site.xml.j2 file variables.
NAMENODE1_HOSTNAME: node1
NAMENODE2_HOSTNAME: node2
DATANODE1_HOSTNAME: node3
DATANODE2_HOSTNAME: node4
DATANODE3_HOSTNAME: node5
# mapred-site.xml.j2 file variables.
MR_MODE: yarn
# yarn-site.xml.j2 file variables.
RM1_HOSTNAME: node1
RM2_HOSTNAME: node2

5. Set up host group

/etc/ansible/hosts file:

# vim /etc/ansible/hosts
[hadoop]
172.16.7.151 namenode_active=true namenode_standby=false datanode=false
172.16.7.152 namenode_active=false namenode_standby=true datanode=false
172.16.7.153 namenode_active=false namenode_standby=false datanode=true
172.16.7.154 namenode_active=false namenode_standby=false datanode=true
172.16.7.155 namenode_active=false namenode_standby=false datanode=true

6. Working with roles

In the roles sibling directory, create a hadoop.yml file that defines your playbook.

[root@node1 ansible_playbooks]# vim hadoop.yml
- hosts: hadoop
  remote_user: root
  roles:
    - jdk8
    - hadoop

Run playbook to install the hadoop cluster:

[root@node1 ansible_playbooks]# ansible-playbook hadoop.yml

To use the hadoop role, modify the variables in vars/main.yml and the hosts defined in /etc/ansible/hosts according to your environment.

3.10 Spark role

Spark is deployed in standalone mode (without HA).

[root@node1 spark]# pwd
/root/ansible_playbooks/roles/spark
[root@node1 spark]# ls
files  handlers  meta  tasks  templates  vars

1. Upload installation package

Put scala-2.10.6.tgz and spark-1.6.1-bin-hadoop2.6.tgz in the /root/ansible_playbooks/roles/spark/files/ directory.

2. Write tasks

- name: copy and unzip scala
  unarchive: src=scala-2.10.6.tgz dest=/usr/local/
- name: set scala env
  lineinfile: dest={{ env_file }} insertafter="{{ item.position }}" line="{{ item.value }}" state=present
  with_items:
    - { position: EOF, value: "\n" }
    - { position: EOF, value: "# Scala environment" }
    - { position: EOF, value: "export SCALA_HOME=/usr/local/scala-2.10.6" }
    - { position: EOF, value: "export PATH=$SCALA_HOME/bin:$PATH" }
- name: copy and unzip spark
  unarchive: src=spark-1.6.1-bin-hadoop2.6.tgz dest=/usr/local/
- name: rename spark directory
  command: mv /usr/local/spark-1.6.1-bin-hadoop2.6 /usr/local/spark-1.6.1
- name: set spark env
  lineinfile: dest={{ env_file }} insertafter="{{ item.position }}" line="{{ item.value }}" state=present
  with_items:
    - { position: EOF, value: "\n" }
    - { position: EOF, value: "# Spark environment" }
    - { position: EOF, value: "export SPARK_HOME=/usr/local/spark-1.6.1" }
    - { position: EOF, value: "export PATH=$SPARK_HOME/bin:$PATH" }
- name: enforce env
  shell: source {{ env_file }}
- name: install configuration file for spark
  template: src=slaves.j2 dest=/usr/local/spark-1.6.1/conf/slaves
- name: install configuration file for spark
  template: src=spark-env.sh.j2 dest=/usr/local/spark-1.6.1/conf/spark-env.sh
- name: start spark cluster
  shell: /usr/local/spark-1.6.1/sbin/start-all.sh
  tags:
    - start

3. Write templates

Put the templates slaves.j2 and spark-env.sh.j2 in the / root/ansible_playbooks/roles/spark/templates/ directory.

The configuration files are too long to include here; see https://github.com/jkzhao/ansible-godseye. Their contents have been explained in previous blog posts.

4. Write vars

[root@node1 spark]# vim vars/main.yml
env_file: /etc/profile
# spark-env.sh.j2 file variables
JAVA_HOME: /usr/java/jdk1.8.0_73
SCALA_HOME: /usr/local/scala-2.10.6
SPARK_MASTER_HOSTNAME: node1
SPARK_HOME: /usr/local/spark-1.6.1
SPARK_WORKER_MEMORY: 256M
HIVE_HOME: /usr/local/apache-hive-2.1.0-bin
HADOOP_CONF_DIR: /usr/local/hadoop/etc/hadoop/
# slaves.j2 file variables
SLAVE1_HOSTNAME: node2
SLAVE2_HOSTNAME: node3

5. Set up host group

/etc/ansible/hosts file:

# vim /etc/ansible/hosts
[spark]
172.16.7.151
172.16.7.152
172.16.7.153

6. Working with roles

In the roles sibling directory, create a spark.yml file that defines your playbook.

[root@node1 ansible_playbooks]# vim spark.yml
- hosts: spark
  remote_user: root
  roles:
    - spark

Run playbook to install the spark cluster:

[root@node1 ansible_playbooks]# ansible-playbook spark.yml

To use the spark role, modify the variables in vars/main.yml and the hosts defined in /etc/ansible/hosts according to your environment.

[Note]: all the files are on GitHub: https://github.com/jkzhao/ansible-godseye.
