2025-02-24 Update From: SLTechnology News&Howtos shulou
This article explains how to use Kolla-Ansible to deploy OpenStack Pike on a single CentOS 7 node. I hope you will get something out of it.
Special instructions
Although this is a single-node deployment, you can also deploy multiple nodes just by modifying the host manifest file; the rest of the configuration is essentially the same.
Two network cards are used here: Neutron's virtual network runs on eth2, and all other services run on eth0.
Because Neutron places eth2 into the qrouter-XXX network namespace, the host cannot communicate directly through eth2.
To use eth2's network from the host, either run the relevant commands with "ip netns exec qrouter-XXX", or set up a bridge, bind eth2 and a virtual network card to it, hand the virtual network card to Neutron, and let the host reach eth2 through the bridge.
Because there is only one node, the HAProxy service is disabled and all VIPs are set to the IP of eth0.
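The bridge approach described above can be sketched as follows. This is only a sketch: the names "br-eth2", "veth0" and "veth1" are my own choices, not from the article, and the commands need root:

```shell
# Sketch only: bridge eth2 so that both the host and Neutron can use it.
# br-eth2, veth0 and veth1 are assumed names.
ip link add br-eth2 type bridge                  # create the bridge
ip link add veth0 type veth peer name veth1      # create a veth pair
ip link set eth2 master br-eth2                  # enslave the physical NIC
ip link set veth0 master br-eth2                 # enslave one end of the pair
ip link set br-eth2 up
ip link set veth0 up
ip link set veth1 up
# Now hand veth1 to Neutron (neutron_external_interface: "veth1")
# and give the host its address on br-eth2 instead of eth2.
```

With this layout the host keeps connectivity through br-eth2 while Neutron owns veth1.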
Basic environment
Install the CentOS 7 x64 1708 distribution in VMware 14; it is best to start from a clean, minimal system.
System memory 4 GB, free hard disk space 30 GB, kernel version "3.10.0-693.11.6.el7.x86_64".
There are two network cards: eth0 is "192.168.195.170" and eth2 is "192.168.162.170".
Modify "/etc/hosts" by adding the line "192.168.195.170 controller".
Version requirement
The official requirements for installing software packages related to the Pike version are as follows:
Component      Min Version  Max Version  Comment
Ansible        2.2.0        none         On deployment host
Docker         1.10.0       none         On target nodes
Docker Python  2.0.0        none         On target nodes
Python Jinja2  2.8.0        none         On deployment host

Service configuration
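A quick way to compare an installed version against the minimums in the table is GNU "sort -V"; the helper name "version_ge" is my own, not part of any tool:

```shell
# Returns success when version $1 >= version $2 (GNU sort -V does the ordering).
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Example: check a reported Ansible version against the 2.2.0 minimum.
version_ge 2.4.2 2.2.0 && echo "Ansible OK"   # prints "Ansible OK"
```

The same helper works for the Docker and Jinja2 versions reported by "docker --version" and "python -c 'import jinja2; print(jinja2.__version__)'".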
Start the NTP service:
$ systemctl enable ntpd.service && systemctl start ntpd.service && systemctl status ntpd.service
Shut down the libvirtd service:
$ systemctl stop libvirtd.service && systemctl disable libvirtd.service && systemctl status libvirtd.service
Turn off the firewall:
$ systemctl stop firewalld && systemctl disable firewalld && systemctl status firewalld

Install and configure the Docker service

Install the packages
If an old Docker is installed, uninstall it first, otherwise it may not be compatible:
$ yum remove -y docker docker-io docker-selinux python-docker-py
Add Docker's Yum repository:
$ vi /etc/yum.repos.d/docker.repo
[dockerrepo]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/main/centos/$releasever/
enabled=1
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpg
Install the Docker package:
$ yum update
$ yum install -y epel-release
$ yum install -y docker-engine docker-engine-selinux

Configure a domestic image mirror
Use Ali's Docker image service (you can also apply for an address yourself):
$ mkdir -p /etc/docker
$ vi /etc/docker/daemon.json
{
  "registry-mirrors": ["https://7g5a4z30.mirror.aliyuncs.com"]
}
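The same file can also be written non-interactively with a heredoc; the "DOCKER_ETC" variable is my own addition so the snippet can be pointed at a different directory for testing:

```shell
# Write the registry-mirror configuration without opening an editor.
# DOCKER_ETC defaults to the real path used in this article.
DOCKER_ETC="${DOCKER_ETC:-/etc/docker}"
mkdir -p "$DOCKER_ETC"
cat > "$DOCKER_ETC/daemon.json" <<'EOF'
{
  "registry-mirrors": ["https://7g5a4z30.mirror.aliyuncs.com"]
}
EOF
```

The quoted 'EOF' keeps the JSON verbatim, with no shell expansion.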
Restart the Docker service:
$ systemctl daemon-reload && systemctl enable docker && systemctl restart docker && systemctl status docker
Check whether the image service is normal:
$ docker run --rm hello-world
Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub. (amd64)
 3. The Docker daemon created a new container from that image which runs the executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://cloud.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/engine/userguide/

Configure a local Registry service
Run the Registry container and map it to port 4000:
$ docker run -d --name registry --restart=always -p 4000:5000 -v /opt/registry:/var/lib/registry registry:2
Modify the Docker service configuration and trust the local Registry service:
$ vi /usr/lib/systemd/system/docker.service
...
#ExecStart=/usr/bin/dockerd
ExecStart=/usr/bin/dockerd --insecure-registry controller:4000
...
Restart the Docker service:
$ systemctl daemon-reload && systemctl restart docker
Test whether the Registry service is working:
$ curl -X GET http://controller:4000/v2/_catalog
{"repositories":[]}

Configure Docker options for Kolla-Ansible
Configure Docker shared mount:
$ mkdir -pv /etc/systemd/system/docker.service.d
$ vi /etc/systemd/system/docker.service.d/kolla.conf
[Service]
MountFlags=shared
Restart the Docker service:
$ systemctl daemon-reload && systemctl restart docker && systemctl status docker

Import the Kolla-Ansible Docker images
Download the Docker image of OpenStack Pike:
$ wget http://tarballs.openstack.org/kolla/images/centos-source-registry-pike.tar.gz
Decompress the Docker image of OpenStack Pike:
$ tar zxvf centos-source-registry-pike.tar.gz -C /opt/registry/
Check whether the Docker image has been added to the Registry service (restart the Registry container if you cannot see the following information):
$ curl -X GET http://controller:4000/v2/_catalog
{"repositories":["lokolla/centos-source-aodh-api","lokolla/centos-source-aodh-base","lokolla/centos-source-aodh-evaluator","lokolla/centos-source-aodh-expirer","lokolla/centos-source-aodh-listener","lokolla/centos-source-aodh-notifier","lokolla/centos-source-barbican-api", ... ,"lokolla/centos-source-kolla-toolbox","lokolla/centos-source-kube-apiserver-amd64"]}
(The full response lists every "lokolla/centos-source-*" image imported from the tarball; only the beginning and end are shown here.)
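If you only want the image names rather than raw JSON, the catalog response can be split with grep. This is a sketch run on a shortened sample response; in practice, pipe the output of the curl command above instead:

```shell
# Extract repository names from a registry v2 catalog response.
# 'catalog' is a shortened sample; replace the echo with the curl call.
catalog='{"repositories":["lokolla/centos-source-base","lokolla/centos-source-keystone"]}'
echo "$catalog" | grep -o '"[^"]*"' | tr -d '"' | grep -v '^repositories$'
```

This prints one repository name per line, which is convenient for counting or grepping for a particular service.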
If the above images cannot be listed, restart the Registry container:
$ docker restart registry

Install and configure Kolla-Ansible

Install Kolla-Ansible
Install dependent packages:
$ yum install -y python-devel python-pip libffi-devel gcc openssl-devel git
If a urllib3 error occurs later, execute the following command twice (answer "y" at the interactive prompts):
$ pip uninstall urllib3 && pip install urllib3
Upgrade pip:
$ pip install -U pip
If there is a docker-py, uninstall it, otherwise an error may be reported later:
$ pip uninstall docker-py
Install Kolla-Ansible:
$ pip install shade docker kolla ansible kolla-ansible

Configure Kolla-Ansible
Copy the default Kolla-Ansible manifest file:
$ mkdir -pv /opt/kolla/config
$ cd /opt/kolla/config
$ cp -rv /usr/share/kolla-ansible/ansible/inventory/* .
Copy the default Kolla-Ansible global configuration files:
$ mkdir -pv /etc/kolla/
$ cp -rv /usr/share/kolla-ansible/etc_examples/kolla/* /etc/kolla/
Configure Nova; because we are running inside a virtual machine, use qemu instead of kvm:
$ mkdir -pv /etc/kolla/config/nova
$ vi /etc/kolla/config/nova/nova-compute.conf
[libvirt]
virt_type=qemu
cpu_mode=none
Generate a random password file:
$ kolla-genpwd
Modify the admin password, which will be used later on the Web page:
$ vi /etc/kolla/passwords.yml
...
keystone_admin_password: admin
...
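Instead of editing the file by hand, the generated password can be replaced with sed; a sketch, with "set_kolla_password" being my own helper name, not a Kolla tool:

```shell
# Replace a generated password with a fixed value (key must already exist).
set_kolla_password() {   # usage: set_kolla_password <key> <value> <file>
  sed -i "s|^$1:.*|$1: $2|" "$3"
}

# Example against the file from this article:
# set_kolla_password keystone_admin_password admin /etc/kolla/passwords.yml
```

This only touches the matching line, so all other generated passwords stay random.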
Modify the global configuration:
$ vi /etc/kolla/globals.yml
...
docker_registry: "controller:4000"
docker_namespace: "lokolla"
kolla_install_type: "source"
openstack_release: "5.0.1"
kolla_internal_vip_address: "192.168.195.170"
network_interface: "eth0"
neutron_external_interface: "eth2"
enable_haproxy: "no"
...
Generate an SSH key and authorize it on this node:
$ ssh-keygen
$ ssh-copy-id -i ~/.ssh/id_rsa.pub root@controller
Configure the single-node manifest file (there is currently only one node):
$ cd /opt/kolla/config/
$ vi all-in-one
...
[control]
#localhost ansible_connection=local
controller

[network]
#localhost ansible_connection=local
controller

[compute]
#localhost ansible_connection=local
controller

[storage]
#localhost ansible_connection=local
controller

[deployment]
#localhost ansible_connection=local
controller
...

Deploy OpenStack
Check that the configuration is correct:
$ kolla-ansible prechecks -i all-in-one
Pull the images from the Registry in advance:
$ kolla-ansible pull -i all-in-one
Start deploying OpenStack:
$ kolla-ansible deploy -i all-in-one
Generate the environment variable settings script:
$ kolla-ansible post-deploy -i all-in-one
$ cat /etc/kolla/admin-openrc.sh
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_AUTH_URL=http://192.168.195.170:35357/v3
export OS_INTERFACE=internal
export OS_IDENTITY_API_VERSION=3
export OS_REGION_NAME=RegionOne
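It is easy to forget to source this file before running openstack commands; the small check below verifies that the key OS_* variables are present ("check_openrc" is my own function name, not from the article):

```shell
# Verify that the core OpenStack client variables are set in the environment.
check_openrc() {
  for v in OS_USERNAME OS_PASSWORD OS_AUTH_URL OS_PROJECT_NAME; do
    eval "val=\$$v"
    if [ -z "$val" ]; then echo "missing $v"; return 1; fi
  done
  echo "openrc OK"
}
```

Run "check_openrc" after sourcing admin-openrc.sh; any missing variable is named and the function returns failure.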
If you want to clean up the deployed OpenStack, execute the following command:
$ kolla-ansible destroy -i all-in-one --yes-i-really-really-mean-it

Verify OpenStack

Initialize the basic environment
Install the OpenStack command line tool:
$ pip install python-openstackclient
$ pip install python-neutronclient
$ which openstack
/usr/bin/openstack
Set the environment variable:
$ . /etc/kolla/admin-openrc.sh
Edit the network configuration in the initialization script:
$ vi /usr/share/kolla-ansible/init-runonce
...
EXT_NET_CIDR='192.168.162.0/24'
EXT_NET_RANGE='start=192.168.162.50,end=192.168.162.100'
EXT_NET_GATEWAY='192.168.162.1'
...
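If you prefer not to open an editor, the three EXT_NET_* lines can be patched with sed. This is a sketch: "patch_ext_net" is my own helper name, and it assumes the assignments look exactly like the lines above:

```shell
# Patch the external-network settings of an init-runonce file in place.
patch_ext_net() {   # usage: patch_ext_net <file>
  sed -i \
    -e "s|^EXT_NET_CIDR=.*|EXT_NET_CIDR='192.168.162.0/24'|" \
    -e "s|^EXT_NET_RANGE=.*|EXT_NET_RANGE='start=192.168.162.50,end=192.168.162.100'|" \
    -e "s|^EXT_NET_GATEWAY=.*|EXT_NET_GATEWAY='192.168.162.1'|" \
    "$1"
}

# Example against the real script:
# patch_ext_net /usr/share/kolla-ansible/init-runonce
```

Using "|" as the sed delimiter avoids escaping the "/" in the CIDR values.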
Initialize the basic operating environment (image, network, etc.):
$ /usr/share/kolla-ansible/init-runonce
View routing information:
$ openstack router list
+--------------------------------------+-------------+--------+-------+-------------+-------+----------------------------------+
| ID                                   | Name        | Status | State | Distributed | HA    | Project                          |
+--------------------------------------+-------------+--------+-------+-------------+-------+----------------------------------+
| db705217-8d02-4a02-a172-8f604ed24686 | demo-router | ACTIVE | UP    | False       | False | d888f922844e4e45822969bf9f7d5494 |
+--------------------------------------+-------------+--------+-------+-------------+-------+----------------------------------+

$ openstack router show demo-router
+-------------------------+----------------------------------------------------------------------+
| Field                   | Value                                                                |
+-------------------------+----------------------------------------------------------------------+
| admin_state_up          | UP                                                                   |
| availability_zone_hints |                                                                      |
| availability_zones      | nova                                                                 |
| created_at              | 2018-01-18T01:58:39Z                                                 |
| description             |                                                                      |
| distributed             | False                                                                |
| external_gateway_info   | {"network_id": "625bc00d-cbc5-40ed-9821-0a1768d6737f",               |
|                         | "enable_snat": true, "external_fixed_ips": [{"subnet_id":            |
|                         | "8a078bf2-ebc7-423b-90b4-c2bcf2abfffb", "ip_address":                |
|                         | "192.168.162.53"}]}                                                  |
| flavor_id               | None                                                                 |
| ha                      | False                                                                |
| id                      | db705217-8d02-4a02-a172-8f604ed24686                                 |
| interfaces_info         | [{"subnet_id": "464e9329-6140-4471-b421-1b5bc48cf567", "ip_address": |
|                         | "10.0.0.1", "port_id": "15509c1e-7706-4e3b-bb24-1c5eb862b1a6"}]      |
| name                    | demo-router                                                          |
| project_id              | d888f922844e4e45822969bf9f7d5494                                     |
| revision_number         | 4                                                                    |
| routes                  |                                                                      |
| status                  | ACTIVE                                                               |
| tags                    |                                                                      |
| updated_at              | 2018-01-18T01:59:01Z                                                 |
+-------------------------+----------------------------------------------------------------------+
View a list of networks:
$ openstack network list
+--------------------------------------+----------+--------------------------------------+
| ID                                   | Name     | Subnets                              |
+--------------------------------------+----------+--------------------------------------+
| 2e2e4e65-1661-45a3-af53-04cc5c2838e6 | demo-net | 464e9329-6140-4471-b421-1b5bc48cf567 |
| 625bc00d-cbc5-40ed-9821-0a1768d6737f | public1  | 8a078bf2-ebc7-423b-90b4-c2bcf2abfffb |
+--------------------------------------+----------+--------------------------------------+

$ openstack subnet list
+--------------------------------------+----------------+--------------------------------------+------------------+
| ID                                   | Name           | Network                              | Subnet           |
+--------------------------------------+----------------+--------------------------------------+------------------+
| 464e9329-6140-4471-b421-1b5bc48cf567 | demo-subnet    | 2e2e4e65-1661-45a3-af53-04cc5c2838e6 | 10.0.0.0/24      |
| 8a078bf2-ebc7-423b-90b4-c2bcf2abfffb | public1-subnet | 625bc00d-cbc5-40ed-9821-0a1768d6737f | 192.168.162.0/24 |
+--------------------------------------+----------------+--------------------------------------+------------------+
View subnet information:
$ openstack subnet show public1-subnet
+-------------------+--------------------------------------+
| Field             | Value                                |
+-------------------+--------------------------------------+
| allocation_pools  | 192.168.162.50-192.168.162.100       |
| cidr              | 192.168.162.0/24                     |
| created_at        | 2018-01-18T01:58:26Z                 |
| description       |                                      |
| dns_nameservers   |                                      |
| enable_dhcp       | False                                |
| gateway_ip        | 192.168.162.1                        |
| host_routes       |                                      |
| id                | 8a078bf2-ebc7-423b-90b4-c2bcf2abfffb |
| ip_version        | 4                                    |
| ipv6_address_mode | None                                 |
| ipv6_ra_mode      | None                                 |
| name              | public1-subnet                       |
| network_id        | 625bc00d-cbc5-40ed-9821-0a1768d6737f |
| project_id        | d888f922844e4e45822969bf9f7d5494     |
| revision_number   | 0                                    |
| segment_id        | None                                 |
| service_types     |                                      |
| subnetpool_id     | None                                 |
| tags              |                                      |
| updated_at        | 2018-01-18T01:58:26Z                 |
+-------------------+--------------------------------------+

$ openstack subnet show demo-subnet
+-------------------+--------------------------------------+
| Field             | Value                                |
+-------------------+--------------------------------------+
| allocation_pools  | 10.0.0.2-10.0.0.254                  |
| cidr              | 10.0.0.0/24                          |
| created_at        | 2018-01-18T01:58:33Z                 |
| description       |                                      |
| dns_nameservers   | 8.8.8.8                              |
| enable_dhcp       | True                                 |
| gateway_ip        | 10.0.0.1                             |
| host_routes       |                                      |
| id                | 464e9329-6140-4471-b421-1b5bc48cf567 |
| ip_version        | 4                                    |
| ipv6_address_mode | None                                 |
| ipv6_ra_mode      | None                                 |
| name              | demo-subnet                          |
| network_id        | 2e2e4e65-1661-45a3-af53-04cc5c2838e6 |
| project_id        | d888f922844e4e45822969bf9f7d5494     |
| revision_number   | 0                                    |
| segment_id        | None                                 |
| service_types     |                                      |
| subnetpool_id     | None                                 |
| tags              |                                      |
| updated_at        | 2018-01-18T01:58:33Z                 |
+-------------------+--------------------------------------+

Create a virtual machine
Create a virtual machine template:
$ openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano
+----------------------------+---------+
| Field                      | Value   |
+----------------------------+---------+
| OS-FLV-DISABLED:disabled   | False   |
| OS-FLV-EXT-DATA:ephemeral  | 0       |
| disk                       | 1       |
| id                         | 0       |
| name                       | m1.nano |
| os-flavor-access:is_public | True    |
| properties                 |         |
| ram                        | 64      |
| rxtx_factor                | 1.0     |
| swap                       |         |
| vcpus                      | 1       |
+----------------------------+---------+
View existing virtual machine templates:
$ openstack flavor list
+----+-----------+-------+------+-----------+-------+-----------+
| ID | Name      |   RAM | Disk | Ephemeral | VCPUs | Is Public |
+----+-----------+-------+------+-----------+-------+-----------+
| 0  | m1.nano   |    64 |    1 |         0 |     1 | True      |
| 1  | m1.tiny   |   512 |    1 |         0 |     1 | True      |
| 2  | m1.small  |  2048 |   20 |         0 |     1 | True      |
| 3  | m1.medium |  4096 |   40 |         0 |     2 | True      |
| 4  | m1.large  |  8192 |   80 |         0 |     4 | True      |
| 5  | m1.xlarge | 16384 |  160 |         0 |     8 | True      |
+----+-----------+-------+------+-----------+-------+-----------+
Create and start a virtual machine (if you have multiple nodes, you can manually specify that the virtual machine is created on the compute01 node using a parameter like "--availability-zone nova:compute01"):
$ openstack server create --image cirros --flavor m1.nano --key-name mykey --nic net-id=2e2e4e65-1661-45a3-af53-04cc5c2838e6 demo1
+-------------------------------------+-----------------------------------------------+
| Field                               | Value                                         |
+-------------------------------------+-----------------------------------------------+
| OS-DCF:diskConfig                   | MANUAL                                        |
| OS-EXT-AZ:availability_zone         |                                               |
| OS-EXT-SRV-ATTR:host                | None                                          |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None                                          |
| OS-EXT-SRV-ATTR:instance_name       |                                               |
| OS-EXT-STS:power_state              | NOSTATE                                       |
| OS-EXT-STS:task_state               | scheduling                                    |
| OS-EXT-STS:vm_state                 | building                                      |
| OS-SRV-USG:launched_at              | None                                          |
| OS-SRV-USG:terminated_at            | None                                          |
| accessIPv4                          |                                               |
| accessIPv6                          |                                               |
| addresses                           |                                               |
| adminPass                           | vjqwZUjeo2xJ                                  |
| config_drive                        |                                               |
| created                             | 2018-01-18T02:05:23Z                          |
| flavor                              | m1.nano (1)                                   |
| hostId                              |                                               |
| id                                  | c7de4969-15d3-4008-b33e-39d9918c3d3e          |
| image                               | cirros (9f37578a-d1ca-478e-b0aa-baa9f768b271) |
| key_name                            | mykey                                         |
| name                                | demo1                                         |
| progress                            | 0                                             |
| project_id                          | d888f922844e4e45822969bf9f7d5494              |
| properties                          |                                               |
| security_groups                     | name='default'                                |
| status                              | BUILD                                         |
| updated                             | 2018-01-18T02:05:23Z                          |
| user_id                             | 706d089591be428e9b71ab1d9ebb0ec5              |
| volumes_attached                    |                                               |
+-------------------------------------+-----------------------------------------------+
View virtual machine information:
$ openstack server show demo1
+-------------------------------------+----------------------------------------------------------+
| Field                               | Value                                                    |
+-------------------------------------+----------------------------------------------------------+
| OS-DCF:diskConfig                   | MANUAL                                                   |
| OS-EXT-AZ:availability_zone         | nova                                                     |
| OS-EXT-SRV-ATTR:host                | CentOS7-LR                                               |
| OS-EXT-SRV-ATTR:hypervisor_hostname | CentOS7-LR                                               |
| OS-EXT-SRV-ATTR:instance_name       | instance-00000001                                        |
| OS-EXT-STS:power_state              | Running                                                  |
| OS-EXT-STS:task_state               | None                                                     |
| OS-EXT-STS:vm_state                 | active                                                   |
| OS-SRV-USG:launched_at              | 2018-01-18T02:05:45.000000                               |
| OS-SRV-USG:terminated_at            | None                                                     |
| accessIPv4                          |                                                          |
| accessIPv6                          |                                                          |
| addresses                           | demo-net=10.0.0.9                                        |
| config_drive                        |                                                          |
| created                             | 2018-01-18T02:05:23Z                                     |
| flavor                              | m1.nano (1)                                              |
| hostId                              | 4cc6aee5e2276b69f1a1f80bb213686d15b0db4825fb4e052c36c0e2 |
| id                                  | c7de4969-15d3-4008-b33e-39d9918c3d3e                     |
| image                               | cirros (9f37578a-d1ca-478e-b0aa-baa9f768b271)            |
| key_name                            | mykey                                                    |
| name                                | demo1                                                    |
| progress                            | 0                                                        |
| project_id                          | d888f922844e4e45822969bf9f7d5494                         |
| properties                          |                                                          |
| security_groups                     | name='default'                                           |
| status                              | ACTIVE                                                   |
| updated                             | 2018-01-18T02:05:46Z                                     |
| user_id                             | 706d089591be428e9b71ab1d9ebb0ec5                         |
| volumes_attached                    |                                                          |
+-------------------------------------+----------------------------------------------------------+

Access the virtual machine through the Web
Check demo1's Web console URL (you can also open the console through "http://controller"):
$ openstack console url show demo1
+-------+--------------------------------------------------------------------------------------+
| Field | Value                                                                                |
+-------+--------------------------------------------------------------------------------------+
| type  | novnc                                                                                |
| url   | http://192.168.195.170:6080/vnc_auto.html?token=1a7c224c-4e17-4b88-8568-3062377ebf56 |
+-------+--------------------------------------------------------------------------------------+
The admin user password for Horizon is "admin"; the virtual machine's username is "cirros" and its password is "cubswin:)".
Access the virtual machine through the console
Apply for a "Floating IP" (you can also use the "--floating-ip-address" parameter to specify the IP to apply for):
$ openstack floating ip create public1
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| created_at          | 2018-01-20T07:00:02Z                 |
| description         |                                      |
| fixed_ip_address    | None                                 |
| floating_ip_address | 192.168.162.50                       |
| floating_network_id | 3ba4cc17-f7de-4a3a-b924-c2d5c7f877dc |
| id                  | 28b6b31c-485b-486c-9b1e-f747b3ebdbaa |
| name                | 192.168.162.50                       |
| port_id             | None                                 |
| project_id          | c4f85d20c16a4e0eb0a43e6cb6e52a34     |
| revision_number     | 0                                    |
| router_id           | None                                 |
| status              | DOWN                                 |
| updated_at          | 2018-01-20T07:00:02Z                 |
+---------------------+--------------------------------------+
Add this "Floating IP" to the demo1 virtual machine:
$openstack server add floating ip demo1 192.168.162.50
View the demo1 virtual machine:
$ openstack server list
+--------------------------------------+-------+--------+-----------------------------------+--------+---------+
| ID                                   | Name  | Status | Networks                          | Image  | Flavor  |
+--------------------------------------+-------+--------+-----------------------------------+--------+---------+
| 54bc1b6b-7930-4d5e-a661-b414c1eb4a2e | demo1 | ACTIVE | demo-net=10.0.0.9, 192.168.162.50 | cirros | m1.nano |
+--------------------------------------+-------+--------+-----------------------------------+--------+---------+
View the existing network namespaces:
$ ip netns
qrouter-8ae967a2-72e6-4b6a-a7f8-a2349e4aa0d1
qdhcp-331baf0c-f09d-44a0-a59f-74372ee2da95
Verify that the network is working:
# Ping the virtual machine's IP on the "public1" network from the router's namespace.
$ ip netns exec qrouter-8ae967a2-72e6-4b6a-a7f8-a2349e4aa0d1 ping 192.168.162.50
# Ping the virtual machine's IP on the "demo-net" subnet from the DHCP namespace.
$ ip netns exec qdhcp-331baf0c-f09d-44a0-a59f-74372ee2da95 ping 10.0.0.9
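Since the "ip netns exec" prefix is long, a small wrapper function can save typing; the name "in_router_ns" is my own, not from the article:

```shell
# Convenience wrapper (assumed name) around the long "ip netns exec" prefix.
# The namespace name is the qrouter namespace listed by "ip netns" above.
ROUTER_NS=qrouter-8ae967a2-72e6-4b6a-a7f8-a2349e4aa0d1
in_router_ns() { ip netns exec "$ROUTER_NS" "$@"; }

# Usage (requires root and an existing namespace):
# in_router_ns ping -c 3 192.168.162.50
# in_router_ns ssh cirros@192.168.162.50
```

The same pattern works for the qdhcp namespace.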
Access the virtual machine through its "Floating IP" (username "cirros", password "cubswin:)"):
$ ip netns exec qrouter-8ae967a2-72e6-4b6a-a7f8-a2349e4aa0d1 ssh cirros@192.168.162.50
Access the virtual machine through the subnet IP:
$ ip netns exec qdhcp-331baf0c-f09d-44a0-a59f-74372ee2da95 ssh cirros@10.0.0.9
Access the virtual machine through the "neutron-openvswitch-agent" container:
$ docker exec -it neutron_openvswitch_agent bash
$ ssh cirros@192.168.162.50

After reading this article, I believe you have some understanding of how to use Kolla-Ansible to deploy OpenStack Pike on a single CentOS 7 node. Thank you for reading!