
A case study of deploying the surging distributed micro-service engine based on Docker


This article walks through a case of deploying the surging distributed micro-service engine with Docker. Many readers may not be familiar with surging, so the article is shared for reference; hopefully you will find it useful.

1. Preface

Surging has now been open source for a year. After a year of polishing, it has evolved from a distributed micro-service framework originally deployed on Windows into a distributed micro-service engine that can be deployed in Docker, with Rancher used for service orchestration. Business logic is stripped out of the engine and business modules are loaded through a configured path. This fine-grained design makes it much more flexible to separate business concerns and to split or compose aggregation services. The rest of this article shows how to deploy it on Docker.

2. Overview

A container is a vessel that runs an image. An image is a lightweight, standalone, executable package that includes everything needed to run the software: code, runtime, system tools, system libraries and settings.

The program is built into an image and run in a container, so the underlying environment it depends on no longer matters: it can run anywhere, even in a hybrid cloud environment. That is why containers have become popular, and the rise of container technology has gradually brought Docker into view.

So what is Docker?

Docker is an open-source container engine written in Go.

Docker packages everything an application needs to run into isolated containers.

Docker can automate the setup of development and production environments, and quickly build, test and run complex multi-container applications.

Docker can also quickly scale and provision applications across thousands of nodes or containers.

It can run on mainstream Linux systems, Mac and Windows, and ensure that no matter where the software is deployed, it will run properly and get the same results.

Related concepts

Image and Container: you can think of them as a class and its instances, or as the relationship between an ISO system image and a virtual machine. Different images contain different software or environments, and they can be managed with a Dockerfile (a file written in Docker-specific syntax). A container is a small, self-contained system that uses an image as its template and runs independently. One image can be used to create multiple container instances (see the quick illustration after this list).

Registry: an image repository such as Docker Hub, which provides a huge number of images for everyone to pull and use.

Dockerfile: a file that combines image-building commands so that an image can be built automatically.
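As a quick illustration of the image/container relationship, one image can back several independent containers (the nginx image here is only a stand-in example and is not part of the surging deployment):

# docker pull nginx:latest

# docker run -d --name web1 nginx:latest

# docker run -d --name web2 nginx:latest

# docker ps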

3. Environment building

System environment

Host: Windows 10 Professional Edition

Linux server: CentOS 7 (kernel 3.10)

1. Install Docker

Docker requires the CentOS kernel version to be 3.10 or higher. Check the prerequisites first to verify that your version of CentOS supports Docker.

Check your current kernel version with the uname -r command:

[root@runoob] # uname -r
3.10.0-862.el7.x86_64

Install the docker package:

# yum install docker-engine

After the installation completes, use the docker version command to check that it succeeded, as shown in the following figure.
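For example, the following prints the client and server versions once the daemon is running:

# docker version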

Start Docker

systemctl start docker

View the docker information, as shown below

systemctl status docker

Test run hello-world

# docker run hello-world

2. Install rancher

Download the image

docker pull rancher/server

Start rancher

docker run -d --restart=always -p 8080:8080 rancher/server

After the installation is successful, access through http://ip:8080, as shown in the following figure
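If the page does not load right away, the container may still be initializing; an optional way to confirm it is running from the command line (not part of the original steps) is:

# docker ps --filter ancestor=rancher/server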

3. Install rabbitmq

Download the image

docker pull rabbitmq:management

The command is as follows:

# docker run -d --name rabbitmq --publish 5672:5672 --publish 4369:4369 --publish 25672:25672 --publish 15671:15671 --publish 15672:15672 rabbitmq:management

After the installation is successful, access through http://ip:15672, as shown in the following figure
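Optionally, before opening the management UI, the broker status can be checked inside the container (using the container name rabbitmq from the command above):

# docker exec rabbitmq rabbitmqctl status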

4. Install Consul

Download the image

# docker pull docker.io/consul:latest

Create a Consul configuration

# vim /opt/platform/consul/server.json

{
  "datacenter": "quark-consul",
  "data_dir": "/consul/data",
  "server": true,
  "ui": true,
  "bind_addr": "192.168.249.162",
  "client_addr": "192.168.249.162",
  "bootstrap_expect": 1,
  "retry_interval": "10s",
  "rejoin_after_leave": false,
  "skip_leave_on_interrupt": true
}

Configuration description

In the official instructions, part of this configuration is passed as parameters to docker run; here the parameters are written into the configuration file instead.

datacenter: the data center name

data_dir: the data storage directory

server: run in server mode

ui: enable the web UI

bind_addr: the address bound for internal cluster communication. The default is 0.0.0.0; if there are multiple network cards you need to specify one, otherwise startup will fail with an error.

client_addr: the address the client API binds to. The default is 127.0.0.1.

retry_join: addresses of cluster members to (re)join

retry_interval: the retry interval

rejoin_after_leave: try to rejoin the cluster after leaving

skip_leave_on_interrupt: whether Ctrl+C skips the graceful leave; since we run in container mode this does not matter much, so just set it to true.

Start consul-server

The command is as follows:

docker run -d --net=host --name consul -v /opt/platform/consul/config:/consul/config -v /opt/platform/consul/data:/consul/data consul agent

After the installation is successful, access through http://ip:8500, as shown in the following figure
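Besides the UI, Consul's HTTP API offers an optional sanity check (using the client_addr from the configuration above):

# curl http://192.168.249.162:8500/v1/status/leader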

5. Install the .NET Core 2.1 runtime

Download the image

# sudo docker pull microsoft/dotnet:2.1-runtime

Start

# sudo docker run -it microsoft/dotnet:2.1-runtime
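Optionally, the runtime version shipped in the image can be inspected with a throwaway container:

# sudo docker run --rm microsoft/dotnet:2.1-runtime dotnet --info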

4. Deployment procedure

1. Deploy the surging engine without referencing any business modules, and create a new Dockerfile

FROM microsoft/dotnet:2.1-runtime
WORKDIR /app
COPY . .
ENTRYPOINT ["dotnet", "Surging.Services.Server.dll"]

Publish the program

dotnet publish -r centos.7-x64 -c release

Build the image using the Dockerfile

# docker build -t surgingserver .

Start

# docker run --name surgingserver --env Mapping_ip=192.168.249.162 --env Mapping_Port=198 --env RootPath=/home/fanly --env Register_Conn=192.168.249.162:8500 --env EventBusConnection=172.17.0.4 --env Surging_Server_IP=0.0.0.0 -v /home/fanly:/home/fanly -it -p 198:198 surgingserver

Configuration description

Mapping_ip: the mapped external IP (environment variable)

Mapping_Port: the mapped external port (environment variable)

RootPath: the root path where business modules are stored (environment variable)

Register_Conn: the registry address (environment variable)

EventBusConnection: the event bus address (environment variable)

Surging_Server_IP: the container's internal IP (environment variable)

After startup, Rancher shows the container as in the following figure.

For convenience, the host directory is mounted. Microsurging is the distributed micro-service engine, Modules is the business module directory, and surgingapi is the gateway.
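Since the host directory is mounted into the container, the modules dropped into it can be verified from inside the running engine; an optional check using the container name and mount path from the run command above:

# docker exec -it surgingserver ls /home/fanly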

2. Deploy the surging gateway and create a new Dockerfile

FROM microsoft/dotnet:2.1-runtime
WORKDIR /app
COPY . .
ENTRYPOINT ["dotnet", "Surging.ApiGateway.dll"]

Publish the program

dotnet publish -r centos.7-x64 -c release

Build the image using the Dockerfile

docker build -t surgingapi .

Start

The command is as follows:

# docker run --name surgingapi -it -p 729:729 --env Register_Conn=192.168.249.162:8500 surgingapi

After startup, Rancher shows the container as in the following figure.

The gateway can then be accessed through http://ip:729

You can then test the gateway through postman, as shown in the following figure
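The same check can be done from the command line instead of Postman; the exact service routes depend on the business modules that have been loaded, so this simply hits the gateway root:

# curl -i http://192.168.249.162:729/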

These are all the contents of the article on deploying the surging distributed micro-service engine based on Docker. Thank you for reading! Hopefully you now have a working understanding of the process; if you want to learn more, you are welcome to follow the industry information channel.
