This article explains how to build a PaaS cloud platform based on a microservice architecture and Docker container technology, focusing on how the microservice architecture is implemented.
The goal of building a PaaS cloud platform on a microservice architecture and Docker container technology is to give our developers a set of services for rapid development, deployment, operation and maintenance management, and a continuous development and continuous integration pipeline. The platform provides infrastructure, middleware, data services, cloud servers and other resources; developers only need to write the business code, submit it to the platform's code repository and do some necessary configuration, and the system builds and deploys automatically, achieving agile development and rapid iteration of applications. In terms of system architecture, the PaaS cloud platform is divided into three main parts: microservice architecture, Docker container technology and DevOps. This article focuses on the implementation of the microservice architecture.
Implementing microservices from scratch requires a great deal of infrastructure work, which is clearly unrealistic for many companies. Fortunately, the industry already has very good open source frameworks we can draw on. At present, the more mature microservice frameworks include Netflix OSS, Spring Cloud and Alibaba's Dubbo. Spring Cloud is a complete microservice framework built on Spring Boot that provides the components needed to develop microservices; used together with Spring Boot, it makes developing cloud services under a microservice architecture very convenient. Spring Cloud contains many sub-projects, of which Spring Cloud Netflix is one, and our microservice architecture design uses many of its components. The Spring Cloud Netflix project has not been around for long and there is relatively little documentation about it; the blogger had to wade through a great deal of English documentation while studying the framework, which was quite painful. Readers who have just come into contact with this framework may not know how to put together a microservice application architecture, so below we introduce the process we followed and the frameworks and components needed to support the microservice architecture.
To show the composition and principles of the microservice architecture directly, the blogger drew a system architecture diagram, as follows:
As can be seen from the figure above, the access path of a microservice request is: external request → load balancer → service gateway (Gateway) → microservice → data service / message service. Both the service gateway and the microservices use service registration and discovery to invoke the services they depend on, and each service cluster obtains its configuration information from the configuration center service.
Service Gateway (Gateway)
The gateway is the door between external systems (such as client browsers, mobile devices, etc.) and the enterprise's internal systems; all client requests reach the backend services through the gateway. To cope with high concurrent access, the service gateway is deployed as a cluster, which means load balancing is required; we use Amazon EC2 as the virtual cloud server and ELB (Elastic Load Balancing) as the load balancer. EC2 can automatically scale its capacity: when user traffic peaks, EC2 adds more capacity to maintain the performance of the virtual hosts. ELB automatically distributes incoming application traffic across multiple instances. To ensure security, client requests are protected with HTTPS encryption, which requires us to offload SSL; we use Nginx to terminate the encrypted requests. After ELB load balancing, an external request is routed to one of the Gateway services in the gateway cluster, which forwards it to the microservices. As the boundary of the internal system, the service gateway has the following basic capabilities:
1. Dynamic routing: dynamically route requests to the required backend service clusters. Although the interior is a complex, distributed mesh of microservices, from the outside the gateway makes the system look like a single service, shielding callers from the complexity of the backend services.
2. Rate limiting and fault tolerance: allocate capacity for each type of request and discard external requests once the number exceeds the threshold, limiting traffic and protecting the backend services from being overwhelmed by heavy load; when an internal service fails, build a response directly at the boundary and handle the fault there instead of forwarding the request into the internal cluster, so that the user experience stays good.
3. Authentication and security control: authenticate every external request, reject requests that fail authentication, and implement anti-crawler protection through access-pattern analysis.
4. Monitoring: the gateway can collect meaningful data and statistics to provide data support for optimizing the backend services.
5. Access logging: the gateway can collect access log information, such as which service was accessed, how the request was handled (including any exceptions), what the result was and how long it took. Analyzing the log contents allows the backend systems to be optimized further.
We use Zuul, an open source component of the Spring Cloud Netflix framework, to implement the gateway service. Zuul processes requests through a series of filters (Filter) of different types; by writing our own filters we can flexibly implement the various gateway functions described above.
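To make this concrete, here is a minimal sketch of a custom Zuul "pre" filter under Spring Cloud Netflix. The header name X-Auth-Token and the rejection logic are illustrative assumptions, not part of the platform described above.

```java
import com.netflix.zuul.ZuulFilter;
import com.netflix.zuul.context.RequestContext;
import javax.servlet.http.HttpServletRequest;
import org.springframework.stereotype.Component;

// Registered as a Spring bean so Spring Cloud Zuul picks it up automatically.
@Component
public class AuthPreFilter extends ZuulFilter {

    @Override
    public String filterType() {
        return "pre";          // run before the request is routed to a backend service
    }

    @Override
    public int filterOrder() {
        return 1;              // position among the other "pre" filters
    }

    @Override
    public boolean shouldFilter() {
        return true;           // apply this filter to every request
    }

    @Override
    public Object run() {
        RequestContext ctx = RequestContext.getCurrentContext();
        HttpServletRequest request = ctx.getRequest();
        // Hypothetical authentication check: reject requests without a token.
        if (request.getHeader("X-Auth-Token") == null) {
            ctx.setSendZuulResponse(false);   // do not forward to the backend service
            ctx.setResponseStatusCode(401);
        }
        return null;                          // the return value is currently ignored by Zuul
    }
}
```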
Service registration and discovery
Because a microservice architecture is a mesh of fine-grained services, each with a single responsibility, that communicate through lightweight mechanisms, it introduces the problem of service registration and discovery: a service provider needs to register and report its address, and a service caller must be able to discover the target service. In our microservice architecture we use the Eureka component for service registration and discovery. Every microservice is configured with the Eureka server's information, registers with the Eureka server and sends heartbeats regularly for health checks. By default Eureka sends a heartbeat every 30 seconds to indicate that the service is still alive; the heartbeat interval can be changed through Eureka's configuration parameters. After receiving the last heartbeat from a service instance, the Eureka server waits 90 seconds by default (also configurable) before declaring the instance dead (that is, after three consecutive missed heartbeats), and it clears the instance's registration only if self-protection mode is turned off. Self-protection mode means that when a network partition causes Eureka to lose too many services in a short time, it enters a protective state in which instances that have stopped sending heartbeats for a long time are not deleted. Self-protection mode is on by default and can be turned off with a configuration parameter.
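As an illustration, here is a minimal sketch of a microservice registering itself with Eureka under Spring Cloud Netflix. The application class name, the Eureka address and the timing values in the comments are illustrative assumptions that mirror the defaults mentioned above.

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.EnableEurekaClient;

@SpringBootApplication
@EnableEurekaClient   // register with the Eureka server configured in application.yml
public class OrderServiceApplication {
    public static void main(String[] args) {
        SpringApplication.run(OrderServiceApplication.class, args);
    }
}

// application.yml (shown as a comment to keep the example in one file; values are assumptions):
//   eureka:
//     client:
//       serviceUrl:
//         defaultZone: http://eureka-node1:8761/eureka/    # address of the Eureka cluster
//     instance:
//       lease-renewal-interval-in-seconds: 30              # heartbeat every 30s (default)
//       lease-expiration-duration-in-seconds: 90           # evict after 90s without a heartbeat (default)
//
// On the Eureka server, self-protection mode can be switched off with:
//   eureka.server.enable-self-preservation: false
```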
Eureka itself is deployed as a cluster (the deployment of a Eureka cluster is described in detail in another of the blogger's articles). All Eureka nodes in the cluster regularly and automatically synchronize the microservice registrations, which keeps the registration information consistent across all Eureka nodes. So how does a Eureka node find the other nodes in the cluster? We use a DNS server to hold the association between all Eureka nodes, so in addition to deploying the Eureka cluster we also need to set up a DNS server.
When the gateway forwards an external request, or when backend microservices call each other, the caller looks up the target service's registration information on the Eureka server, finds the target service and invokes it; together this forms the whole process of service registration and discovery. Eureka has a large number of configuration parameters, up to hundreds, which the blogger will explain in detail in another article.
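For illustration, the following sketch shows how a caller can look up the registered instances of a target service through Spring Cloud's DiscoveryClient abstraction, backed here by Eureka. The service id "user-service" is an assumption for the example.

```java
import java.util.List;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cloud.client.ServiceInstance;
import org.springframework.cloud.client.discovery.DiscoveryClient;
import org.springframework.stereotype.Component;

@Component
public class UserServiceLocator {

    @Autowired
    private DiscoveryClient discoveryClient;

    /** Return the URI of one registered instance of "user-service", or null if none is up. */
    public String firstUserServiceUri() {
        List<ServiceInstance> instances = discoveryClient.getInstances("user-service");
        return instances.isEmpty() ? null : instances.get(0).getUri().toString();
    }
}
```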
Microservice deployment
Microservices are a series of fine-grained services, each with a single responsibility; they split our business into independent service units that scale well and are loosely coupled. Different microservices can be developed in different languages, and each service handles a single piece of business. Microservices can be divided into front-end services (also called edge services) and back-end services (also called middle services): the front-end services aggregate and tailor the back-end services as needed and expose them to different external devices (PC, phone, etc.). All services register with the Eureka server on startup, and there can be complex dependencies between them. When the gateway forwards an external request to a front-end service, the target service is found and invoked by querying the service registry, and the same applies when a front-end service calls a back-end service; a single request may involve several services calling each other. Because each microservice is deployed as a cluster, load balancing is needed whenever services call each other, so every service contains an LB component for client-side load balancing.
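The LB component in our stack is Ribbon (listed at the end of this article). Below is a minimal sketch of how a service might declare a @LoadBalanced RestTemplate so that calls to a logical service name are spread across that service's registered instances; the service name "account-service" and the endpoint path are illustrative assumptions.

```java
import org.springframework.cloud.client.loadbalancer.LoadBalanced;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.client.RestTemplate;

@Configuration
public class RibbonClientConfig {

    @Bean
    @LoadBalanced   // resolve service names through the registry and balance across instances
    public RestTemplate restTemplate() {
        return new RestTemplate();
    }
}

// Example usage inside a front-end (edge) service:
//   String body = restTemplate.getForObject("http://account-service/accounts/42", String.class);
```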
Microservices run as images inside Docker containers, and Docker container technology makes service deployment simple and efficient. With traditional deployment, the runtime environment has to be installed on every server; with a large number of servers this is an extremely heavy task, and once the runtime environment changes it all has to be reinstalled, which is simply catastrophic. With Docker, we only need to build a new image from the required base image (JDK, etc.) plus the microservice, deploy that final image and run it in a Docker container, which is simple, efficient and allows services to be deployed quickly. Multiple microservices can run in each Docker container, the Docker containers are deployed as a cluster, and we use Docker Swarm to manage them. We build an image repository to store all the base images and the final delivery images, and manage all images through this repository.
Service fault tolerance
There are complex dependencies between microservices, and a single request may depend on several backend services, which in real production may fail or respond slowly. In a high-traffic system, once a service starts to lag it can exhaust system resources in a short time and drag down the whole system, so it is catastrophic if a service's faults cannot be isolated and tolerated. In our microservice architecture we use the Hystrix component for fault tolerance. Hystrix is an open source Netflix component that provides flexible fault-tolerance protection for services through its circuit breaker, isolation, fallback and rate-limiting mechanisms, ensuring the stability of the system.
1. Circuit breaker: the principle is similar to an electrical fuse: when the circuit shorts, the fuse blows to protect the circuit from catastrophic damage. When a service is failing or responding slowly and the trip conditions are met, the caller opens the circuit breaker, executes the fallback logic and returns immediately, instead of continuing to call the service and dragging the system down further. By default the circuit breaker trips when the error rate of calls to the service exceeds 50%. After the service has been cut off for a period of time, the breaker enters a half-open state, allowing a small number of trial requests through; if the calls still fail it returns to the open state, and if they succeed the circuit breaker closes again.
2. Isolation: Hystrix uses thread isolation by default; different services use different thread pools and do not affect one another. When a faulty service exhausts its own thread pool, the normal operation of other services is not affected, achieving isolation. For example, we can configure a command through andThreadPoolKey to use a thread pool named TestThreadPool, isolating it from commands that use other thread pools.
3. Fallback: the fallback mechanism is a way of tolerating a service failure, similar in spirit to exception handling in Java. You simply extend HystrixCommand and override the getFallback() method, writing the handling logic there, such as throwing an exception directly (fail fast), returning null or a default value, or returning backup data. When the service call goes wrong, execution switches to getFallback(). A fallback is triggered in the following situations:
1) the program throws an exception other than HystrixBadRequestException: when a HystrixBadRequestException is thrown, the caller can catch it itself and no fallback is triggered; any other exception triggers the fallback;
2) the call times out;
3) the circuit breaker is open;
4) the thread pool is full.
4. Rate limiting: rate limiting means restricting the number of concurrent accesses to a service by setting the number of concurrent calls allowed per unit of time; requests above the limit are rejected and fall back, preventing the backend service from being overwhelmed.
Hystrix uses the command pattern (HystrixCommand) to wrap the logic of a dependency call, so that the wrapped call automatically comes under Hystrix's resilient fault-tolerance protection. The caller extends HystrixCommand, writes the call logic in run(), and triggers the execution of run() with execute() (synchronous, blocking) or queue() (asynchronous, non-blocking), as in the sketch below.
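The following is a minimal sketch of such a command, covering the thread-pool isolation and fallback points above. The group key, thread pool name, fallback value and the placeholder remote call are illustrative assumptions.

```java
import com.netflix.hystrix.HystrixCommand;
import com.netflix.hystrix.HystrixCommandGroupKey;
import com.netflix.hystrix.HystrixThreadPoolKey;

public class GetAccountCommand extends HystrixCommand<String> {

    private final String accountId;

    public GetAccountCommand(String accountId) {
        super(Setter
                .withGroupKey(HystrixCommandGroupKey.Factory.asKey("AccountGroup"))
                // isolate this command in its own thread pool, as described in the isolation section
                .andThreadPoolKey(HystrixThreadPoolKey.Factory.asKey("TestThreadPool")));
        this.accountId = accountId;
    }

    @Override
    protected String run() {
        // The real dependency call would go here (e.g. an HTTP request to the account service).
        return callAccountService(accountId);
    }

    @Override
    protected String getFallback() {
        // Executed on exception, timeout, open circuit breaker or a full thread pool.
        return "default-account";
    }

    private String callAccountService(String id) {
        // Placeholder for the actual remote call (always fails in this sketch).
        throw new RuntimeException("remote call failed");
    }
}

// Usage: new GetAccountCommand("42").execute()  blocks and returns the fallback on failure;
//        new GetAccountCommand("42").queue()    returns a Future for asynchronous execution.
```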
Dynamic configuration center
A microservice depends on many configuration items, and some configuration parameters may need to be changed dynamically while the service is running, such as adjusting the circuit breaker threshold according to traffic. With the traditional approach, configuration files such as XML or YAML are packaged with the application, so every change requires the code to be resubmitted, packaged and built, a new image to be generated and the service to be restarted, which is clearly unreasonable; we therefore need a dynamic configuration center service to support dynamic configuration of the microservices. We use Spring Cloud's Config Server (configserver) to build the dynamic configuration center. Our microservice code lives in private repositories on a Git server, and all configuration files that need dynamic configuration are kept on the Git server and served by the configserver (the configuration center, itself a microservice); microservices deployed in Docker containers read their configuration dynamically from the Git server. When code in a local Git repository is modified and pushed to the Git server, the server's hook (post-receive, called automatically after the push completes) checks whether any configuration file has been updated; if so, the Git server sends a message through a message queue to the configuration center (the configserver microservice deployed in a container), telling it to refresh the corresponding configuration file. In this way the microservices get the latest configuration information, achieving dynamic configuration.
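As an illustration, here is a minimal sketch of the configserver side and of a client bean that picks up refreshed values, assuming Spring Cloud Config; the Git URI and the property name are illustrative assumptions.

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.config.server.EnableConfigServer;

@SpringBootApplication
@EnableConfigServer   // turns this Spring Boot application into a config server backed by Git
public class ConfigServerApplication {
    public static void main(String[] args) {
        SpringApplication.run(ConfigServerApplication.class, args);
    }
}

// Config-server application.yml (as a comment; the repository URI is an assumption):
//   spring:
//     cloud:
//       config:
//         server:
//           git:
//             uri: https://git.example.com/config-repo.git   # repository holding the config files
//
// On a client microservice, a bean marked with @RefreshScope re-reads its @Value properties
// when the configuration is refreshed (for example after the Git hook notifies the config center):
//
//   @RefreshScope
//   @RestController
//   public class ThresholdController {
//       @Value("${circuit.breaker.threshold:50}")   // hypothetical dynamically adjusted property
//       private int threshold;
//   }
```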
These frameworks and components are the core of our microservice architecture implementation. In actual production we also use many other components, such as logging components and messaging components, chosen according to business needs. In our microservice architecture we drew on many open source components of the Spring Cloud Netflix framework, including Zuul (service gateway), Eureka (service registration and discovery), Hystrix (service fault tolerance) and Ribbon (client-side load balancing). These excellent open source components provide a shortcut to implementing a microservice architecture.