2025-03-29 Update From: SLTechnology News&Howtos
Shulou (Shulou.com) 06/03 Report
Four-layer model of the microservice ecosystem
Layer 1: Hardware Layer
The hardware layer is the bottom layer of the microservices ecosystem. This is where the physical server machines live, and they are the foundation on which all microservices run. These servers sit in racks in data centers, fed by power supply systems and cooled by expensive cooling systems. Some are owned outright by companies; others are rented from cloud service providers such as AWS EC2, GCP, or Alibaba Cloud.
Buying your own servers or renting cloud virtual machines is not an easy choice: purchase cost, availability, reliability, and operating cost all have to be weighed.
Managing servers is one of the responsibilities of the hardware layer. Each server needs a standard operating system, and there is no universal answer to which one to use; it depends on the application to be built, the language used to build it, and the software packages and tools the microservices require. Mainstream microservices ecosystems typically use Linux variants such as CentOS, Debian, or Ubuntu, but a company on the .NET platform will obviously make different choices.
Installing the operating system and configuring hardware resources is the first step in provisioning a server. After the operating system is installed, each host must be set up with a configuration management tool (such as Ansible, Chef, or Puppet) to install applications and apply the necessary configuration.
Host-level monitoring (with a tool such as Nagios) and host-level logging are also necessary. When a host misbehaves (disk errors, network or CPU overload), problems can then be diagnosed easily and resolved quickly.
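As an illustration of such host-level checks, here is a minimal Python sketch that samples two of the signals mentioned above (disk usage and CPU load) using only the standard library. The threshold values are invented for illustration and are not tied to Nagios or any particular monitoring tool.

```python
import os
import shutil

def host_health(path="/", disk_threshold=0.9, load_threshold=None):
    """Return a dict of simple host-level health signals.

    Thresholds here are illustrative defaults, not recommendations
    from any specific monitoring product.
    """
    usage = shutil.disk_usage(path)
    disk_used = usage.used / usage.total

    # os.getloadavg is only available on Unix; degrade gracefully elsewhere.
    try:
        load_1min = os.getloadavg()[0]
    except (AttributeError, OSError):
        load_1min = None

    if load_threshold is None:
        # A common heuristic: flag load above the number of CPU cores.
        load_threshold = os.cpu_count() or 1

    return {
        "disk_used_fraction": round(disk_used, 3),
        "disk_ok": disk_used < disk_threshold,
        "load_1min": load_1min,
        "load_ok": load_1min is None or load_1min < load_threshold,
    }
```

A real agent would run such checks on a schedule and ship the results to a central monitoring system rather than returning them in-process.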
Layer 2: Communication Layer
The communication layer touches every other layer of the ecosystem: because interactions between microservices happen at multiple layers, it is hard to draw a clear boundary between the communication layer and the rest. Although the boundary is fuzzy, the elements that belong to this layer are well defined. They typically include networking, DNS, RPC and API endpoints, service discovery, service registration, and load balancing.
RPC, endpoints, and messaging
Microservices interact with one another through remote procedure calls (RPC) or messaging, sent over the network to the API endpoints of other microservices (with messaging, messages go to a message broker, which routes them). The underlying principle is the same: using a specific protocol, one microservice sends data in a specific format over the network either to another microservice's API endpoint or to a message broker, which ensures the data is routed to the right microservice's API endpoint.
Microservices can communicate in several ways, the first and most common being HTTP+REST/Thrift. In this style, services interact over the Hypertext Transfer Protocol (HTTP), sending requests to and receiving responses from specific REST endpoints (using the various HTTP methods such as GET, POST, and so on) or Thrift endpoints. The data sent is usually in JSON (or Protocol Buffers) format.
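To make the HTTP+REST style concrete, here is a self-contained Python sketch in which one toy service exposes a JSON endpoint over HTTP and a second service acts as its client. The `/items/42` endpoint and the payload fields are invented for illustration.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class InventoryHandler(BaseHTTPRequestHandler):
    """A toy 'inventory' microservice with one REST endpoint."""

    def do_GET(self):
        if self.path == "/items/42":
            body = json.dumps({"id": 42, "stock": 7}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # keep the demo quiet
        pass

# Bind to port 0 so the OS picks any free port.
server = HTTPServer(("127.0.0.1", 0), InventoryHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# A second service acting as the HTTP client.
url = f"http://127.0.0.1:{server.server_port}/items/42"
with urlopen(url) as resp:
    payload = json.loads(resp.read())

server.shutdown()
```

In production the client would of course resolve the server's address via service discovery rather than hard-coding it, as discussed later in this layer.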
The second style of communication is messaging. Message passing is asynchronous (non-blocking) but relatively complex. It works like this: a microservice sends data (messages) over the network (HTTP or otherwise) to a message broker, which routes the messages to other microservices.
There are also several modes of messaging, the two most popular being publish-subscribe and request-response. In the publish-subscribe pattern, a client subscribes to a topic and receives every message a publisher sends to that topic. The request-response pattern is more direct: a client sends a request to a service (or message broker), which responds to that request. Some messaging middleware, such as Apache Kafka, supports both patterns.
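The two patterns can be sketched with a toy in-memory broker in Python. Real middleware such as Apache Kafka adds persistence, partitioning, and delivery guarantees that this illustration deliberately omits.

```python
from collections import defaultdict

class Broker:
    """Minimal in-memory broker showing both messaging patterns."""

    def __init__(self):
        self._subscribers = defaultdict(list)  # topic -> list of callbacks
        self._responders = {}                  # topic -> single handler

    # --- publish-subscribe: every subscriber gets every message ---
    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        for callback in self._subscribers[topic]:
            callback(message)

    # --- request-response: one handler answers each request ---
    def register_responder(self, topic, handler):
        self._responders[topic] = handler

    def request(self, topic, payload):
        return self._responders[topic](payload)

broker = Broker()

received = []
broker.subscribe("orders", received.append)
broker.publish("orders", {"order_id": 1})

broker.register_responder("price", lambda item: {"item": item, "price": 9.99})
reply = broker.request("price", "widget")
```

Note that this broker delivers synchronously in-process; the asynchrony (and the failure modes discussed next) come from putting a network and a queue between publisher and subscriber.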
Message passing has several drawbacks to be aware of. Messaging is no more scalable than HTTP+REST, so keep this in mind if your system must scale. Messaging is also not change-friendly because it is centralized, which turns message queues and brokers into failure points for the entire ecosystem. Its asynchronous nature can lead to race conditions in concurrent environments and, if handled carelessly, to infinite loops. If these problems are handled properly, however, messaging can be as stable and efficient as a synchronous solution.
Service Discovery, Service Registration, and Load Balancer
In a monolithic architecture, all traffic is sent to one load balancer and distributed across the application servers. In a microservices architecture, traffic is routed to a large number of different applications and then distributed to the servers where specific microservices are deployed. To do this efficiently, a microservices architecture needs three technologies at the communication layer: service discovery, service registration, and load balancing.
In general, if microservice A needs to make a request to microservice B, microservice A needs to know B's IP address and port. The communication layer must know the IP addresses and ports of all microservices in order to route requests correctly. This problem is solved by service discovery (using systems such as etcd, Consul, Hyperbahn, or ZooKeeper), which ensures that requests are routed where they are supposed to go and, importantly, only to working instances. Service discovery requires a service registry, which records the IP addresses and ports of every microservice in the ecosystem.
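The register/discover contract can be sketched as follows; real systems such as etcd, Consul, and ZooKeeper add leases, heartbeats, and consistency guarantees that this toy version omits, and the service name and addresses below are invented.

```python
import random

class ServiceRegistry:
    """Toy service registry: instances register (ip, port) and report health."""

    def __init__(self):
        self._instances = {}  # service name -> {(ip, port): healthy?}

    def register(self, name, ip, port):
        self._instances.setdefault(name, {})[(ip, port)] = True

    def mark_unhealthy(self, name, ip, port):
        self._instances[name][(ip, port)] = False

    def discover(self, name):
        """Return one healthy (ip, port) for the named service."""
        healthy = [addr for addr, ok in self._instances.get(name, {}).items() if ok]
        if not healthy:
            raise LookupError(f"no healthy instance of {name!r}")
        return random.choice(healthy)

registry = ServiceRegistry()
registry.register("service-b", "10.0.0.5", 8080)
registry.register("service-b", "10.0.0.6", 8080)
registry.mark_unhealthy("service-b", "10.0.0.6", 8080)

# Microservice A asks the registry where service B lives;
# only the healthy instance can be returned.
addr = registry.discover("service-b")
```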
In a microservices architecture, ports and IP addresses change as microservices scale out and are redeployed (for example, when using a hardware abstraction layer such as Apache Mesos). In that case, consider assigning a static port to each microservice.
Unless all of your microservices are deployed on a single instance (which is unlikely), you will need load balancers at the communication layer. Put simply, if you have 10 instances of a microservice, a load balancer (software or hardware) ensures that traffic is distributed (evenly) across all of them. A load balancer is needed wherever requests are forwarded, which means a large microservices ecosystem will contain multiple layers of load balancing. Commonly used load balancers include Amazon Web Services Elastic Load Balancer, Netflix's Eureka, and Nginx.
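One common distribution strategy, round-robin, can be sketched in a few lines; production load balancers layer health checks, weighting, and connection draining on top of a core like this, and the instance addresses are invented for illustration.

```python
import itertools

class RoundRobinBalancer:
    """Hand each successive request to the next instance in a fixed cycle."""

    def __init__(self, instances):
        self._cycle = itertools.cycle(instances)

    def next_instance(self):
        return next(self._cycle)

balancer = RoundRobinBalancer(["10.0.0.1:80", "10.0.0.2:80", "10.0.0.3:80"])

# Six requests across three instances: each instance gets exactly two.
assigned = [balancer.next_instance() for _ in range(6)]
```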
Layer 3: Application Platform Layer
The application platform layer is the third layer of the microservices ecosystem and contains all the internal tools and services that are independent of any particular microservice. This layer holds centralized tools and services that span the entire ecosystem; with them in place, microservice development teams can focus on building microservices.
A good application platform needs to provide developers with a set of internal self-service tools, including standardized development processes, centralized automated build and release systems, automated testing, standardized and centralized deployment scenarios, and centralized logging and microservice-level monitoring. The details of these elements will not be discussed here, but we will briefly introduce a few of them to illustrate some basic concepts.
Internal self-service development tools
Many things can be categorized as internal self-service development tools, and whether something belongs in that category depends not only on developers' tool needs but also on the overall abstraction and complexity of the infrastructure and ecosystem. Deciding which tools to build starts with dividing up areas of responsibility and then evaluating the tasks developers must accomplish to design, build, and maintain their services.
In a company that already uses a microservices architecture, responsibilities should be assigned to engineering teams with care. The simplest approach is to create an engineering sub-team for each layer of the microservices ecosystem. Each sub-team handles everything at its layer: the operations team for layer 1, the infrastructure team for layer 2, the application platform team for layer 3, and the microservices teams for layer 4.
In this organizational structure, engineers working at the top need to configure something at the bottom using self-service tools. For example, the messaging team should provide a self-service tool for other developers to use when microservices developers need to configure messaging systems for their services without having to learn too much about the complexities of messaging systems.
There are good reasons to use these centralized self-service tools. In a diverse microservices ecosystem, the average engineer on one team has little or no knowledge of another team's systems and services; no one can be an expert in everything. Each developer knows his or her own part, but only collectively do they know the whole system. So build user-friendly interfaces for each part of the ecosystem and train developers to use those tools, rather than trying to make every developer understand the intricacies of every tool and service. Put everything in a black box and provide detailed documentation.
The second reason for these tools is that you do not want people from other teams making critical changes to your services and systems, because they might cause you trouble. This is especially true for services at the lower levels (layers 1, 2, and 3). Asking outsiders to make changes at those levels, or demanding (or worse, expecting) them to be experts there, can be disastrous. Take configuration management as an example: a microservices developer with no relevant expertise who changes a system configuration can cause a large-scale outage, because the change may affect far more than that developer's own service.
Development cycle
Streamlining, standardizing, and automating the development process, whether developers are modifying existing microservices or building new ones, can greatly improve development efficiency. Several things must be in place at layer 3 of the microservices ecosystem to make stable and reliable development possible.
The first is a centralized version control system that stores all code and supports tracking, versioning, and searching. This can be done with tools such as GitHub or self-hosted git or svn repositories, integrated with collaboration tools such as Phabricator to simplify code maintenance and review.
The second is a stable and efficient development environment. Implementing such an environment in a microservices ecosystem is notoriously difficult because of the complex dependencies between microservices, but it is fundamental and cannot be avoided. Some engineering organizations prefer local development (on developers' computers), but this leads to poor deployments because developers cannot see how their changes will behave once deployed to production. The most reliable way to build a development environment is to create a mirror of production (separate from the pre-production, canary, and production environments themselves) that contains the full chain of complex dependencies.
Test, build, package and release
Testing, building, packaging, and releasing during development should be standardized and centralized as much as possible. After development, when code changes are committed, test cases need to be run, and then the new version to be released is automatically built and packaged. Continuous integration tools can come in handy at this point, and off-the-shelf solutions such as Jenkins are both functional and easy to use. These tools automate the entire process, leaving little room for human error.
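The commit-test-build-package flow that a CI tool such as Jenkins automates can be sketched as a chain of steps. The step names, artifact naming scheme, and commit hash below are invented for illustration; a real pipeline would shell out to test runners and packagers rather than call Python functions.

```python
# Each pipeline stage consumes the previous stage's output.
def run_tests(commit):
    """Pretend to run the test suite for a commit."""
    return {"commit": commit, "tests_passed": True}

def build(test_report):
    """Only build artifacts from commits whose tests passed."""
    assert test_report["tests_passed"], "never build on failing tests"
    return {"commit": test_report["commit"], "binary": "app-1.0"}

def package(build_output):
    """Name the release artifact after the binary and short commit hash."""
    return f"{build_output['binary']}+{build_output['commit'][:7]}.tar.gz"

PIPELINE = [run_tests, build, package]

def run_pipeline(commit):
    artifact = commit
    for step in PIPELINE:
        artifact = step(artifact)
    return artifact

release = run_pipeline("9f8e7d6c5b4a39281716051403020100aabbccdd")
```

The value of centralizing this chain is exactly what the paragraph above describes: every service's release goes through the same automated gates, leaving little room for human error.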
Deployment pipeline
After development, testing, building, packaging, and release, the deployment pipeline is the process that carries new code to production servers. In a microservices ecosystem, deployment can become extremely complex very quickly, and hundreds of deployments per day are not uncommon. Development teams need to build tools for deployment and standardize the deployment process.
Logging and monitoring
All microservices should log important information about their requests and responses. Because microservices change so quickly, if something goes wrong with the system, it becomes difficult to reconstruct the state of the system at that time, making it difficult to reproduce defects in the code. Using microservice-level logs can help developers better understand the state of their services at some point in the past or at the current time. Monitoring key metrics of microservices at the microservice level serves the same purpose: accurate real-time monitoring helps developers understand the status and health of services.
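One common approach to microservice-level logging is to emit each request/response pair as a single structured JSON line, which makes past service state reconstructable by tooling. The service name, handler, and log fields below are hypothetical.

```python
import io
import json
import logging
import time

# Route this demo logger to an in-memory stream; a real service would
# write to stdout or a log shipper instead.
stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter("%(message)s"))
log = logging.getLogger("checkout-service")  # hypothetical service name
log.setLevel(logging.INFO)
log.addHandler(handler)
log.propagate = False

def handle_request(request_id, handler_fn, payload):
    """Run a request handler and emit one structured log line for it."""
    start = time.monotonic()
    response = handler_fn(payload)
    log.info(json.dumps({
        "request_id": request_id,
        "payload": payload,
        "response": response,
        "duration_ms": round((time.monotonic() - start) * 1000, 2),
    }))
    return response

handle_request("req-1", lambda p: {"total": p["qty"] * 5}, {"qty": 3})
entry = json.loads(stream.getvalue())
```

Because each line is self-describing JSON, the same records feed both debugging (reconstructing state at a point in time) and the key-metric monitoring mentioned above.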
Layer 4: Microservices
The top layer of the microservices ecosystem is the microservices layer. This is where the microservices and everything specific to them live, fully abstracted away from the lower infrastructure layers: hardware, deployment, service discovery, load balancing, and communication. The only part not abstracted away is the configuration done through the self-service tools.
In software engineering, application configuration is often centralized, and the configuration for a given tool (configuration management, resource isolation, or deployment tools) is stored with that tool. For example, custom deployment configuration for an application is typically kept with the deployment tool's code rather than with the application's code. This works fine for monolithic architectures or small microservices ecosystems, but in a large ecosystem with many microservices and internal tools (each with its own custom configuration) it causes confusion: microservices teams at the top must modify tool code at the bottom, and they often lose track of where configuration information is (or is not) kept. To solve this, microservice-specific configuration can live in each microservice's own code base and be made accessible to the underlying tools and systems.
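A minimal sketch of this arrangement: the service keeps its deployment settings in a file in its own repository, and the underlying deployment tool reads that file instead of carrying per-service settings itself. The file name `service-config.json` and its fields are invented for illustration.

```python
import json
import pathlib
import tempfile

# Stand-in for a microservice's repository checkout.
repo = pathlib.Path(tempfile.mkdtemp())

# The service team owns this file and versions it with the service code.
(repo / "service-config.json").write_text(json.dumps({
    "service": "checkout",
    "deploy": {"instances": 4, "healthcheck_path": "/health"},
}))

def load_deploy_config(repo_path):
    """What a deployment tool might do: read config out of the service repo."""
    config = json.loads((repo_path / "service-config.json").read_text())
    return config["deploy"]

deploy = load_deploy_config(repo)
```

The design choice here is ownership: configuration changes ship through the service's own review and release process, so the tool teams never have to track per-service settings.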