What are the basic knowledge points of microservices?

This article explains the basic knowledge points of microservices. The ideas introduced here are simple and practical, so let's take a look.

Microservice fundamentals

I think the microservice architecture evolved from the following factors:

In the early 2000s, a large number of startups used frameworks like Rails to rapidly grow their businesses and teams. After running into difficulties, they began to think about what could reasonably be done within a single language and codebase.

The growth of cloud computing made it much more convenient to get server instances up and running software.

We broadly came to accept distributed system architectures, and in particular solving problems with network calls.

Scaling pressure, easy access to new hardware, and distributed systems built on network calls combine into what I think of as "microservices for CRUD". If you have grown a company close to success but are having trouble growing your technical/engineering team, breaking a large monolith into a service-style architecture can go a long way. That is my personal experience.

The main points of microservices look like this:

Services support independent scaling. If one part of the system has a higher load or capacity demand than the others, you can scale just that service to meet it. This is also possible in a monolithic architecture (for example, a Java web app compiled into a WAR and deployed to more instances), but it sometimes becomes complicated.

Services keep failures within a controllable range. When each module of your system can operate independently, splitting them into separate services can improve availability. For example, in a commerce app, if the product search service crashes but the rest of the app can still serve users, that is a good thing. In practice this is much harder than the theory suggests, and people have plenty of bad ideas about microservices here, such as implying that overlap between services has no value. Independently controllable failure domains are not strictly necessary, but they are nice to have, and they are not easy to achieve in a monolithic architecture.

Services should let development teams work independently and effectively on their own modules. Again, you can do this in a monolithic system, because I have done it. But the overall challenge (and a related challenge for services in a monorepo) is that when these "domain" boundaries exist only in source code, engineers struggle with the gap between the theoretical boundaries and the actual code. If I can see all the code and it compiles together into one deliverable (rather than separate services built separately), I tend to treat it as a whole: I can grab code from here and use it over there, fetch data from there and use it here, and so on.

A few notes. People often confuse "monolith" with "monorepo". A monolithic application is one whose code is compiled into a single service deliverable (possibly with additional client deliverables generated along the way). With configuration files you can get that deliverable to do almost anything you can imagine, including all of the service styles above; but if the code is not centralized in one repository, you end up with a large, confusing picture, because teams may build different deliverables with different build tools and configuration files. I still consider this a monolithic architecture.

A monorepo, or monolithic repository, is a model in which a single code repository contains all the code for the system you are building (most likely excluding the source of OSS/external dependencies). The code in the repository makes up multiple independent application deliverables that can be compiled/packaged and tested independently, without building the whole repository. One advantage of a monorepo is that when a shared library changes, developers who depend on it can pick up the change without waiting for other teams to publish and adopt a new library version. The biggest disadvantage of the monorepo model is that few OSS tools support it well, because most OSS tools are not built that way, so significant investment is needed to develop new tooling for a monorepo.

Microservices based on a CRUD application

Before describing how a CRUD application evolves from a monolithic architecture to a microservice architecture, I will describe the architecture needed to build a medium-sized CRUD platform. This platform has a good use case involving "transactions" and "metadata".

Transactions: actions a user has taken that you want to persist, where data consistency is the most valuable property. The Create, Update, and Delete operations in CRUD occur far less frequently than Read.

Metadata: descriptive information shown to users, which can be modified only by internal content creators and rarely by external users (such as reviewers). It is usually highly cacheable, and even a certain amount of temporary inconsistency (showing stale data) can sometimes be tolerated (a small sketch follows below).
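
To make the distinction concrete, here is a minimal sketch, assuming an in-memory cache, a five-minute TTL, and invented function names (none of these details come from the original article): transactional writes go straight to the primary store, while metadata reads go through a cache that tolerates slightly stale data.

```python
import time

# Hypothetical in-memory stand-ins for a primary database and a cache.
PRIMARY_DB = {}          # authoritative store for transactional data
METADATA_CACHE = {}      # product_id -> (value, cached_at); staleness is tolerated
METADATA_TTL_SECONDS = 300


def create_order(order_id, payload):
    """Transactional write: consistency matters, so write straight to the primary store."""
    PRIMARY_DB[order_id] = payload


def get_product_metadata(product_id, load_from_db):
    """Metadata read: serve from cache, refresh only when the entry is older than the TTL."""
    entry = METADATA_CACHE.get(product_id)
    if entry and time.time() - entry[1] < METADATA_TTL_SECONDS:
        return entry[0]                      # possibly slightly stale, which is acceptable here
    value = load_from_db(product_id)         # cache miss or expired: hit the primary store
    METADATA_CACHE[product_id] = (value, time.time())
    return value
```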

What else do CRUD-heavy companies do, especially around analytics? You may want to adjust results frequently based on how users browse the site and other personalization signals. But adjusting results in real time is hard, because you may never collect enough data from users to get the best results, so this is not a first-order concern for the system.

The migration to this architecture is relatively simple:

Identify independent entities. The paper "Life Beyond Txns" (Life Beyond Distributed Transactions) contains many interesting and practical definitions of entities. The sooner you do this in the migration the better; otherwise you will end up having to implement distributed transactions once the system is bloated. You will probably want entity services for products, users, and similar data, and then layer other business-oriented logic and aggregation services on top.

Abstract the logic around each entity and consolidate it into an entity service. Change the data model as little as possible, and redirect callers to the new entity service's APIs.

That's basically it. What you need to do is gather enough of the user-facing functionality, data, and terminology, reshape it into a microservice architecture, and then build new features on top. A small sketch of step two is shown below.
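
As a sketch of step two, assume a hypothetical user entity extracted from the monolith; the service URL, client class, and function names are all invented for illustration. The old call site inside the monolith keeps its signature but delegates to the new entity service's API, and the underlying data model is left unchanged.

```python
import json
from urllib import request

# Hypothetical base URL of the new user entity service; adjust to your deployment.
USER_SERVICE_URL = "http://user-service.internal/api/users"


class UserServiceClient:
    """Thin client for the extracted user entity service."""

    def get_user(self, user_id):
        with request.urlopen(f"{USER_SERVICE_URL}/{user_id}") as resp:
            return json.loads(resp.read())


# Inside the monolith, the old call site keeps its signature but now
# delegates ("redirects") to the entity service instead of querying the
# shared tables directly.
_user_client = UserServiceClient()


def load_user(user_id):
    return _user_client.get_user(user_id)
```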

These services are not classic SOA services, nor are they tiny microservices. Services that own data are heavier. You probably do not want too many of them, because you want to satisfy a user request without too many network hops and, ideally, without distributed transactions.

You probably will not be creating new services every day, especially with a 50-person engineering team and a long-term product roadmap. So you may not want to invest a lot of engineering time in orchestration and tooling for "spinning up a new service at the click of a button."

Deciding how much time to invest in tooling is simple arithmetic: compare the time developers spend adding new services by hand, the time required to implement and maintain the automation, and how many new services you expect to add over time. Obviously, if it is important that people can create tiny services quickly and frequently, you should invest more time and build more tooling. As with all engineering optimization decisions, the answer is not permanently correct; it is correct for the foreseeable future and needs to be re-evaluated regularly. A rough illustration follows.
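
A purely illustrative way to run that comparison (every number below is made up):

```python
# All figures are hypothetical, in engineer-days.
manual_cost_per_service = 3        # hand-rolling deploy config, CI, monitoring for one service
automated_cost_per_service = 0.5   # the same work once tooling exists
tooling_build_cost = 40            # one-off cost to build the automation
tooling_maintenance_per_year = 10  # ongoing cost to keep the automation working

services_added_per_year = 6

yearly_saving = services_added_per_year * (manual_cost_per_service - automated_cost_per_service)
yearly_net = yearly_saving - tooling_maintenance_per_year
print(f"Yearly net saving: {yearly_net} engineer-days")                      # 15 - 10 = 5
print(f"Years to pay back the build cost: {tooling_build_cost / yearly_net:.1f}")  # ~8 years
```

With only a handful of new services a year, the tooling takes years to pay for itself; if you expect to add dozens of services, the same arithmetic flips in favor of building the automation.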

In this example, I find that many of the supposed microservice must-haves are optional. I have mentioned orchestration several times above, for instance; but if you do not need to start services automatically or migrate them frequently, you do not need dynamic service discovery either (a load balancer works fine when needed).

It is also not necessary to let every team choose its own language, framework, and data store for each service. In fact, doing so may be more of a headache than a boon for your team.

A common recommendation: create a separate data store for each microservice.

The advice is not to let all microservices share the same back-end data store. With a single data store, microservices written by different teams can easily share database schemas, which avoids a lot of duplicated work; but when one team changes the schema, every other service that uses it has to be adjusted as well.

This is true, but for smaller teams, unwanted schema sharing can be prevented by agreeing on conventions in advance (and, if you are worried, enforced with code review and automated tests). If you define the services that own each piece of data carefully, this problem rarely arises. Another approach is described in the next paragraph:

Separating the data makes data management more complex, because separate storage systems are more likely to drift out of sync or become inconsistent, and foreign keys can change unexpectedly. At that point you need tooling for master data management (MDM): background detection and repair of data inconsistencies. For example, it can scan the user ids in each database to verify that the ids are consistent across all of them (no duplicate or orphaned ids anywhere). You can write such tools yourself or buy them. Many commercial relational database management systems (RDBMS) perform these kinds of checks, but that tight coupling is part of why they are hard to scale.
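
A minimal sketch of the "detect" half of such an MDM check, assuming each service's store can list its user ids; the service names and in-memory stand-ins are invented.

```python
def check_user_id_consistency(fetch_ids_by_service):
    """Compare user ids across services' data stores and report mismatches.

    `fetch_ids_by_service` maps a service name to a callable returning its user ids.
    This is only the 'detect' half of master data management; repair is a separate step.
    """
    ids_by_service = {name: set(fetch()) for name, fetch in fetch_ids_by_service.items()}
    all_ids = set().union(*ids_by_service.values())

    report = {}
    for user_id in all_ids:
        present_in = [name for name, ids in ids_by_service.items() if user_id in ids]
        if len(present_in) != len(ids_by_service):
            report[user_id] = present_in   # id missing from at least one store
    return report


# Hypothetical usage with in-memory stand-ins for two services' databases:
if __name__ == "__main__":
    orders_db_user_ids = {1, 2, 3}
    profiles_db_user_ids = {2, 3, 4}
    mismatches = check_user_id_consistency({
        "orders": lambda: orders_db_user_ids,
        "profiles": lambda: profiles_db_user_ids,
    })
    print(mismatches)   # {1: ['orders'], 4: ['profiles']}
```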

The paragraph above may disappoint experienced data engineers. It is precisely because of the overhead of this kind of data checking that I encourage small organizations to evaluate agreed-upon database sharing before committing to fully independent or per-service data stores. The decision about which data store each service ultimately uses can be deferred until it is needed.

This version of the microservice architecture should let a CRUD application scale, because it allows you to rewrite the code in pieces: you can rewrite the whole system, or only the parts most sensitive to scaling. You do need to engage with the complexity of distributed systems and think carefully about data and the transactions built on it. You may not need many data pipelines to know where the data has to change.

Do you have to use microservices to scale this? Probably not, but that does not mean using microservices to scale the system is bad practice. An extreme microservice model, however, is probably a bad idea, because you most likely do not want to split your data across distributed transactions.

Microservices based on data processing streams

Now let's look at a very different use case. It is not the familiar CRUD application, and it does not involve bloated business rules and transactional updates. Instead, it revolves around large data streams: many small pieces of data arrive from different sources and converge into a large volume of data, and a large number of services consume, modify, or store that data for further use.

The main concern is receiving a large amount of ever-changing data, processing it in various ways, and presenting it to customers in an appropriate view. These concerns are secondary in CRUD applications.

Take a SaaS application that aggregates metrics as an example. It has customers around the world, and a variety of applications, services, and machines send metrics to the aggregators. Each customer only needs to see its own data, although the volume of data for any one customer can be very large. The aggregators process these metrics and forward them to the programs responsible for displaying them to customers. The customer-facing applications work with both data arriving in real time and historical data from caches or back-end storage. The greatest value of the data lies in what is happening now or recently, within a moving window.
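
A toy sketch of the aggregation step, assuming metrics arrive as (customer_id, metric_name, timestamp, value) tuples and are rolled up into one-minute windows per customer; the tuple layout and window size are assumptions, not from the article.

```python
from collections import defaultdict

WINDOW_SECONDS = 60  # assumed rollup granularity


def aggregate(metric_events):
    """Roll raw metric events up into per-customer, per-window sums and counts."""
    # key: (customer_id, metric_name, window_start) -> [sum, count]
    windows = defaultdict(lambda: [0.0, 0])
    for customer_id, metric_name, timestamp, value in metric_events:
        window_start = int(timestamp) - int(timestamp) % WINDOW_SECONDS
        bucket = windows[(customer_id, metric_name, window_start)]
        bucket[0] += value
        bucket[1] += 1
    return windows


events = [
    ("cust-a", "latency_ms", 965.0, 12.0),
    ("cust-a", "latency_ms", 1010.0, 18.0),
    ("cust-b", "latency_ms", 1015.0, 40.0),
]
for key, (total, count) in aggregate(events).items():
    print(key, "avg:", total / count)
```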

This architecture has to consider data volume from the start, which a CRUD system may not face for a long time. In addition, the data itself keeps changing over time. The "stateful", transactional data is updated rarely; the most useful data is time series or event logs. Transactional data such as stored user views and configuration is similar to the "metadata" in the CRUD case and changes rarely compared to the streaming data. Developers spend most of their time managing input streams, adding new input types, and computing over the streams or changing how they are computed, rather than handling transactional (CRUD) data changes.

In this case, suppose a service wants to experiment by performing a different calculation on particular elements of the data stream. Without modifying the existing code, the experimental service picks a point in time to start listening to the stream, performs the new calculation to produce new results, and pushes those results onto a different channel of the data stream. At some point, the experiment serves the experimental results, instead of the standard results, to the customers in the experiment. You need to record all of these steps for experimental integrity and for debugging and analysis, but you do not need very detailed transactional records of system or user events.

In this case, to get the experiment running faster, it may be easier to write new code than to change the existing service code. In particular, the new service does not have to worry about disturbing existing data consumption or conflicting with the results computed by existing services. This is what I call "data-flow-centric microservices". A minimal sketch of the pattern follows.
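
Here is a minimal in-process sketch of that pattern; a real system would use a message bus such as Kafka, and the channel names and calculations below are invented for illustration. The experimental consumer listens to the same input stream, computes a different result, and publishes to its own output channel, so the standard pipeline is never touched.

```python
from collections import defaultdict

# Stand-in for a message bus: channel name -> list of published messages.
channels = defaultdict(list)


def publish(channel, message):
    channels[channel].append(message)


def standard_consumer(event):
    # Existing production calculation (illustrative): average of the samples.
    result = sum(event["samples"]) / len(event["samples"])
    publish("results.standard", {"id": event["id"], "value": result})


def experimental_consumer(event):
    # New, experimental calculation published to a separate channel,
    # so it never conflicts with the standard results.
    result = max(event["samples"])
    publish("results.experiment", {"id": event["id"], "value": result})


input_stream = [{"id": 1, "samples": [3, 5, 7]}, {"id": 2, "samples": [10, 2]}]
for event in input_stream:
    standard_consumer(event)
    experimental_consumer(event)

# A router can now decide, per customer, which results channel to serve from.
print(channels["results.standard"])    # [{'id': 1, 'value': 5.0}, {'id': 2, 'value': 6.0}]
print(channels["results.experiment"])  # [{'id': 1, 'value': 7}, {'id': 2, 'value': 10}]
```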

If managing real-time data streams brings great value to your business, and you have a lot of developers who want to create new services that consume the streams, experiment with the data, and produce results, then it is certainly worth investing in tooling that makes creating services and deploying them to production as simple as possible. You can then gradually apply that tooling to all services. But be clear that the key to these microservices is having dynamic data that can be manipulated and experimented on independently.

Cron jobs as microservices

It would be negligent of me not to mention this. Once creating microservices is very easy, the traditional cron job naturally becomes a microservice too.

Cron jobs are a good concept; they do not force you to treat everything as a "service". You can run them with AWS CloudWatch Events triggering Lambda functions (a "microservice cron job"), or schedule them with Gearman, a job server that supports queues and asynchronous jobs. Note that your cron job must be idempotent (running it twice on the same input produces the same result). And if you have an easier way to run a basic cron job, that is fine too; you do not need microservices for it. A small sketch of an idempotent job is shown below.
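
A small sketch of what idempotency means for such a job; the job itself (sending reminders) and its storage are invented for illustration. Recording which items have already been processed makes a rerun over the same input a no-op instead of a duplicate.

```python
# Hypothetical idempotent cron job: re-running it over the same input
# produces the same end state (no duplicate reminders are sent).
already_sent = set()   # in practice this would live in durable storage


def send_reminder(user_id):
    print(f"reminder sent to {user_id}")


def reminder_job(due_user_ids):
    for user_id in due_user_ids:
        if user_id in already_sent:     # skip work that already happened
            continue
        send_reminder(user_id)
        already_sent.add(user_id)


due = [1, 2, 3]
reminder_job(due)   # sends three reminders
reminder_job(due)   # second run is a no-op: the job is idempotent
```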

At this point, you should have a deeper understanding of the basic knowledge points of microservices. Now try putting them into practice.
