Microservice transformation of legacy systems: from 3 million to 500,000 lines of code

In traditional enterprises, and even in Internet companies, there are often large numbers of legacy systems. Most of them still work, and some run key businesses or hold core data. However, legacy systems typically suffer from outdated technology, complex code, and difficulty of modification. I once maintained a website implemented in Perl that had been running for more than a decade; it made a great contribution to the company's capture of its market, but its technology was outdated and hard to maintain, and it was finally phased out during the microservice transformation in 2015.

It is clear that, as time passes, the cost of maintaining and managing legacy systems keeps rising. During an overall transition to a microservice architecture, these legacy systems sit like obstacles in the road, blocking the transformation. How can they be transformed securely, efficiently, and at low cost, without affecting the business, so that they integrate smoothly into the microservice architecture and take full advantage of it? This chapter describes in detail how to approach the microservice transformation of legacy systems.

I. Overview of legacy systems

1. What is a legacy system?

"A legacy system is an old method, technology, computer system, or application that means that the system is outdated or needs to be replaced."

This is the Wikipedia definition of legacy systems, with a clear explanation of the meaning of "legacy", that is, systems that are outdated or need to be replaced. Because of this, legacy systems usually have some similar characteristics.

A huge monolithic application: most legacy systems are "giant" systems accumulated over many years, presented as a single monolithic application.

Difficult to modify: refactoring was neglected as code accumulated over the years, leaving the code with poor readability and extensibility.

High maintenance cost: finding the root cause of a bug in such a large code base can be difficult. Operation and maintenance are also a problem; the Perl system mentioned at the beginning of this chapter, for example, had no APM tool to support monitoring it.

High learning cost: because the design or the technology is outdated, it is hard to find people who understand both the technology and the business of the legacy system.

Lack of quality assurance: because of under-investment in the past, many functions have essentially no automated tests to guarantee quality.

Over time, these problems place an ever greater burden on the team, until the team can no longer afford the management and maintenance costs. How to transform legacy systems under a microservice architecture is therefore a problem most enterprises must face during a microservice transformation, and it needs to be considered and addressed as early as possible, before the problems accumulate and erupt all at once.

2. Can the legacy system simply be rewritten?

Faced with transforming a legacy system, many people first think of a straightforward solution: a complete rewrite, replacing the legacy system with an entirely new system in one go. In practice, all kinds of problems surface during implementation, so the new system cannot be switched over smoothly and the old system cannot be fully replaced. Common problems include the following:

Going live is difficult and the risk of blocking the business is high: while the legacy system is being rewritten, new requirements keep arriving that change its functionality. They are either blocked until the new system goes live, or the team pays double the effort to implement them in both the legacy system and the new system; neither choice is easy for the team to accept.

The impact is hard to control and the transformation cycle is long: legacy systems often run key businesses or hold core data and are called by multiple upstream services. Once a rewrite starts, the impact is hard to assess and extreme care and thoroughness are required, which in practice stretches the transformation cycle indefinitely.

The learning cost is high and knowledge transfer takes a long time: the longer the transformation takes, the higher the learning cost of the system. As people leave through normal attrition, the team gradually becomes less familiar with the business and the technology, and newcomers need ever more time to pick up the knowledge and skills required during the transformation.

In summary, it is unrealistic to hope to rewrite the legacy system directly and replace it in one step. A sustainable, evolutionary approach is needed to achieve a fast, low-cost, and controllable transformation.

II. Transformation strategies for legacy systems

Transforming a legacy system must not affect the business; it has to achieve a smooth and secure transition while still making efficient, steady progress, which is no small challenge for software developers.

Based on the author's experience, the following ideas can be adopted during legacy system transformation:

Follow an "evolutionary transformation process": give priority to the most valuable parts and keep the risks of the transformation under control.

Adopt the "strangler pattern": ensure a smooth transition between the old and new systems through gradual rather than one-time replacement.

Adopt the "sidecar pattern": connect legacy systems that are hard to transform into the microservice environment.

1. Evolutionary transformation process

The evolutionary transformation process transforms a legacy system step by step, as shown in figure 6-1.

Figure 6-1 Evolutionary transformation process for legacy systems

1.1 Building a service roadmap

Before starting the transformation, we first need to construct a service roadmap showing the expected services and their dependencies in the ideal case. The roadmap serves as a signpost guiding the service-oriented transformation. It can be refined as the work evolves and does not need to be complete from the start. Architects, business analysts, and technical leaders can all take part in building it, and the method can follow the "service splitting strategy" in section 3.1.1. Once built, the roadmap should be shared with the team to collect feedback from all parties.

1.2 Service selection

With the service roadmap in hand, you may find the transformation work large and complicated and not know which part to transform first. Here you can follow the principle of maximizing value and set splitting priorities from several angles, for example:

Split relatively independent parts first: independent business has little coupling with the old system and is relatively easy to carve out.

Split frequently changing parts first: moving them into new services enables independent deployment and rapid release, which is of great value in improving the team's overall delivery efficiency.

Split parts with special resource requirements first: for example, split computing-intensive tasks into new services and use the elastic scalability of cloud infrastructure to scale those service instances, which helps improve overall system performance and reduce costs.

1.3 Service transformation

After selecting the parts to be transformed first, we begin migrating these business functions to the microservice architecture. The following problems usually need to be solved in the process:

The new and old systems may need different data sources or have different database structures. How do we solve the synchronization and dependency problems between their data?

When a monolithic old system needs to be split into multiple services, how do we achieve a secure, progressive split?

Depending on the transformation requirements and the data dependencies, the work falls into three different transformation scenarios, each with a different technical implementation.

1.4 Service Verification

After the business functions are migrated to the microservice architecture, we need to verify that the new services meet the business requirements. Once the new service is in use and stable, the original code module can be removed from the legacy system, and the data synchronization task can be removed as well if appropriate.

1.5 iterative optimization

At this point, the microservice transformation of part of the legacy system is complete. The rest can be iterated in a similar way: re-examine the service roadmap, select the next business to transform, and continue optimizing until the established microservice transformation goal is reached.

2. Strangler pattern

Before the microservice architecture became popular, an approach called "branch by abstraction" was commonly used for incremental large-scale software transformation, as shown in figure 6-2.

Figure 6-2 Branch by abstraction

Component decoupling: create an abstraction layer between the component to be replaced and its consumers. The layer declares only the abstract interfaces or protocols that are needed, while the concrete implementation remains in the component to be replaced. Introducing the abstraction layer decouples the component to be replaced from its consumers.

Coexistence of old and new components: develop a new component that implements the same abstraction-layer interfaces and runs in the system alongside the component to be replaced. Initially only a small part of the functionality is implemented in the new component, or only a small share of production traffic is routed to it.

Gradual replacement: as the functionality becomes more complete and the technology matures, the new component takes on more and more production traffic. After thorough testing, when the new component can fully replace the old one, the old component is officially taken offline, and the abstraction layer can also be removed if desired (a sketch of the approach follows this list).
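To make the idea concrete, the following is a minimal Java sketch of branch by abstraction, using a hypothetical PricingService component; the percentage-based switch is only one illustrative way of shifting traffic gradually, not the approach prescribed here.

// Abstraction layer declared between the component consumers and the component to be replaced.
public interface PricingService {
    long quoteInCents(String listingId);
}

// Existing behaviour, still backed by the legacy code path.
class LegacyPricingService implements PricingService {
    public long quoteInCents(String listingId) {
        return 100_00L; // placeholder for the old implementation
    }
}

// New component implementing the same abstraction.
class NewPricingService implements PricingService {
    public long quoteInCents(String listingId) {
        return 100_00L; // placeholder for the new implementation
    }
}

// Router that shifts traffic step by step: start with a small percentage on the
// new implementation, raise it as confidence grows, then retire the old one.
class RolloutPricingService implements PricingService {
    private final PricingService legacy = new LegacyPricingService();
    private final PricingService next = new NewPricingService();
    private final int percentToNew; // e.g. 5, then 50, then 100

    RolloutPricingService(int percentToNew) {
        this.percentToNew = percentToNew;
    }

    public long quoteInCents(String listingId) {
        boolean useNew = Math.floorMod(listingId.hashCode(), 100) < percentToNew;
        return (useNew ? next : legacy).quoteInCents(listingId);
    }
}

Once the new implementation handles all traffic and has been fully verified, RolloutPricingService and LegacyPricingService can be deleted, and the abstraction layer itself can be removed if it is no longer needed.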

For the microservice transformation of legacy systems we can borrow the idea of branch by abstraction, except that under a microservice architecture the abstraction layer becomes an independent facade (Facade) service, which forwards each request either to the new system or to the old one and so plays a routing role.

During the transformation, the old and new systems exist side by side and jointly provide value to the outside world. As the transformation progresses, the new system provides more and more functionality and value, gradually replacing the functions of the original legacy system, and more and more user traffic is routed to it. Finally, when the original legacy system is no longer used, it can be safely taken offline, and the facade service can be safely removed as well.

This smooth transition is often referred to as the "strangler pattern". The process in which the legacy system is gradually "strangled" is shown in figure 6-3.

Figure 6-3 Strangler pattern
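As an illustration, here is a minimal sketch of such a facade service using only the JDK, with hypothetical host names and route prefixes; in practice the facade is usually an API gateway or a reverse proxy rather than hand-written code.

import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Facade (strangler) service: routes each request either to the new service or to
// the legacy system, so callers never need to know which of the two answers.
public class StranglerFacade {
    private static final String NEW_SERVICE = "http://new-search-service:8080"; // hypothetical
    private static final String LEGACY_SYSTEM = "http://legacy-portal:8080";    // hypothetical
    private static final HttpClient client = HttpClient.newHttpClient();

    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(9000), 0);
        // For brevity only GET requests are forwarded in this sketch.
        server.createContext("/", exchange -> {
            String path = exchange.getRequestURI().getPath();
            // Routes that have already been migrated go to the new service; everything
            // else still hits the legacy system. The routing table grows as migration proceeds.
            String target = path.startsWith("/api/search") ? NEW_SERVICE : LEGACY_SYSTEM;
            try {
                HttpRequest request = HttpRequest.newBuilder(URI.create(target + path)).GET().build();
                HttpResponse<byte[]> response =
                        client.send(request, HttpResponse.BodyHandlers.ofByteArray());
                exchange.sendResponseHeaders(response.statusCode(), response.body().length);
                exchange.getResponseBody().write(response.body());
            } catch (Exception e) {
                exchange.sendResponseHeaders(502, -1); // upstream failure
            } finally {
                exchange.close();
            }
        });
        server.start();
    }
}

As more routes point at new services, the legacy system receives less and less traffic until it can be taken offline; at that point the facade either disappears or remains as an ordinary gateway.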

The strangler pattern is especially suitable for the gradual transformation of large, highly complex legacy systems, but the following issues deserve attention during the migration:

Consider how data is shared or synchronized between the new and legacy systems.

Ensure that the strangler's facade service does not become a single point of failure or a performance "bottleneck".

3. Sidecar pattern

Traditional enterprises hold large numbers of legacy systems, and transforming all of them into microservices would be prohibitively expensive and unrealistic; some legacy systems cannot be fully transformed at all. For these systems the choice is not necessarily to turn them into microservices, but to connect them to the microservice environment so that they can cooperate with other services to meet business requirements.

Connecting assorted legacy systems to the microservice environment is not easy, however, and the following problems are common:

Some changes must be made to the original code, but the legacy system itself is often hard to modify, and if the original system is written in multiple languages, a solution has to be considered for each language.

If the integration code runs in the same process as the original system, there is no real isolation, and a small problem in the integration code can bring the original system down. Is there a low-cost way to connect legacy systems to a microservice environment? One way is the sidecar pattern, shown in figure 6-4. The word "sidecar" comes from the sidecar attached to a motorcycle.

Figure 6-4 Sidecar pattern

As shown in figure 6-4, in the legacy-system integration scenario the sidecar pattern gathers the integration code into an independent process or service that provides a uniform access interface for legacy systems written in different languages. In the deployment structure, the sidecar service sits right next to the original legacy system: wherever the legacy system runs, the sidecar runs too. For each instance of the legacy application, a sidecar instance is deployed and hosted alongside it. A sidecar is a process or service that is deployed together with the original application.

The advantages of the sidecar pattern are as follows:

The sidecar service is an independent process or service; it is unrelated to the implementation language of the original legacy system, so there is no need to develop a sidecar for each language.

Because the integration is non-intrusive, the code of the original legacy system usually does not need to be rewritten, and the modification cost can be zero.

The sidecar service is deployed adjacent to the original legacy system, so it can access the same resources as the original system and can sometimes serve as an access agent for monitoring services.

Although some communication cost is added, the sidecar is deployed right next to the original system, so the added cost is usually very small and the latency very low.

When using the sidecar pattern, the following points also deserve attention:

Deploy the sidecar adjacent to the original system to reduce communication latency.

Use language-independent communication mechanisms, such as REST, between the processes.

Consider containerized deployment, for example deploying the sidecar service and the legacy system in the same Pod.

Consider whether the functionality is better placed in a sidecar, in a separate service, or in a traditional daemon.

Service Mesh adopts the idea of the sidecar pattern and deploys an agent next to each service, which helps legacy systems connect to the Service Mesh and enjoy the benefits of service governance.
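To make the access pattern concrete, here is a minimal Java sketch of such a sidecar, assuming the legacy process listens on 127.0.0.1:8081 and the microservice environment expects a REST /health endpoint; the ports and paths are illustrative assumptions.

import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Sidecar process deployed next to a legacy application instance. It speaks the
// conventions of the microservice environment (here a REST health probe) and talks
// to the legacy process only over the local network interface.
public class LegacySidecar {
    private static final URI LEGACY_LOCAL = URI.create("http://127.0.0.1:8081/"); // hypothetical legacy port
    private static final HttpClient client = HttpClient.newHttpClient();

    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(15000), 0);
        server.createContext("/health", exchange -> {
            int status;
            try {
                // Probe the co-located legacy process; latency is tiny because the
                // sidecar shares the host (or Pod) with the legacy instance.
                HttpResponse<Void> probe = client.send(
                        HttpRequest.newBuilder(LEGACY_LOCAL).GET().build(),
                        HttpResponse.BodyHandlers.discarding());
                status = probe.statusCode() < 500 ? 200 : 503;
            } catch (Exception e) {
                status = 503; // legacy instance unreachable
            }
            byte[] body = ("{\"status\":" + status + "}").getBytes();
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(status, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        });
        server.start();
    }
}

Because other services only ever talk to the sidecar, the legacy code itself does not change; the same approach extends to exposing metrics or registration endpoints on the legacy system's behalf.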

III. Legacy system transformation scenarios

Before making specific changes, you may encounter the following challenges:

The new and old systems may need different data sources or have different database structures. How do we solve the synchronization and dependency problems between their data?

When an old monolithic system needs to be split into multiple services, how do we achieve a secure, progressive split?

Let us answer these questions one by one, following the common scenarios in legacy system transformation. The common transformation scenarios are shown in figure 6-5.

Figure 6-5 Common transformation scenarios for legacy systems

As shown in figure 6-5, a specific transformation requirement can be divided into the following two scenarios.

Implementing new business: new business, relatively independent of the existing business, needs to be added to the system. Depending on whether the new business has a data dependency on the existing business, this splits into two sub-scenarios: independent of existing business data, or dependent on existing business data.

Microservice transformation of existing business: the system's existing business needs to be moved to the microservice architecture, for example by splitting existing business out as services.

Transformation scenario 1: implement new business independent of existing business data

Applicable scenario: the new business is independent of the existing business data, with no dependency between them.

Implementation: the new business is realized with its own service and database. A strangler facade service is introduced between consumer requests and the underlying systems; it routes each request to either the legacy system or the new business service according to the business, as shown in figure 6-6.

Figure 6-6 Transformation scenario 1: independent business data

Because there is no coupling with the legacy system's data, the technology stack and tools of the new service can be chosen much more flexibly, making full use of the heterogeneity of microservice technology.

Transformation scenario 2: implement new business that depends on existing business data

When the new business has a data dependency on the existing business, there are two options, depending on which side holds the data.

Option A: the legacy system holds the data, and the new business accesses the shared data.

Option B: the new business service holds the data, and the data dependency is resolved through data synchronization.

Option A: the legacy system holds the data, and the new business accesses the shared data

Applicable scenario: the new business is allowed to access the legacy system's shared data, or the cost of separating the database is high.

Implementation: if the new business service is allowed to access the legacy system's database directly, the simplest implementation is for the new business service to connect to the legacy database and read the data it needs, as shown in figure 6-7.

The implementation cost of this approach is relatively low, and it integrates directly with the original data source, but that is also its drawback. Because the database is used as the integration point, the new business service remains directly coupled to the legacy system's data: any change to the original database directly affects the new business service, and data access rights and controls for both old and new services also have to be considered. To address these problems, another option is for the new business service to obtain the data through an API provided by the legacy system, as shown in figure 6-8.

Figure 6-7 Transformation scenario 2: direct access to legacy system data

Figure 6-8 Transformation scenario 2: accessing legacy system data through an API

The advantage of this approach is that the API isolates data changes and avoids direct coupling between the new business service and the legacy data. Data-processing logic can be implemented in the API to meet the needs of the new business service, and the API exposes only the data domains the new business service is allowed to access, providing control over data permissions; it is therefore better suited to scenarios where direct database access is restricted. The extra cost is the additional API work on the legacy system, which is sometimes hard to implement precisely because the legacy technology is obsolete and its structure complex.
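As a small illustration of the API variant, the sketch below shows the new business service's side, assuming the legacy system exposes a hypothetical GET /api/listings/{id} endpoint that returns only the data domains the new service is allowed to read.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Data access in the new business service: instead of connecting to the legacy
// database, it calls an API exposed by the legacy system, so schema changes in
// the legacy database stay hidden behind that API.
public class LegacyListingClient {
    private static final String LEGACY_API = "http://legacy-portal:8080/api/listings/"; // hypothetical
    private final HttpClient client = HttpClient.newHttpClient();

    public String fetchListingJson(String listingId) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(URI.create(LEGACY_API + listingId))
                .header("Accept", "application/json")
                .GET()
                .build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        if (response.statusCode() != 200) {
            throw new IllegalStateException("Legacy API returned " + response.statusCode());
        }
        return response.body(); // mapped to the new service's own model elsewhere
    }
}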

Option B: the new business service holds the data, and the data dependency is resolved through data synchronization

It is easy to see that the approaches in option A are all somewhat intrusive to the original system: they either couple directly to the original database or require modifying the legacy system. The "open-closed principle" of object-oriented design ("open for extension, closed for modification") helps us build flexible, extensible software architectures, and it applies here as well. We can therefore extend on top of the original system instead of modifying the legacy system directly, which leads to the other option: the new business service holds the data, and the data dependency is resolved through data synchronization.

Applicable scenario: the new business is not allowed to access the legacy system's shared data, or the cost of separating the database is not high.

Implementation: establish an independent database for the new business service and keep it synchronized with the legacy database through a dedicated data synchronization service, which transforms the legacy data into a form the new database can use and migrates it there. The data synchronization service, also known as an ETL (Extract, Transform, Load) service, is mainly responsible for reading, converting, and synchronizing data, as shown in figure 6-9.

Figure 6-9 Transformation scenario 2: data synchronization through ETL

The advantages of this option are that it uses an independent database and is only minimally intrusive to the legacy system; the new business service holds its own database, with no direct coupling to the legacy system, so the technology choices for the new service and its database are more flexible. All transformation logic between old and new data lives in the ETL service, so any change to data conversion or synchronization affects only the ETL service, and neither the legacy system nor the new business service needs to be modified. The cost is the extra ETL service that has to be implemented. In addition, because two independent databases are being synchronized, only eventual consistency of the data can be achieved, not strong consistency. The availability of the ETL service must also be considered so that it does not introduce a new single point of failure.
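For illustration, here is a minimal JDBC-based sketch of such a synchronization job, assuming hypothetical legacy_listings and listings tables, an updated_at column for change detection, and PostgreSQL on both sides; a real ETL service would also persist a watermark, handle deletes, and run on a schedule.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Timestamp;

// ETL job: Extract recently changed rows from the legacy database, Transform them
// into the shape the new schema expects, and Load them into the new database.
public class ListingSyncJob {
    public void syncSince(Timestamp lastRun) throws Exception {
        try (Connection legacy = DriverManager.getConnection(
                     "jdbc:postgresql://legacy-db:5432/portal", "etl", "secret");   // hypothetical
             Connection target = DriverManager.getConnection(
                     "jdbc:postgresql://new-db:5432/listings", "etl", "secret")) {  // hypothetical

            PreparedStatement extract = legacy.prepareStatement(
                    "SELECT id, title, price_cents FROM legacy_listings WHERE updated_at > ?");
            extract.setTimestamp(1, lastRun);

            PreparedStatement load = target.prepareStatement(
                    "INSERT INTO listings (id, title, price_cents) VALUES (?, ?, ?) " +
                    "ON CONFLICT (id) DO UPDATE SET title = EXCLUDED.title, price_cents = EXCLUDED.price_cents");

            try (ResultSet rs = extract.executeQuery()) {
                while (rs.next()) {
                    // Transform step: normalize fields into the form the new schema expects.
                    String title = rs.getString("title");
                    load.setLong(1, rs.getLong("id"));
                    load.setString(2, title == null ? "" : title.trim());
                    load.setLong(3, rs.getLong("price_cents"));
                    load.addBatch();
                }
            }
            load.executeBatch(); // the new database lags by at most one run: eventual consistency
        }
    }
}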

To go a step further on data isolation and avoid exposing the new service's database directly, the ETL service can also access the data through the new business service's API, as shown in figure 6-10. This further decouples the ETL service from the new business database, and the new API can be reused as an interface for other service consumers to access the data.

Figure 6-10 Transformation scenario 2: the ETL service accesses the new business service's API for data synchronization

Transformation scenario 3: microservice transformation of existing business

Most legacy systems carry a lot of key business or core data under a monolithic architecture, so a one-shot refactoring during the microservice transformation carries considerable risk. It is therefore better to realize the transformation gradually, through a series of small refactorings.

Applicable scenario: move the existing business carried by the legacy system to microservices through gradual refactoring.

Implementation: taking a monolithic legacy system with multiple modules as an example, it is split into two services with separated business data in the following three steps.

1. Change internal code calls into local REST API calls: expose the called function as a REST API, and have the caller module invoke this local REST API to accomplish what the original call did (see the sketch after this list). At this point the service has not yet been split and still goes online as a whole.

2. Change the local REST API calls into inter-service REST API calls: split the service, turning the original caller module into an independent service, and change the REST API call address to the address of the target service. The cost of changing the interface in this step is small; the key point is that the split-out service can now be deployed and released independently. To avoid introducing too many changes at once, the split service at this stage still directly accesses the shared data in the original database.

3. Separate the business data: separate out the data the split service needs and make it the exclusive database of the new service. The data dependency between the two databases can be resolved with the data synchronization approach (ETL service) described above.
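Here is a minimal sketch of step 1, assuming a hypothetical order-history lookup that the caller module previously invoked as an ordinary Java method; in step 2 only the caller's base URL changes from localhost to the address of the newly split service.

import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Step 1 of the split: the callee module exposes its function as a REST endpoint,
// and the caller module switches from an in-process call to an HTTP call on
// localhost. The application is still built and deployed as a single unit here.
public class OrderHistoryEndpoint {
    public static void start() throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/internal/order-history", exchange -> {
            String query = exchange.getRequestURI().getQuery(); // e.g. "userId=42"
            byte[] body = lookupOrderHistoryJson(query).getBytes();
            exchange.sendResponseHeaders(200, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        });
        server.start();
    }

    private static String lookupOrderHistoryJson(String query) {
        return "[]"; // placeholder for the original in-process logic
    }
}

class OrderHistoryCaller {
    // In step 2 this becomes the address of the independently deployed service.
    private static final String BASE_URL = "http://localhost:8080";
    private final HttpClient client = HttpClient.newHttpClient();

    String orderHistoryFor(String userId) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(
                URI.create(BASE_URL + "/internal/order-history?userId=" + userId)).GET().build();
        return client.send(request, HttpResponse.BodyHandlers.ofString()).body();
    }
}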

Following this process, the business of the original legacy system is gradually moved to microservices through continuous iteration as needed; the overall process is illustrated in figure 6-11.

Figure 6-11 Transformation scenario 3: microservice transformation of existing business

How to split the database of a legacy system

Some steps in the scenarios above involve splitting the legacy system's database. When multiple services share one database, how do we decide which service's database to split first? Which split order involves the least work? One method is to split the data in order of "least read dependency". It can be summarized in the following three steps:

1. Split tables to decouple the data relationships between services: list each service's read/write relationship to every table. If only one service writes to a table, the table does not need to be split; if multiple services write to a table, first consider splitting it into several related tables so that each is written by a single service.

2. Draw a service dependency graph: once every table is written by a single service, build a directed graph using the following rule. Treat every service as a node in the graph; when service A needs to read data from service B's database, draw a directed edge from B to A, indicating that service A depends on service B. Continue in this way until the whole dependency graph is drawn.

3. Split databases to decouple the databases between services: take the out-degree of each node in the directed graph (that is, the number of edges starting from that node) as the service's dependency count, sort the services, and pick the service with the smallest count first. Decouple its database: separate out the database tables it owns and provide a data interface in the service for the services that depend on it.

Repeat step 3 until every database has been split into a database exclusive to its service.

For example, figure 6-12 shows a dependency graph containing four services; the marker in the upper-right corner of each service indicates its dependency count. The SMS service's count is 0, making it currently the least-depended-on service, so its database can be split first. The databases of the points service, the order service, and the account service are then split in turn, until every database has become the exclusive database of its service.

Figure 6-12 illustration of service dependencies
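The ordering rule of step 3 can be sketched in a few lines of Java. The dependency data below is illustrative and only loosely modelled on figure 6-12 (the exact edges in the figure are not reproduced here); readersOf maps each service to the services that read the data it owns, so the size of each list is the out-degree described above.

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Map;

// Orders database splits by "least read dependency": a service's dependency count is
// its out-degree in the graph of step 2, i.e. how many other services read its data.
public class SplitOrderPlanner {

    public static List<String> planSplitOrder(Map<String, List<String>> readersOf) {
        List<String> order = new ArrayList<>(readersOf.keySet());
        order.sort(Comparator.comparingInt((String s) -> readersOf.get(s).size()));
        return order; // split the least-depended-on databases first
    }

    public static void main(String[] args) {
        Map<String, List<String>> readersOf = Map.of(
                "SmsService", List.of(),                                        // nothing reads its data
                "PointsService", List.of("SmsService"),
                "OrderService", List.of("PointsService", "SmsService"),
                "AccountService", List.of("OrderService", "PointsService", "SmsService"));
        System.out.println(planSplitOrder(readersOf));
        // -> [SmsService, PointsService, OrderService, AccountService]
    }
}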

IV. A case of legacy system transformation

Taking a legacy system the author personally worked on as an example, this section introduces how a large legacy system was migrated to a microservice architecture.

1. The system before the transformation

The system is a large portal platform that provides global real-estate information search services. Its core functions mainly include the following, as shown in figure 6-13:

Provide users with real estate information search function based on geographical location and keywords.

Provide subscription of housing information based on custom search criteria.

Provide desktop, tablet, mobile and other different ways of access.

Figure 6-13 status of the system

The system evolved from a general-purpose search platform acquired years earlier. The whole thing is one large monolithic application in a single code base; the technology stack is mainly Java, the database is PostgreSQL, the search engine is FASTSearch (for historical reasons), and the code base is about 3 million lines.

As the business evolved faster and faster, the drawbacks of the monolith drove the cost of system change higher and higher: even a change of a few lines of code had to go through a lengthy testing and release process before it could take effect. Facing these problems, the team decided to adopt the microservice architecture, implement all new functionality as microservices, and gradually migrate the system's original functionality to the microservice architecture.

2. Transformation process

Step 1: provide data through the legacy system API

The first feature requirement the R&D team received at the time was as follows:

Support a NativeApp that provides the core functions

Launch the first version within three months

The main challenges at this point included the following two:

The system's main R&D members were overseas; the domestic team had just been formed, with many newcomers, so the cost of knowledge transfer was high.

The system itself was secondary development on top of a commercial general-purpose search platform, with complex logic, highly coupled code, and a high cost of change.

The plan adopted mainly consisted of the following parts:

Encapsulate the existing logic in the legacy system and expose it as a basic API.

Implement a new service, similar to the BFF pattern mentioned in the previous section, to provide data for the NativeApp.

The service obtains the data it needs by calling the basic API provided by the legacy system.

The necessary transformations are implemented inside the service, and the results are provided through its interface for the NativeApp to render.

Taking advantage of the heterogeneity of the microservice architecture, the new service, named AppService, was implemented in Ruby on Rails; it implements the related data conversion logic and provides a data API for the NativeApp. A new service implemented this way not only meets the NativeApp's needs but can also be developed quickly and deployed independently.

The architecture of the modified system is shown in figure 6-14. The AppService exposes two groups of APIs: EstateAPI, which provides housing information, and LocAPI, which provides geographic location information.

Figure 6-14 Step 1: provide data through the legacy system API

With the approach described above, modifications to the original portal platform are greatly reduced; by implementing the relevant logic in the new service, delivery efficiency is improved and the service can be deployed and released independently.

Step 2: provide data through legacy system data sources

Next, the R&D team received the following feature requirements:

Support displaying information about real-estate publishers.

Support displaying geographic point-of-interest (POI) information.

However, these two kinds of data are not covered by the basic search API of the original portal platform. With limited manpower and a tight delivery deadline, adding an interface to the portal platform would be relatively costly and might not be delivered on time.

Facing these challenges, the team decided to avoid direct modification of the original legacy system as far as possible and instead implement the new business logic in the AppService microservice, making the following changes:

Access the legacy system's data sources directly: because the data provided by the portal platform's basic search API is insufficient, AppService reads the legacy system's data sources directly to obtain the housing, publisher, and point-of-interest data it needs.

Build a basic search module: build a basic search module (EngineAPI) inside AppService that provides the same functionality as the portal platform's basic search API and supplies data to the upper-level APIs.

Build the business-layer APIs: add two groups of APIs in AppService, POIAPI for geographic point-of-interest data and PubAPI for real-estate publisher information.

Split out an independent geolocation service: the previously relatively independent LocAPI is split out as an independent geolocation service, LocService.

The modified system architecture is shown in figure 6-15. With this step, the query functionality for basic search data is re-implemented in the AppService microservice, reducing changes to the related functions of the original portal platform.

Step 3: split out the basic search service and replace the data source

Product requirements kept evolving; next, the team needed to add polygon-based geographic search for desktop users. Under the existing architecture, however, the FASTSearch engine does not support this feature, which means the original search engine has to be replaced and a lot of code in the original portal platform has to change.

How can this feature be implemented effectively? The specific steps are as follows:

1. Split the basic search API out as an independent service, EngineService, so that future search-related logic can be implemented in EngineService.

Figure 6-15 step 2: provide data through legacy system data sources

2. Implement polygon-based search in EngineService.

3. Refactor the original portal platform's search mechanism to obtain data through EngineService (it previously used the FASTSDK). In this way the portal platform and AppService are isolated from the underlying search engine, and the underlying data source can be replaced easily (a sketch follows figure 6-16).

The implementation of this scheme is shown in figure 6-16.

Figure 6-16 step 3: split the basic search service and replace the data source
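The essence of this step is placing an abstraction between the callers and the concrete search engine. Below is a minimal sketch with hypothetical type and method names; the adapter over the legacy engine simply rejects the polygon query it cannot answer, while the replacement engine supports it.

import java.util.List;

// Abstraction that isolates the portal platform and AppService from the concrete
// search engine, so the underlying data source can be swapped without touching callers.
interface SearchEngine {
    List<String> searchByKeyword(String keyword);
    List<String> searchWithinPolygon(List<double[]> polygonVertices); // [lat, lon] pairs
}

// Adapter over the old engine; it cannot answer polygon queries.
class FastSearchEngineAdapter implements SearchEngine {
    public List<String> searchByKeyword(String keyword) {
        return List.of(); // would delegate to the old search SDK here
    }
    public List<String> searchWithinPolygon(List<double[]> polygonVertices) {
        throw new UnsupportedOperationException("polygon search is not supported by the legacy engine");
    }
}

// Adapter over the replacement engine, which supports geo queries natively.
class ElasticSearchEngineAdapter implements SearchEngine {
    public List<String> searchByKeyword(String keyword) {
        return List.of(); // would issue a keyword query against Elasticsearch here
    }
    public List<String> searchWithinPolygon(List<double[]> polygonVertices) {
        return List.of(); // would issue a geo query against Elasticsearch here
    }
}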

Step 4: build mobile-specific back-end services

The team completed the previous requirements within the expected time. With the rapid growth of the business, the next step was to improve the user experience on mobile phones and tablets. However, the existing mobile and tablet implementations were outdated, both based on JSP templates, and modifying them directly would be costly.

Mobile devices differ significantly from desktop browsers in screen size, performance, and display constraints, so the back-end requirements of the mobile application are not the same as those of the desktop UI.

Continuing to serve both kinds of clients from the original portal platform logic would greatly increase maintenance costs. The team therefore borrowed the BFF model and split the specific back-end requirements of the mobile and tablet sites into a separate back-end service, SPA-Web, which provides data for the front end. The front end was then re-implemented, using up-to-date front-end technology, as a lightweight and more efficient single-page application, as shown in figure 6-17.

Figure 6-17 Step 4: build mobile-specific back-end services

By creating a dedicated back-end service for mobile users, the team can optimize specifically for them and deliver the best possible experience. And because new services under the microservice architecture are small and simple enough, they are easier to modify, deploy, and release, and easier to tune carefully for performance and behaviour.

Step 5: continue to partition micro-services based on data

As the team's microservice practice matured, the architecture of the whole system evolved toward finer granularity.

The system's typical business scenarios are houses for sale (Buy), houses sold (Sold), houses for rent (Rent), and so on. The team therefore continued to split along these businesses into separate search APIs with independent data storage; each kind of data is stored in its own ElasticSearch cluster, and a front-end/back-end separation mechanism decouples the front end from the back end.

The decoupled system is shown in figure 6-18.

Figure 6-18 step 5: divide micro-services by data

The services finally implemented are shown in the following table.

Service name          Main description
UserService           Handles user authentication and authorization
SavedSearchService    Handles the search subscription criteria saved by users
POIService            Handles point-of-interest (POI) related information
LocService            Provides location-related interfaces
BuyService            Provides interfaces for houses for sale
SoldService           Provides interfaces for sold houses
RentService           Handles logic related to rental listings

The supporting components used in the system, such as the service registration and discovery mechanism and centralized configuration, are implemented in a way similar to what was described in the previous chapter, so they are not repeated here.

3. Transformation results

After the series of steps above, the original portal platform has been gradually migrated to a microservice system; only about 500,000 of the original 3 million lines of code remain, and they continue to provide business value. The team's cost of modifying and changing the legacy system has dropped sharply, the overall delivery cycle has been greatly shortened, and the technology upgrade has improved performance and maintainability, so the team fully enjoys the benefits of a small and beautiful microservice architecture.

Summary

This chapter introduced the characteristics, transformation strategies, and transformation scenarios of legacy systems and illustrated them with a real case. The aim is to help readers master the methods of microservice transformation of legacy systems from the following aspects:

Legacy systems are "systems that are outdated or need to be replaced", and they often share characteristics such as being difficult to modify, high learning and maintenance costs, and a lack of quality assurance.

It is unrealistic to solve the microservice transformation by rewriting and replacing the legacy system in one go; doing so tends to lead to difficulty going live, uncontrollable impact, high learning costs, and so on.

The transformation of a legacy system should follow a strategy of gradual rather than one-time replacement. The "evolutionary transformation process" and the "strangler pattern" are usually used to keep the whole transformation controllable and achieve a smooth transition.

In addition, this chapter subdivided the transformation requirements of legacy systems into three scenarios: new business with independent data, new business with data dependencies, and microservice transformation of existing business. Analyzing these scenarios can effectively guide readers in evolving toward microservices.

This article is excerpted from the book "Architecture and Practice of Microservices (2nd Edition)" by Wang Lei, published by Electronic Industry Press.
