
A Practical Analysis of Netflix's Hexagonal Architecture


This article walks through a practical analysis of Netflix's hexagonal architecture. Many teams run into the same dilemmas in real projects, so read on to see how these situations can be handled; hopefully you will come away with something useful.

1. Highly integrated from the beginning

About a year ago, our Studio process team began to develop a new application across multiple business areas.

At that time, we faced an interesting challenge:

On the one hand, we needed to build the core of the application from scratch; on the other hand, the data we needed already lived in many different systems.

Some of the data we needed, such as information about films, production dates, staff, and shooting locations, was spread across many services, and those services spoke different protocols, including gRPC, JSON API, and GraphQL.

That existing data was critical to the behavior and business logic of our application, so we needed a high degree of integration from the very start.

2. Switchable data sources

An earlier application, designed to bring visibility into our products, was built as a monolith. While the domain knowledge was still taking shape, the monolith enabled rapid development and rapid change. Over time it came to be used by more than 30 developers and grew to more than 300 database tables.

Over time, the application evolved from a broad service into a highly specialized product. Against that background, the team decided to break the monolith apart into a series of dedicated services.

The decision was not driven by performance problems; rather, it was about setting boundaries around each of these domains and letting dedicated teams develop the services for their specific areas independently.

A large amount of the data our new application needed was still served by the old monolith, but we knew the monolith would be broken up one day. We could not pin down exactly when, but we knew the moment was inevitable, so we needed to be prepared.

That way, we could start out consuming data from the monolith, since it was still the trusted source, while staying ready to swap those data sources out as soon as the new microservices came online.

3. Using hexagonal architecture

We needed the ability to switch data sources without affecting the business logic, so we had to keep the two decoupled.

We decided to build the application based on the principle of hexagonal architecture.

The idea of hexagonal architecture is to push inputs and outputs to the edges of the design. Whether we expose a REST or GraphQL API, and wherever the data comes from (a database, a microservice API exposed over gRPC or REST, or even a plain CSV file), none of it should affect the business logic.

This pattern allows us to isolate the core logic of the application from external concerns. The isolation of the core logic means that we can easily change the details of the data source without significant impact or the need to rewrite a lot of code in the code base.

We also saw another advantage of drawing clear boundaries in the application: the testing strategy. Most of our tests can verify the business logic without depending on protocols that are likely to change.

4. Define core concepts

Borrowing from hexagonal architecture, the three concepts that define our business logic are entities, repositories, and interactors.

Entities are domain objects (such as a movie or a shooting location) that have no knowledge of where they are stored (unlike Active Record in Ruby on Rails or the Java Persistence API).

Repositories are the interfaces for fetching entities and for creating and changing them. They expose a collection of methods that communicate with the data source and return either a single entity or a list of entities (for example, UserRepository).

Interactors are classes that orchestrate and perform domain actions; think of them as service objects or use-case objects. They implement complex business rules and validation logic specific to a domain action, such as launching a production.
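To make these three concepts concrete, here is a minimal sketch in Python (the article's own stack references Rails and Java; the names Movie, MovieRepository, and LaunchProduction are illustrative assumptions, not Netflix's actual classes):

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import List


@dataclass
class Movie:
    """Entity: a plain domain object that knows nothing about where it is stored."""
    movie_id: str
    title: str
    production_date: str


class MovieRepository(ABC):
    """Repository: the interface the business logic uses to fetch and change entities."""

    @abstractmethod
    def get(self, movie_id: str) -> Movie: ...

    @abstractmethod
    def get_many(self, movie_ids: List[str]) -> List[Movie]: ...


class LaunchProduction:
    """Interactor: orchestrates a domain action; repositories are injected from outside."""

    def __init__(self, movies: MovieRepository):
        self.movies = movies

    def call(self, movie_id: str) -> Movie:
        movie = self.movies.get(movie_id)
        # Business rules and validation live here, independent of any data source.
        if not movie.production_date:
            raise ValueError("cannot launch a production without a production date")
        return movie
```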

With these three types of objects, we can define the business logic without knowing or caring where the data is kept or how the business logic is triggered. Outside the business logic sit the data sources and the transport layer:

A data source (Data Source) is an adapter (Adapter) implemented for a particular kind of storage. It might be an adapter for a SQL database (an Active Record class in Rails or JPA in Java), an Elasticsearch adapter, a REST API, or even something as simple as a CSV file or an in-memory hash. A data source implements the methods defined on the repository and holds the actual implementation for fetching and pushing data.

The transport layer (Transport Layer) is what triggers an interactor to execute business logic. We treat it as the input to the system. The most common transport layer for microservices is an HTTP API layer with a set of controllers (Controller) that handle requests. Because the business logic is extracted into interactors, we are not coupled to any particular transport layer or controller implementation. Interactors can be triggered not only by controllers but also by events, cron jobs, or the command line.
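Continuing the sketch above, a data source might satisfy the repository interface while a thin controller acts as one possible transport; the GraphQL query, the client object, and the controller function are hypothetical stand-ins rather than code from the source:

```python
# Builds on the Movie / MovieRepository / LaunchProduction sketch above.

class GraphqlMovieDataSource(MovieRepository):
    """Data source: an adapter that implements the repository interface by
    talking to a GraphQL aggregation layer (client and query are illustrative)."""

    def __init__(self, client):
        self.client = client  # any object exposing execute(query, variables) -> dict

    def get(self, movie_id: str) -> Movie:
        data = self.client.execute(
            "query($id: ID!) { movie(id: $id) { id title productionDate } }",
            {"id": movie_id},
        )
        raw = data["movie"]
        return Movie(movie_id=raw["id"], title=raw["title"],
                     production_date=raw["productionDate"])

    def get_many(self, movie_ids):
        return [self.get(movie_id) for movie_id in movie_ids]


def launch_production_controller(params, interactor: LaunchProduction):
    """Transport layer: an HTTP controller is only one way to trigger the
    interactor; a cron job or CLI command could call it the same way."""
    movie = interactor.call(params["movie_id"])
    return {"status": 200, "body": {"id": movie.movie_id, "title": movie.title}}
```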

The dependency graph of a hexagonal architecture points inward

In a traditional layered architecture, all dependencies point in one direction: each layer depends on the layer beneath it. The transport layer depends on the interactors, which depend on the persistence layer.

In a hexagonal architecture, all dependencies point toward the center. The core business logic knows nothing about the transport layer or the data sources, yet the transport layer knows how to use the interactors, and the data sources know how to conform to the repository interface.
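In terms of the sketches above, the inward-pointing dependencies can be read straight off the imports; the module layout below is a hypothetical illustration, not Netflix's actual structure:

```python
# domain/launch_production.py    depends only on the repository abstraction:
#     from domain.movie_repository import MovieRepository
#
# data_sources/graphql_movies.py depends on the abstraction it implements:
#     from domain.movie_repository import MovieRepository
#
# transport/http_controllers.py  depends on the interactor it triggers:
#     from domain.launch_production import LaunchProduction
#
# Nothing under domain/ imports from data_sources/ or transport/.
```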

This prepared us for the eventual switch to other Studio systems: when the time came to take that step, swapping the data sources would be easy.

5. Switching data sources

The need to switch data sources arrived sooner than we expected. Our monolith ran into a read bottleneck, and we needed to move the reads for one particular entity to a newer microservice exposed over a GraphQL aggregation layer. The microservice was kept in sync with the monolith, the underlying data was the same, and reads from either service returned identical results.

We managed to switch data reads from a JSON API to a GraphQL data source within 2 hours.

The main reason we could move so quickly was the hexagonal architecture. We had not let any persistence details leak into the business logic, so we simply created a GraphQL data source that implements the repository interface, and reads could start flowing from the new data source with a one-line code change.

With proper abstraction, it is easy to change the data source
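The article does not include the actual diff; under the sketch above, the one-line change might just rebind the repository wiring point, with the hypothetical JsonApiMovieDataSource adapter standing in for the old monolith-backed implementation:

```python
def build_movie_repository(graphql_client) -> MovieRepository:
    # Before the switch, this wiring point returned the monolith-backed adapter:
    #     return JsonApiMovieDataSource(json_api_client)
    # After the switch, the same interface is served by the GraphQL adapter;
    # no interactor or entity code changes.
    return GraphqlMovieDataSource(graphql_client)
```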

At that point we knew that betting on hexagonal architecture had been the right call.

One big advantage of a one-line change is that it reduces release risk: if the downstream microservice had failed during the initial deployment, rolling back would have been just as easy. It also lets us decouple deployment from activation, because the data source to use can be selected through configuration.
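One way to get that decoupling, sketched under the same assumptions (the environment variable name is made up for illustration):

```python
import os


def active_movie_repository(json_api_repo: MovieRepository,
                            graphql_repo: MovieRepository) -> MovieRepository:
    """Choose the data source from configuration, so switching forward or
    rolling back is an activation decision rather than a new deployment."""
    if os.environ.get("MOVIE_READ_SOURCE", "json_api") == "graphql":
        return graphql_repo
    return json_api_repo
```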

6. Hide data source details

One of the advantages of this architecture is that it allows us to encapsulate the implementation details of the data source.

We ran into a situation where we needed an API call that did not exist yet: a downstream service exposed an API for fetching a single resource but had no batch endpoint. After talking to the team that provides the API, we learned the batch endpoint would take some time to deliver, so we decided to work around the limitation while the endpoint was being built.

We defined a repository method that fetches multiple resources given a list of record identifiers; in the initial data source implementation, this method issues multiple concurrent calls to the downstream service, as sketched below. We knew this was a temporary solution, and that the next iteration of the data source would be to switch over to the batch API once it was built.

Our business logic does not need to know about the limitations of a specific data source
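A sketch of that stop-gap under the earlier assumptions: the data source implements the repository's bulk method with concurrent single-resource calls, so only this adapter needs to change once the real batch endpoint ships (the client interface and pool size are illustrative):

```python
from concurrent.futures import ThreadPoolExecutor
from typing import List


class SingleResourceApiMovieDataSource(MovieRepository):
    """Temporary adapter for a downstream API that only supports
    single-resource reads; get_many fans out concurrent calls."""

    def __init__(self, client, max_workers: int = 8):
        self.client = client  # exposes fetch_movie(movie_id) -> dict
        self.max_workers = max_workers

    def get(self, movie_id: str) -> Movie:
        raw = self.client.fetch_movie(movie_id)
        return Movie(movie_id=raw["id"], title=raw["title"],
                     production_date=raw["productionDate"])

    def get_many(self, movie_ids: List[str]) -> List[Movie]:
        # Once the batch endpoint exists, only this method changes;
        # repository callers and business logic are unaffected.
        with ThreadPoolExecutor(max_workers=self.max_workers) as pool:
            return list(pool.map(self.get, movie_ids))
```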

This design allowed us to keep building for business needs without piling up technical debt and without having to change any business logic afterwards.

7. Testing strategy

When we set out to try hexagonal architecture, we knew we needed a testing strategy to go with it. A prerequisite for fast development is a reliable and very fast test suite; we did not see this as a nice-to-have but as a hard requirement.

We decided to test the application on three different layers:

We test the interactors, where the core of the business logic lives, independent of any persistence or transport layer. We use dependency injection and mock all repository interactions. The business logic is tested in detail here, and this is where the majority of our tests live (a sketch follows this list).

We test the data sources to confirm that they integrate correctly with the services they call, conform to the repository interface, and behave sensibly when errors occur. We try to keep the number of these tests to a minimum.

We have integration specs that run through the whole stack, from the transport/API layer through interactors, repositories, and data sources to the important downstream services. These specs verify that everything is "wired" together correctly. When a data source is an external API, we hit the endpoint once and record the response (storing it in git) so that the suite stays fast on subsequent runs. We do not test extensively at this layer; there is usually only one success scenario and one failure scenario per domain action.
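For the first layer, an interactor spec might look like the following, with the repository mocked out via dependency injection (pytest style; the names follow the earlier sketches and are not from the source):

```python
from unittest.mock import Mock

import pytest


def make_movie(production_date="2021-01-01"):
    return Movie(movie_id="m1", title="Example Movie", production_date=production_date)


def test_launch_production_returns_the_movie():
    movies = Mock(spec=MovieRepository)          # mocked repository, no data source
    movies.get.return_value = make_movie()

    result = LaunchProduction(movies=movies).call("m1")

    assert result.title == "Example Movie"
    movies.get.assert_called_once_with("m1")


def test_launch_production_rejects_a_movie_without_a_date():
    movies = Mock(spec=MovieRepository)
    movies.get.return_value = make_movie(production_date="")

    with pytest.raises(ValueError):
        LaunchProduction(movies=movies).call("m1")
```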

We do not test repositories, because they are simple interfaces that data sources implement, and we rarely test entities, because they are plain objects with defined properties. We only test entities when they carry additional behavior (with no persistence layer involved).

There is still room for improvement; for example, in the future we could rely entirely on contract tests instead of pinging the services we depend on. With the test suite written this way, we can run roughly 3,000 specs in a single process in about 100 seconds.

A test suite that runs easily on any machine is a pleasure to work with, and it lets our development team do its day-to-day functional testing without interruption.

8. Delayed decision

We can now easily switch data sources to different microservices. A key benefit is that we can delay decisions about whether and how data is stored inside the application. Depending on a feature's use case, we even have the flexibility to choose the type of storage, whether relational or document.
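One way to defer the storage decision under the earlier sketches is to start behind the same repository interface with an in-memory adapter and swap in a relational or document-store adapter later; the class below is purely illustrative:

```python
from typing import Dict, List


class InMemoryMovieDataSource(MovieRepository):
    """Stand-in data source: lets the application evolve before committing
    to a relational or document store; the repository interface stays the same."""

    def __init__(self):
        self._movies: Dict[str, Movie] = {}

    def save(self, movie: Movie) -> None:
        self._movies[movie.movie_id] = movie

    def get(self, movie_id: str) -> Movie:
        return self._movies[movie_id]

    def get_many(self, movie_ids: List[str]) -> List[Movie]:
        return [self._movies[m] for m in movie_ids if m in self._movies]
```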

When the project started, we knew very little about the system we were building, and we did not want to lock ourselves into an architecture through uninformed decisions that would lead to a project paradox.

The decisions we are making now meet our needs and allow us to act quickly. The biggest advantage of the hexagonal architecture is that it allows our applications to adapt flexibly to future needs.

That concludes "A Practical Analysis of Netflix's Hexagonal Architecture". Thanks for reading. If you want to learn more about the industry, follow the site for more practical articles.
