What is the architecture of the web design pattern


This article introduces the architectures behind common web design patterns. Many people have questions about this topic, so I have consulted various materials and organized them into simple, practical explanations. I hope it helps resolve your doubts about "what architectures do web design patterns have?" Please read on.

I. What is architecture?

Ask ten people this question and you will get eleven answers, because the eleventh is the compromise everyone settles on. Ha ha. In my understanding, architecture is the skeleton.

The human body is supported mainly by its skeleton, with the muscles, nerves and skin layered on top. Architecture is as important to software as the skeleton is to the human body.

II. What is a design pattern?

I have asked this question in interviews dozens of times and received all kinds of answers. In my view, a pattern is distilled experience. With that experience we can apply specific designs, and combinations of designs, to specific situations, which greatly saves design time and improves efficiency.

As a veteran programmer, I have done a fair amount of system architecture design. Below I share the architectural design patterns I have used in my own work, in the hope that they save you a few detours. Overall there are eight:

1. Single database, single application mode: the simplest one; you have probably seen it before.

2. Content distribution mode: very widely used today.

3. Query separation mode: for services with heavy concurrent queries.

4. Microservice mode: suitable for decomposing complex business domains.

5. Multi-level cache mode: making full, flexible use of caching.

6. Sub-database and sub-table (sharding) mode: solves single-table and single-database bottlenecks.

7. Elastic scaling mode: one way to handle uneven, peak-and-valley traffic.

8. Multi-machine-room mode: an approach to high availability and high performance.

III. Single database, single application mode

This is the simplest design pattern. Most undergraduate graduation projects and many small applications follow it. The general design is shown in the figure below:

As the figure shows, this model usually has one database, one business application layer, and one back-office management system. All business logic lives in that single application layer, and all data sits in one database. A slightly better variant adds database backup or synchronization. Simple as it is, it is far from useless.

Advantages: simple structure, fast development, easy to implement; suitable for a first product version and other prototype-validation needs.

Disadvantages: poor performance, essentially no high availability, poor scalability; unsuitable for large-scale deployment and other production environments.

IV. Content distribution mode

Almost all large websites adopt this pattern to some degree. A common scenario is using a CDN to distribute static resources such as web pages, images, CSS and JS to the servers closest to users. The general design is shown in the figure below:

As the figure shows, this model adds a CDN and a cloud object storage service (OSS, such as Qiniu) on top of the single-database model. A classic flow, taking image upload and viewing as the example (a minimal sketch of the application side follows the list):

1. Upload: the user selects an image on the local machine and uploads it.

2. The application uploads the image to cloud storage (OSS), which returns a URL for the image.

3. The application stores that URL string in the business database; the upload is complete.

4. View: the application reads the image URL from the business database.

5. The client resolves the URL's domain through DNS.

6. The intelligent DNS resolves the domain to the address of the server (or cluster) A closest to the user.

7. Server A returns the image to the client.

8. The client displays the image; viewing is complete.
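To make the division of responsibilities concrete, here is a minimal sketch of the application side of that flow. It assumes hypothetical `oss_upload`, `db_save_url` and `db_get_url` helpers (backed here by in-memory dicts) standing in for a real OSS SDK and database layer; the point is that the application only stores and serves the URL, while the CDN delivers the image bytes.

```python
# Minimal sketch of the upload/view flow. The helpers below are stand-ins
# for a real OSS SDK and database layer, not any specific vendor API.

import hashlib

_FAKE_OSS = {}   # stands in for cloud object storage
_FAKE_DB = {}    # stands in for the business database

def oss_upload(image_bytes: bytes) -> str:
    """Upload the image to OSS and return the CDN-facing URL (simulated)."""
    key = hashlib.md5(image_bytes).hexdigest()
    _FAKE_OSS[key] = image_bytes
    return f"https://cdn.example.com/images/{key}.jpg"

def db_save_url(image_id: str, url: str) -> None:
    """Store only the URL string in the business database."""
    _FAKE_DB[image_id] = url

def db_get_url(image_id: str) -> str:
    """Read the URL back when the user wants to view the image."""
    return _FAKE_DB[image_id]

def handle_upload(image_id: str, image_bytes: bytes) -> str:
    # Upload side: steps 1-3
    url = oss_upload(image_bytes)
    db_save_url(image_id, url)
    return url

def handle_view(image_id: str) -> str:
    # View side: step 4. Steps 5-8 (DNS resolution and the CDN edge)
    # happen on the client and network side, outside the application.
    return db_get_url(image_id)

if __name__ == "__main__":
    handle_upload("avatar-42", b"\x89PNG...")
    print(handle_view("avatar-42"))   # the application only ever handles the URL
```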

As the flow above shows, the key to this pattern is the intelligent DNS, which resolves to the server closest to the user. It works roughly like this: derive the request's location B from the requester's IP, then compute (or look up in configuration) the server C that is closest to B or has the shortest round-trip time, and return C's IP address to the requester. The pros and cons of this pattern are as follows:

Advantages: fast resource downloads with little development or configuration effort; it also reduces the storage pressure on back-end servers and cuts bandwidth usage.

Disadvantages: OSS and CDN pricing is still somewhat expensive, so this suits small and medium applications best; in addition, network transmission delay and CDN synchronization strategies introduce some consistency and slow-update problems.

V. Query separation mode

This pattern mainly addresses a single database under so much pressure that business slows down or even times out, with query times growing longer and longer, including query requests that consume a great deal of the database server's computing resources. It can be seen as the upgraded version of the single-database application mode, and it is an almost inevitable step in the evolution of a technical architecture.

The general design of this pattern is shown below:

As the figure shows, this model adds a few parts compared with the single-database application and content distribution modes: master-slave separation of the business database, and the introduction of Elasticsearch (ES). Why? The pain points it solves are described in the concrete business scenarios below.

Scenario 1: full-text keyword search

Most applications have this requirement. With traditional database techniques, most people reach for a SQL LIKE query; the more advanced approach is to tokenize the text first and then index the tokens. The poor performance of such SQL statements and the full-table scans they trigger cause serious performance problems, so this approach is rarely seen nowadays.

ES is easier to configure and use than Solr, which is why it is chosen here. ES also supports horizontal scale-out with, in theory, no performance bottleneck, plus plugins, custom tokenizers and so on, making it highly extensible. With ES you can not only replace the database for full-text retrieval but also implement paging, sorting, grouping, faceting and more (please study the details yourself). How is it used? A typical flow goes like this (a minimal sketch follows the list):

1. The server stores a piece of business data in the database.

2. The server asynchronously sends the data to ES.

3. ES places the record into its own index according to its rules and configuration.

4. When the client queries, the server forwards the request to ES, gets the results, assembles them as required, and returns them to the client.
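A minimal sketch of that flow, assuming the official elasticsearch-py 8.x client and a reachable ES cluster; `db_insert` is a hypothetical stand-in for the real database write, and the index and field names are illustrative.

```python
# Write to the database, index asynchronously into ES, then query ES.

import threading
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

def db_insert(article: dict) -> None:
    """Hypothetical primary write to the business database (source of truth)."""
    pass  # e.g. INSERT INTO article ...

def index_async(article: dict) -> None:
    """Step 2: push the record to ES off the request path."""
    def _index():
        # Step 3: ES stores it in its own index for full-text search
        es.index(index="articles", id=article["id"], document=article)
    threading.Thread(target=_index, daemon=True).start()

def save_article(article: dict) -> None:
    db_insert(article)     # step 1: the database remains the source of truth
    index_async(article)   # step 2: ES is an eventually consistent copy

def search_articles(keyword: str, page: int = 0, size: int = 10) -> list:
    """Step 4: full-text search, paging and sorting handled by ES, not SQL LIKE."""
    resp = es.search(
        index="articles",
        query={"match": {"content": keyword}},
        from_=page * size,
        size=size,
    )
    return [hit["_source"] for hit in resp["hits"]["hits"]]
```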

How to use it in practice depends on your situation; please combine and choose according to your actual needs.

Scenario 2: a large number of ordinary queries

This scenario covers most of the ordinary auxiliary queries in a business: checking the balance before a withdrawal, looking up a user record by user ID, fetching the user's latest withdrawal record, and so on. We run them every day and in large volume. At the same time there are plenty of write requests, so a flood of writes and reads hits the same database; eventually the database collapses, the system fails, the boss is furious, you get fired, you cannot pay the mortgage, you sleep on the street, and your wife runs off with someone else.

We dare not even think about it, so the database load has to be spread out. The mature industry solution is to separate reads and writes: writes go to the master database, reads go to the read (slave) databases. The pressure is thus spread across different databases, and if one read replica cannot keep up, you can add more and scale horizontally: a real remedy! So how is it used? A typical flow goes like this (a minimal sketch follows the list):

1. The server writes a piece of data to the master database.

2. The data is replicated to the slave databases synchronously, semi-synchronously, or asynchronously.

3. When the server reads, it reads directly from a slave (read) database.
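A minimal sketch of read/write splitting, assuming pymysql as the driver; the hostnames, credentials and table are illustrative, and the replica choice here is deliberately naive (random pick, no health checks).

```python
import random
import pymysql

MASTER_DSN = dict(host="master.db.internal", user="app", password="***", database="biz")
REPLICA_DSNS = [
    dict(host="replica1.db.internal", user="app", password="***", database="biz"),
    dict(host="replica2.db.internal", user="app", password="***", database="biz"),
]

def write_conn():
    """All writes go to the master."""
    return pymysql.connect(**MASTER_DSN)

def read_conn():
    """Reads are spread across replicas (very naive load balancing)."""
    return pymysql.connect(**random.choice(REPLICA_DSNS))

def withdraw(user_id: int, amount: int) -> None:
    conn = write_conn()
    try:
        with conn.cursor() as cur:
            cur.execute("UPDATE account SET balance = balance - %s WHERE user_id = %s",
                        (amount, user_id))
        conn.commit()
    finally:
        conn.close()

def get_balance(user_id: int) -> int:
    # Note: right after a write, a replica may lag behind the master
    # (the delay problem discussed below).
    conn = read_conn()
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT balance FROM account WHERE user_id = %s", (user_id,))
            row = cur.fetchone()
            return row[0] if row else 0
    finally:
        conn.close()
```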

It is fairly simple, but the sharper, more thoughtful readers will have spotted the problem, which also applies to the first scenario above: replication delay. If I try to read data before it has reached the slave database, I will not find it, and that causes trouble. Different companies solve this in different ways; a common approach is to read from the master in the cases where staleness cannot be tolerated, which naturally comes with its own preconditions. I will not walk through the specific solutions one by one here; I may cover them in detail in a later article.

As for database replication modes, please study them on your own; there is too much to cover here. Time to summarize the pros and cons of this pattern, as follows:

Advantages: reduces pressure on the database, offers theoretically unlimited read scalability, indirectly improves write performance, and provides dedicated solutions for querying, indexing and full-text (tokenized) search.

Disadvantages: data delay, and the difficulty of guaranteeing data consistency.

VI. Microservice mode

The patterns above look good: the performance problem is solved, I am not sleeping on the street, and my wife is still mine. But the inherent complexity of software means that beyond performance there is a long list of other problems waiting for us, such as high availability and robustness, plus the endless turf wars and wrangling between departments that make programmers' lives even harder. So, onward...

The microservice pattern is a recent hot topic. Companies of every stripe, large and small, at home and abroad, advocate and practice it, yet most do not really understand why they are doing it or what the trade-offs are. Here I will give my own view based on personal practice; take it or leave it. As the business and the team grew, we ran into the following problems:

1. The volume of write requests against the single database grew enormously, putting ever more pressure on it.

2. Once the database went down, the entire business was dead.

3. The business code kept growing, all in one Git repository, and became harder and harder to maintain.

4. The code rotted badly; the smell grew stronger and stronger.

5. Releases became more and more frequent; often a small change to one feature required recompiling and shipping the whole project.

6. There were more and more departments. Which department should change which part of the one big project?

7. Some peripheral systems connected directly to the database, so any change to the schema had to be announced to all of them, even those unaffected by the change.

8. Every application server needed every permission opened (network, FTP, and so on), because every server deployed the same application.

9. As the architect, I had lost control of the system.

To solve the problems above we adopted the microservice pattern, whose general design is as follows:

As the figure shows, I split the business into blocks, cut it vertically into independent systems, and gave each system its own database, cache, ES and other supporting components. Systems interact in real time via RPC and asynchronously via MQ; through this combination they together deliver the functionality of the whole system (a minimal sketch of the two interaction styles follows).
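A minimal sketch of the two interaction styles, under the assumption that the synchronous call is plain HTTP via `requests` and that an in-process queue stands in for a real MQ such as RabbitMQ or Kafka; the service names and URLs are hypothetical.

```python
import json
import queue
import threading
import requests

ORDER_EVENTS = queue.Queue()   # stand-in for a real message queue

# --- synchronous path: the order service calls the user service (RPC-style) ---
def get_user(user_id: int) -> dict:
    resp = requests.get(f"http://user-service.internal/api/users/{user_id}", timeout=2)
    resp.raise_for_status()
    return resp.json()

# --- asynchronous path: publish an event, another subsystem consumes it ---
def publish_order_created(order: dict) -> None:
    ORDER_EVENTS.put(json.dumps(order))   # a real MQ would be published to here

def notification_worker() -> None:
    """A separate subsystem reacting to order events without blocking the caller."""
    while True:
        event = json.loads(ORDER_EVENTS.get())
        print(f"send notification for order {event['order_id']}")

threading.Thread(target=notification_worker, daemon=True).start()

def create_order(user_id: int, amount: int) -> dict:
    user = get_user(user_id)              # synchronous: result needed right now
    order = {"order_id": 1001, "user": user["id"], "amount": amount}
    # ... write the order to this service's own database ...
    publish_order_created(order)          # asynchronous: fire and forget
    return order
```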

So does this really solve the problems above? No dodging; let's go through them one by one.

For problem 1: because the system is split into multiple subsystems, each with its own database instance, the load is spread out and each database carries less pressure.

For problem 2: if the database of subsystem A goes down, only system A and the features that depend on it are affected, not everything; the "one database down, all functions down" situation is gone.

For problems 3 and 4: these are also solved by the split. Each subsystem has its own independent Git repository, so they no longer interfere with each other. Shared modules can be handled as libraries, services or platforms.

For problem 5: when subsystem A changes and needs to go live, we only compile and deploy A; no other system has to do anything.

For problem 6: this conforms to Conway's law. Whatever our department is responsible for producing, it exposes as services; we only need to do our own duties and build our own features well.

For problem 7: anyone who needs our data gets it through published interfaces, which hides the underlying database schema and even the data source. We only need to keep our interface contracts stable, and new interfaces do not affect old ones.

For problem 8: different subsystems need different permissions, so this problem is solved gracefully.

For problem 9: to keep the complexity under control, I only need to manage the big picture (system boundaries, interfaces and major flows), then divide and conquer, combining the vertical and horizontal views.

For now, all the problems are solved. Bingo!

However, many side effects follow, such as the need for extremely high stability and performance from RPC and MQ, network latency, data consistency, and so on.

The hardest thing about this pattern is judging the right granularity. Do not over-split. I have seen the functionality of a single subsystem, a few hundred methods, carved into hundreds of subsystems, which is simply excessive. A workable rule of thumb in practice: do not split unless there is a truly compelling reason.

Advantages: relatively high performance, strong scalability, high availability; suitable for a medium-sized company's architecture.

Disadvantages: complex and hard to get right. It requires someone at a high level who can control the overall direction, the major flows and the technology as a whole, while each subsystem still needs focused development. If the granularity is misjudged or the pattern is abused, it backfires.

VII. Multi-level cache mode

This pattern is a commonly used strategy for handling very high query pressure. The basic idea is to cache whatever can be cached, at every link in the chain, as shown in the figure below:

As the figure shows, caches are generally added in three places: at the client, at the API gateway, and in the specific back-end business service. Each is described below:

Caching at the client: a cache here is arguably the best, with no latency at all, because there is no need to traverse a long network path to the back end to fetch data, which would inflate load times and drive customers away. Even with CDN support there is still some network delay between client and CDN, small as it is. The specific techniques depend on the client: for the web there are the browser's local cache, cookies, Storage and cache-control strategies; for apps there are local databases, local files, local memory and in-process caches. Interested readers can dig into each of these. If the client cache misses, the request goes on to the back end to fetch the data, usually through an API gateway, and adding a cache there is very important as well.

Back-end business caching: not much needs saying here; Redis, Memcache, in-JVM caches and the like are familiar to almost everyone (a minimal cache-aside sketch with Redis follows).
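A minimal cache-aside sketch for the back-end business layer, assuming the redis-py client and a hypothetical `load_user_from_db` function standing in for the real database query; the key format and TTL are illustrative.

```python
import json
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)
USER_TTL_SECONDS = 300

def load_user_from_db(user_id: int) -> dict:
    """Hypothetical slow path: hit the database only on a cache miss."""
    return {"id": user_id, "name": "example"}

def get_user(user_id: int) -> dict:
    key = f"user:{user_id}"
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)          # cache hit: no database work at all
    user = load_user_from_db(user_id)      # cache miss: go to the database
    # setex stores the value with a TTL so stale entries eventually expire
    r.setex(key, USER_TTL_SECONDS, json.dumps(user))
    return user
```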

In practice you combine caching at every level according to the concrete situation, so that as many requests as possible are answered before they ever reach the back-end business service: this reduces the load on back-end servers, saves bandwidth, and improves the user experience. Are these three the only places to add a cache? Learn the principle and apply it flexibly; the internal skill matters more than the sword technique. Pros and cons of this pattern:

Advantages: absorbs a large volume of read requests and reduces back-end pressure.

Disadvantages: data consistency problems become more prominent, and cache avalanches are a risk: if the client caches expire and the API gateway cache fails at the same time, the full flood of requests lands on the back-end business system all at once, with predictable consequences.

VIII. Sub-database and sub-table (sharding) mode

This pattern mainly addresses a single table under so much write, read and storage pressure that business slows down or times out, transactions fail, and capacity runs out. There are two flavors, horizontal and vertical splitting; horizontal splitting is the focus here. This pattern, too, is an almost inevitable step in the evolution of a technical architecture.

The general design of this pattern is shown in the following figure:

As the highlighted part of the figure shows, one table is split across several different databases to share the load. Too abstract? Ha, then let's explain it in detail. First, a few concepts:

Host: hardware, i.e. a physical machine or a virtual machine with its own CPU, memory, disk and so on.

Instance: a database instance, such as one MySQL server process. A host can run multiple instances; different instances are separate processes listening on different ports.

Library: a collection of tables. A school library, for example, might contain a teacher table, a student table, a canteen table and so on, all in one library. An instance can hold multiple libraries, distinguished by library name.

Table: a table inside a library; if this needs explaining, there is no need to read further.

So how is a single table actually split up? Along what dimension, and to where? Here are a few practices from real work:

Host: this is the most important point. Fundamentally, sharding exists because computing and storage resources are insufficient, and those resources are ultimately provided by physical machines (hosts). If there are no spare physical resources to draw on, splitting has little effect.

Instance: an instance limits the number of connections, and its CPU, memory, disk and network IO are also indirectly constrained by the OS. Hot instances appear, where some are very busy while others sit idle. A typical symptom is that one slow table fills the connection pool and drags down unrelated business. In that case, moving tables onto different instances helps.

Library: splitting into multiple libraries is generally driven by the limit on how many tables a single database can reasonably hold.

Table: a single table under too much pressure, with oversized indexes, oversized data files, or table-level lock contention. Following the analysis above, the single table is split horizontally into multiple tables.

In large-scale deployments there is usually one instance per host and one library per instance, so library = instance = host; hence the shorthand "sub-database and sub-table".

Now that the basic theory is clear, how is it actually done? How does the logic run? Let's work through an example.

The requirement is simple: a user table with 100 million rows is having query, insert and storage problems. What do we do?

First of all, analyze the problem, which is obviously caused by the large amount of data.

Next, the design: split into 10 libraries so that each holds 10 million rows. Ten million rows in a single table is still a bit large and leaves no headroom for growth, so split each library into 100 tables, bringing each table down to about 100,000 rows, which is comfortable for queries, index updates, single-file size and open speed. Then call the IT department, ask for 10 physical machines, and scale out the databases.

Finally, the logical implementation, which is where most of the learning lies. When writing data you need to know which sub-database and sub-table it goes to, and the same applies when reading. So you need a request router that dispatches and translates requests to the right database and table, usually governed by routing rules (a minimal routing sketch follows).
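A minimal sketch of one possible routing rule for the 10-library, 100-table layout described above; the naming convention, the modulo scheme and `route_insert` are illustrative assumptions, not a prescribed implementation.

```python
NUM_DBS = 10       # libraries user_00 .. user_09
NUM_TABLES = 100   # tables user_000 .. user_099 inside each library

def route(user_id: int) -> tuple[str, str]:
    """Map a user_id to (database name, table name) by modulo routing."""
    db_index = user_id % NUM_DBS
    table_index = (user_id // NUM_DBS) % NUM_TABLES
    return f"user_{db_index:02d}", f"user_{table_index:03d}"

def route_insert(user_id: int, name: str) -> str:
    db, table = route(user_id)
    # A real router would pick the connection for `db` from a pool and use
    # parameterized queries; the string here is only for illustration.
    return f"INSERT INTO {db}.{table} (user_id, name) VALUES ({user_id}, '{name}')"

if __name__ == "__main__":
    print(route(123456789))        # -> ('user_09', 'user_078')
    print(route_insert(42, "bob"))
```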

What do you think, easy, right? Ha ha. As for the problems this pattern brings, the main one is transactions: once data is spread across databases and tables, a transaction can no longer be completed in one place, and full distributed transactions are too cumbersome, so you need strategies to make sure transactions still get done, such as eventual consistency, replication, or purpose-built designs. Then comes the rework of business code: some join queries must change, and single-table ORDER BY and GROUP BY statements need special handling. These side effects cannot be explained in a sentence or two; I will cover them separately when I have time.

It's time to summarize the pros and cons of this model, as follows:

Advantages: reduces the pressure on individual tables and databases.

Disadvantages: transactions are hard to guarantee, and a lot of business logic has to be modified.

IX. Elastic scaling mode

This pattern mainly addresses sudden traffic that arrives faster than horizontal scaling can respond (or that cannot be absorbed by scaling at all), hurting the business or taking the whole site down. It is a relatively advanced technique that major companies are still researching and trialling. Even today, architects who think in these terms are well placed to command a higher salary, let alone those who have actually practiced it or built the underlying systems; so, you know...

The general design of this pattern is shown below:

As the figure shows, an auto-scaling service has been added to dynamically add and remove instances. The principle is simple, but what problem does this pattern solve? Let's start with where it comes from and why it matters.

Before every Double 11, 618 or other big promotion, we used to prepare for the incoming traffic like this: provision ten times the machines (or more) in advance, needed or not, just in case, wasting a great deal of resources. Every machine had to be configured, debugged and load-tested before it was usable, wasting manpower and materials and inviting mistakes. If not enough machines were prepared, the whole exercise had to be repeated on overtime, which is even more error-prone, annoys the leadership, and keeps you from going home to your wife, and then your wife will... ha ha.

After Singles' Day we had to scale back down by hand, which is just as hard. There are usually several promotions a year, so the cycle repeats endlessly. It is genuinely tiresome.

Most seriously, a sudden, unexpected burst of heavy traffic catches us off guard, and scaling out in the middle of the night becomes routine. Because of this we get lazy and keep ever more machines warm, and end up with racks of machines sitting at 1% CPU utilization.

Believe me, if you were the boss you would be shocked!

Haha, so how do we change this situation? Read on.

To this end, we first consolidate all computing resources into a resource pool, then use policies, monitoring and services to acquire resources from the pool dynamically and return them when finished, so other systems can use them. The two mature resource-pool options are VMs and Docker, each with its own strong ecosystem. Monitoring covers CPU, memory, disk, network IO, quality of service and so on; combining these signals with reservation, scale-out and scale-in policies makes simple auto-scaling achievable (a minimal policy sketch follows). How about that? Pretty neat, isn't it? I will go deeper in later articles; for now, the pros and cons of this pattern are as follows:
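A minimal sketch of a threshold-based scaling policy, assuming hypothetical `get_avg_cpu`, `add_instances` and `remove_instances` hooks into the resource pool (VMs or Docker); the thresholds, limits and cooldown are illustrative.

```python
import time

SCALE_OUT_CPU = 0.70     # add capacity above 70% average CPU
SCALE_IN_CPU = 0.30      # remove capacity below 30% average CPU
MIN_INSTANCES = 2
MAX_INSTANCES = 50
COOLDOWN_SECONDS = 300   # avoid flapping: wait between scaling actions

def get_avg_cpu() -> float:
    """Hypothetical: average CPU utilization across the service's instances."""
    return 0.5

def add_instances(n: int) -> None: ...       # hypothetical resource-pool hook
def remove_instances(n: int) -> None: ...    # hypothetical resource-pool hook

def autoscale_loop(current: int) -> None:
    last_action = 0.0
    while True:
        cpu = get_avg_cpu()
        now = time.time()
        if now - last_action >= COOLDOWN_SECONDS:
            if cpu > SCALE_OUT_CPU and current < MAX_INSTANCES:
                add_instances(1)
                current += 1
                last_action = now
            elif cpu < SCALE_IN_CPU and current > MIN_INSTANCES:
                remove_instances(1)
                current -= 1
                last_action = now
        time.sleep(30)   # evaluate the policy every 30 seconds
```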

Advantages: elasticity, on-demand computing, and full use of the company's computing resources.

Disadvantages: the application must be redesigned for horizontal scaling at the architecture level, it depends on a great deal of low-level supporting infrastructure, and it demands a high level of technical skill, engineering strength and application scale.

X. Multi-machine-room mode

This model mainly solves the problems of high performance and high availability in different regions.

As the user base grows and spreads around the world, deploying the servers in a single location, say Beijing, means users in the United States get a very slow experience, because every request crosses submarine cables and takes on the order of a second, which is bad for the user experience. The answer is to deploy across multiple machine rooms. The general design of this pattern is shown in the figure below:

As shown in the figure above, a typical user request process is as follows:

1. The user requests domain A.

2. Intelligent DNS resolves A to the nearest machine room B.

3. The services in machine room B handle the request for A.

Looks easy, nothing to it? In reality the problems here are not as simple as they appear. Let's take them one by one.

First there is data synchronization. Data generated in China has to be synchronized to the United States, and vice versa, which raises issues of data versioning, consistency, lost updates, deletions and so on.

Second is the problem of request routing between multiple machine rooms in one region. In the figure above, China has machine rooms in Beijing and Hangzhou; if the Beijing room goes down, all requests destined for it must be re-routed to Hangzhou. The same problem exists in every other region (a minimal failover-routing sketch follows).
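A minimal sketch of health-check-based failover between two machine rooms in one region, assuming hypothetical endpoints and a `/healthz` probe; real systems usually do this at the DNS/GSLB or traffic-routing layer rather than in application code.

```python
import requests

ROOMS = {
    "beijing":  "https://bj.api.example.com",
    "hangzhou": "https://hz.api.example.com",
}
PREFERRED_ORDER = ["beijing", "hangzhou"]   # nearest machine room first

def is_healthy(base_url: str) -> bool:
    """Probe a room's health endpoint; treat any error as unhealthy."""
    try:
        return requests.get(f"{base_url}/healthz", timeout=1).status_code == 200
    except requests.RequestException:
        return False

def pick_room() -> str:
    """Return the first healthy room, falling back down the preference list."""
    for room in PREFERRED_ORDER:
        if is_healthy(ROOMS[room]):
            return ROOMS[room]
    raise RuntimeError("no healthy machine room available")

def fetch_profile(user_id: int) -> dict:
    base = pick_room()
    return requests.get(f"{base}/api/users/{user_id}", timeout=2).json()
```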

So the multi-machine-room mode, i.e. multi-site active-active ("live in multiple places"), is not as simple as it looks; this is only a starting point, and the specific pitfalls will be covered in a later article.

It is time to summarize the advantages and disadvantages of this model, as follows:

Advantages: high availability, high performance, multi-site active-active deployment.

Disadvantages: data synchronization, data consistency, and request routing all become hard problems.

That concludes this study of the architectures behind web design patterns. I hope it resolves your doubts; pairing theory with practice is the best way to learn, so go and try it. If you want to keep learning more, please continue to follow this site; more practical articles are on the way.
