Many readers may not be familiar with the ideas in this article, "Scheme Analysis of Server Data Precipitation", so the editor has summarized the following content in detail, with clear steps and some reference value. I hope you gain something from reading it; let's take a look at this "server data precipitation scheme analysis" article.
At the data level, data can be divided into several dimensions: flow (pipeline) data, state data, and configuration data. Flow data has the lowest dependency; it is basically an append-only expansion along the time dimension, so from a data-safety perspective the business impact of losing some of it is relatively limited. Configuration data sits at the data-dictionary level, and its scope of influence is much smaller. The key is the state data, which is the core, because it only records the current state and each change overwrites the previous one; in a scenario such as account balances, the impact of this dimension is very large.
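As a rough sketch of what these dimensions can look like, assuming hypothetical table names and data types (the columns of the state table follow the balance example used later in this article):

    -- flow (pipeline) data: append-only records that grow along the time dimension
    CREATE TABLE account_flow (
        flow_id     BIGINT PRIMARY KEY,
        account_id  BIGINT,
        amount      DECIMAL(16, 2),
        created_at  DATETIME
    );

    -- state data: the current row per account is the one with status = 1
    CREATE TABLE account_balance (
        account_id      BIGINT,
        balance         DECIMAL(16, 2),
        effective_date  DATETIME,
        expire_date     DATETIME,
        status          TINYINT
    );

    -- configuration data: data-dictionary level, small and rarely changed
    CREATE TABLE account_type_dict (
        type_id    INT PRIMARY KEY,
        type_name  VARCHAR(64)
    );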
From the perspective of data architecture, we hope, as far as possible, to precipitate the changes of state data into flow-style records, so that each change is preserved; for the moment, let's call this historical data.
For example, suppose we update the state data so that the balance becomes 200.
account_id | balance | effective_date | expire_date    | status
100        | 100     | 20171004010100 | 20181104010200 | 1
Can be transformed into:
account_id | balance | effective_date | expire_date    | status
100        | 100     | 20171004010100 | 20171104010200 | 0   --> update statement
100        | 200     | 20171104010200 | 20181104010200 | 1   --> insert statement
So one update has clearly been transformed into two statements, an update and an insert. From the point of view of the data life cycle, this does give a certain degree of protection, and it is a design approach that we need to emphasize together with developers.
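Written as concrete SQL, and assuming the state table is named account_balance (a hypothetical name; the columns and values come from the example above), the transformation looks roughly like this:

    -- expire the current row (the "update statement")
    UPDATE account_balance
       SET expire_date = '20171104010200',
           status      = 0
     WHERE account_id = 100
       AND status     = 1;

    -- add the new current row (the "insert statement")
    INSERT INTO account_balance
           (account_id, balance, effective_date, expire_date, status)
    VALUES (100, 200, '20171104010200', '20181104010200', 1);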
Then let's take a look at the processing plans and ideas for this kind of historical data.
Generally speaking, from a design point of view, we hope to handle the generation of historical data as much as possible at the program level, that is, to interpret the changes to this data in the application, which can be packaged in a transaction or split into asynchronous steps according to the requirements. This seems a very natural way, and it is in fact a relatively ideal one; in the diagram I deliberately drew, it corresponds to the "strong application" side.
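As one possible sketch of the "split into asynchronous steps" variant (the balance_change_queue table and its columns are assumptions for illustration, not part of the original design), the application updates the live state and only records the change, leaving a background job to materialize the history rows later:

    BEGIN;

    -- update the live state in place
    UPDATE account_balance
       SET balance        = 200,
           effective_date = '20171104010200'
     WHERE account_id = 100;

    -- record the change; a background job later turns it into history rows
    INSERT INTO balance_change_queue (account_id, old_balance, new_balance, changed_at)
    VALUES (100, 100, 200, '20171104010200');

    COMMIT;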
If, on the other hand, the generation of historical data is transparent to the application, that is, the application does not need to pay attention to this logic, then the logic sinks to the database level, and the HIST part in my diagram becomes larger. If this logic is handled at the database level, a natural way is stored procedures, together with a series of logical processing around them. For example, if one type of business needs these historical data to be generated and other similar businesses follow the same pattern, then a more general mechanism is required. From the database level, however, this is a heavyweight, system-level implementation, because once this logic is bound into the database, scaling it out becomes a difficult problem.
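A minimal sketch of the database-level approach using a stored procedure (MySQL-flavored syntax; the procedure name and parameters are assumptions, and transaction control is left to the caller):

    DELIMITER //
    CREATE PROCEDURE update_balance_with_hist(
        IN p_account_id   BIGINT,
        IN p_new_balance  DECIMAL(16, 2),
        IN p_change_time  DATETIME,
        IN p_expire_date  DATETIME
    )
    BEGIN
        -- expire the current state row
        UPDATE account_balance
           SET expire_date = p_change_time,
               status      = 0
         WHERE account_id = p_account_id
           AND status     = 1;

        -- insert the new current row; the application only calls the procedure
        INSERT INTO account_balance (account_id, balance, effective_date, expire_date, status)
        VALUES (p_account_id, p_new_balance, p_change_time, p_expire_date, 1);
    END //
    DELIMITER ;

The application would then simply run CALL update_balance_with_hist(100, 200, '20171104010200', '20181104010200'); which keeps the history logic out of the application but, as noted above, binds it tightly to the database.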
There is another, compromise way: the logic sinks to the data-processing layer. The database layer does not need to care about the meaning of the data; the data layer simply writes and transfers it, and historical data can be generated either by packaging a transaction at the program layer or transparently from the OLTP data. The key point is that historical data and OLTP data are kept in the same table. Of course, the data in this table will grow, so we need an offline data-archiving job, for example keeping only the most recent 7 days of data online, while historical data may be retained for months or even years; in this way the historical data can be distributed out, and the practical value and cost need to be balanced.
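A sketch of that offline archiving step under the same assumptions (account_balance_hist is a hypothetical history table with the same structure; in practice this would run as a scheduled job, ideally in batches or inside one transaction):

    -- copy history rows (status = 0) older than 7 days into the history table
    INSERT INTO account_balance_hist
    SELECT *
      FROM account_balance
     WHERE status = 0
       AND expire_date < DATE_SUB(NOW(), INTERVAL 7 DAY);

    -- then remove them from the online table
    DELETE FROM account_balance
     WHERE status = 0
       AND expire_date < DATE_SUB(NOW(), INTERVAL 7 DAY);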
That is the content of this article on "Scheme Analysis of Server Data Precipitation". I believe everyone now has a certain understanding of it, and I hope the content shared by the editor is helpful to you. If you want to learn more about related topics, please follow the industry information channel.