This article is a practical summary of sharding large tables (splitting databases and tables). Many teams hit the same dilemmas in real projects, so this walkthrough covers how to handle those situations. I hope you read it carefully and get something out of it!
1. Preface
Why do we need to shard databases and tables at all? You probably already have some idea.
The storage and access of massive data has become a bottleneck for MySQL: ever-growing business data puts considerable load on the database and raises high demands on the stability and scalability of the system.
Meanwhile, the resources of a single server (CPU, disk, memory, etc.) are always limited, so the amount of data a single database can hold, and its processing capacity, will eventually hit a ceiling.
At present there are generally two options:
1) Replace MySQL with a different storage engine, for example distributed storage such as HBase, PolarDB, or TiDB.
2) If you want to keep MySQL for whatever reason, take the second approach: shard the database and tables.
As noted at the beginning, there are already plenty of articles online about sharding, with detailed coverage of the individual knowledge points, so this article will not dwell on the sharding schemes themselves.
Instead, it focuses on the complete process from architecture design to release, and summarizes the pitfalls and best practices. It consists of five parts:
Business refactoring
Storage architecture design
Transformation and launch
Stability guarantee
Project management
In particular, the best practices in each stage are hard-won lessons learned through blood and tears.
2. Phase 1: business refactoring (optional)
For services whose microservice boundaries are already reasonably drawn, sharding usually only requires attention to the storage architecture change, or business changes in a few individual applications; the "business refactoring" stage is usually unnecessary. That is why this stage is marked "optional".
The first difficulty of this project, however, was exactly business refactoring.
The split involved two large tables, A and B, each holding nearly 80 million rows in a single table, left over from the monolith era. There was no proper domain-driven / microservice design from the start, and the logic has sprawled badly: to date, 50+ online services and 20+ offline jobs read and write these tables directly.
Therefore, ensuring that the business transformation is thorough and complete, with nothing missed, is the top priority.
In addition, tables A and B each have twenty or thirty fields, and their primary keys map one-to-one, so in this project the two tables also need to be refactored and merged, eliminating redundant and useless fields.
2.1 Query statistics
For online traffic, query the distributed tracing system with the table name as the search condition, aggregate by service, find every service involved, and record the relevant teams and services in a document.
Note that many tables are used not only by online applications but also by offline algorithms and data-analysis jobs. These must be catalogued as well, and cross-team communication and research done up front, so that normal data analysis is not broken after the switch.
2.2 Query splitting and migration
Create a jar package and, together with each service's owner, migrate all the relevant queries in that service into this jar package (in this project the jar is called projectdb).
This is version 1.0.0-SNAPSHOT.
Then change every xxxMapper.xxxMethod() call in the original services into a projectdb.xxxMethod() call.
This has two benefits:
It makes the subsequent query-splitting analysis easier.
It makes it easy to later replace the SQL inside the jar with RPC calls to the new middle-platform service: the business side only needs to upgrade the jar version to switch from SQL to RPC.
In practice this step took several months. Be sure to comb through every service and migrate comprehensively; nothing can be left out, otherwise the later split analysis will be incomplete and relevant fields may be missed.
The queries are migrated mainly because this project involves so many services; collecting them into one jar makes the later transformation much easier. If an actual sharding project only involves one or two services, this step can be skipped.
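To make the idea concrete, here is a minimal, hypothetical sketch of such a query-facade jar. The type and method names (ProjectDbClient, TableAMapper, RecordA, findByPk1) are placeholders invented for illustration, not the project's real code; the point is only that callers depend on the facade, so swapping SQL for RPC later is a one-place change.

```java
// Hypothetical sketch of the shared "query facade" jar described above.
// Every business service replaces xxxMapper.xxxMethod() with a call to this class,
// so the SQL can later be swapped for an RPC call without touching the callers.

// Minimal stand-ins for the real types (names are assumptions, not from the project).
record RecordA(long pk1, String payload) {}

interface TableAMapper {                 // the original MyBatis-style mapper
    RecordA selectByPk1(long pk1);
}

public class ProjectDbClient {
    private final TableAMapper mapper;

    public ProjectDbClient(TableAMapper mapper) {
        this.mapper = mapper;
    }

    // 1.0.0-SNAPSHOT: delegates to the old SQL mapper.
    // 3.0.0-SNAPSHOT: this body becomes an RPC call to the middle-platform service;
    // callers only need to upgrade the jar version.
    public RecordA findByPk1(long pk1) {
        return mapper.selectByPk1(pk1);
    }
}
```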
2.3 Split analysis of the collected queries
Classify the queries collected into the jar in 2.2, judge each one against the actual business, and sort out historical problems and abandoned fields along the way.
Here are some points to think through:
1) Which queries cannot be split? For example, pagination (avoid these as much as possible; they can only be handled with redundant columns).
2) Which join queries can be split at the business level?
3) Which tables / fields can be merged?
4) Which fields need to be made redundant?
5) Which fields can simply be dropped?
6) Based on the concrete business scenarios and the overall SQL statistics, identify the key sharding keys; the remaining queries go through the search platform.
After this analysis we arrive at a general idea and plan for transforming the queries.
At the same time, in this project we also need to merge the two tables into one and discard redundant and invalid fields.
2.4 New table design
Based on the split analysis in 2.3, take the results of merging the old tables and the lists of redundant and discarded fields, and design the fields of the new table.
Once the new table structure has been drafted, it must be sent to every business party involved for review, and every party must sign off on the design. Hold an offline review meeting if necessary.
If any field is dropped during the new-table design, every business party must be notified and must confirm it.
Besides sorting out the fields, the new table also needs its indexes redesigned and optimized around the concrete queries.
2.5 First upgrade
After the new table design is complete, first transform the SQL in the jar package, replacing all the old fields with the fields of the new table.
This is version 2.0.0-SNAPSHOT.
Then have every service upgrade the jar version, to verify that the obsolete fields really are unused and that the new table structure fully covers the existing business scenarios.
In particular, because so many services are involved, split them into non-core and core and roll out in batches, to avoid a serious failure or large-scale rollback if a problem arises.
2.6 Best practices
2.6.1 try not to change the field names of the original table
When merging into the new table, tables A and B were at first simply concatenated, so many same-named fields were renamed.
Later, during field simplification, many duplicate fields were removed, but the renamed fields were never changed back.
As a result, during the later rollout the business side inevitably had to rework those field names.
So when designing the new table, do not rename the original tables' fields unless you absolutely have to!
2.6.2 the index of the new table needs to be carefully considered
The indexes of the new table cannot simply be copied from the old tables; they need to be redesigned based on the query split analysis.
In particular, after some fields are merged, it may be possible to consolidate some indexes or design higher-performance ones.
2.7 Summary of this chapter
At this point the first stage of the sharding project is done. The time it takes depends entirely on the specific business; for a business with heavy historical baggage, it may take several months or even half a year.
The quality of this stage matters a great deal: otherwise you may have to redesign the table structure and rebuild the full data later in the project.
Again, for services whose microservice boundaries are reasonable, sharding generally only requires attention to the storage architecture change, or business changes in a few individual applications; in general the "business refactoring" stage is not needed.
3. Phase 2: storage architecture design (core)
For any sharding project, the design of the storage architecture is the core!
3.1 overall architecture
From the queries sorted out in the first stage, we distilled the following patterns:
More than 80% of queries go through three dimensions: field pk1, field pk2, and field pk3 (pk1 and pk2 map one-to-one for historical reasons).
The remaining 20% are miscellaneous: fuzzy queries, queries on other fields, and so on.
We therefore designed an overall architecture that introduces database middleware, a data synchronization tool, a search engine (Alibaba Cloud OpenSearch / ES), and so on.
The following discussion revolves around this architecture.
3.1.1 MySQL sharded-table storage
The MySQL sharding dimensions are determined by the results of the query split analysis.
We found that pk1 / pk2 / pk3 cover more than 80% of the main queries, so those queries go straight to MySQL, routed by the sharding key.
In principle you should maintain at most one extra full copy of the data sharded by a second key, because each additional full copy wastes storage, adds data-synchronization overhead and instability, and makes scaling harder.
However, the pk1 and pk3 queries in this project have strict real-time requirements, so we maintain two full copies, sharded by pk1 and by pk3 respectively.
Since pk2 maps one-to-one to pk1 for historical reasons, it is enough to keep a small mapping table that stores only the two fields pk1 and pk2.
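As an illustration of how a pk2 query is served under this layout, here is a small sketch assuming a hash-mod routing rule and an in-memory view of the mapping table; in the real project routing is delegated to the database middleware, and all names and the shard count below are assumptions.

```java
import java.util.Map;

// Illustrative sketch (not the project's actual middleware): a pk2 query is first
// translated to pk1 via the small mapping table, then routed to the physical
// sub-table that is sharded by pk1.
public class ShardRouter {
    private static final int SHARD_COUNT = 64;      // assumed number of sub-tables
    private final Map<Long, Long> pk2ToPk1;          // mapping table (or its cache)

    public ShardRouter(Map<Long, Long> pk2ToPk1) {
        this.pk2ToPk1 = pk2ToPk1;
    }

    // Physical table suffix for a query that already carries the sharding key pk1.
    public String tableForPk1(long pk1) {
        return "table_a_" + Math.floorMod(Long.hashCode(pk1), SHARD_COUNT);
    }

    // A pk2 query needs one extra hop: pk2 -> pk1 -> sub-table.
    public String tableForPk2(long pk2) {
        Long pk1 = pk2ToPk1.get(pk2);
        if (pk1 == null) {
            throw new IllegalArgumentException("unknown pk2: " + pk2);
        }
        return tableForPk1(pk1);
    }
}
```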
3.1.2 search platform index storage
The search-platform index covers the remaining 20% of miscellaneous queries.
These queries usually are not keyed by the sharding key, or they carry fuzzy-match conditions.
The search platform generally does not store the full data (especially large varchar fields); it stores only the primary key and the fields the queries need to filter on. After getting the search results, fetch the required records from MySQL by primary key.
Of course, judging from later practice, some trade-offs are needed here:
1) Non-indexed fields that are not very large can be stored redundantly, similar to a covering index, to avoid the second SQL query.
2) If the table structure is simple and the fields are small, you can even consider storing the full records to improve query performance and reduce pressure on MySQL.
A special reminder: there is inevitably a synchronization delay between the search engine and the database. So for queries keyed by the sharding key, make sure they go directly to the database; that avoids any consistency pitfalls.
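A minimal sketch of this "search first, then fetch from MySQL by primary key" pattern follows. SearchIndex and PrimaryStore are placeholder interfaces standing in for the search-platform SDK and the sharded MySQL access layer; they are not real APIs from this project.

```java
import java.util.List;

// Sketch of the "index in the search platform, rows in MySQL" pattern.
// SearchIndex and PrimaryStore are placeholder interfaces, not a real SDK.
interface SearchIndex {
    List<Long> searchPrimaryKeys(String fuzzyCondition); // returns only primary keys
}

interface PrimaryStore {
    List<String> findByPrimaryKeys(List<Long> pks);      // sharded MySQL lookup by PK
}

public class FuzzyQueryService {
    private final SearchIndex index;
    private final PrimaryStore store;

    public FuzzyQueryService(SearchIndex index, PrimaryStore store) {
        this.index = index;
        this.store = store;
    }

    // Non-sharding-key / fuzzy queries hit the search index first, then fetch the
    // full records from MySQL by primary key (which is also a sharding key).
    public List<String> query(String fuzzyCondition) {
        List<Long> pks = index.searchPrimaryKeys(fuzzyCondition);
        return pks.isEmpty() ? List.of() : store.findByPrimaryKeys(pks);
    }
}
```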
3.1.3 data synchronization
In general, the new and old tables can be kept in step either by data synchronization or by double writing; each approach has its pros and cons, and you choose based on the situation.
The synchronization relationships in this project are shown in the overall storage architecture; there are four of them:
1) Old table to new full primary table
At first we used data synchronization, to minimize code intrusion and make extension easy, and also because with so many business callers we worried that some un-catalogued service might not be modified in time; synchronization avoids losing the data such services write.
During rollout, however, we found that when synchronization lag exists, freshly written records may not be readable, which seriously affected certain business scenarios (see 4.5.1 for the specific reasons).
So, to meet the application's real-time requirements, we switched to double writing on top of the data synchronization in version 3.0.0-SNAPSHOT of the jar.
2) New full primary table to new full secondary table
3) New full primary table to the mapping table
4) New full primary table to the search engine's data source
Items 2), 3), and 4) are all synchronized out of the new primary table. Since they have no strict real-time requirements, plain data synchronization is used for all of them for ease of extension, with no additional write paths.
3.2 capacity assessment
Before applying for MySQL storage and search-platform index resources, do a capacity assessment covering both storage size and performance metrics.
Online traffic can be estimated from the QPS in the monitoring system; storage can be roughly estimated as the sum of the current online tables' sizes.
In full synchronization, however, we found that the actual capacity needed is larger than estimated; see 3.4.5 for details.
The specific performance stress-testing process will not be covered in detail here.
3.3 data check
As the above shows, this project involves a large amount of business transformation; it is a heterogeneous migration.
Past sharding projects were mostly homogeneous, like-for-like splits with little complex logic, so data-migration verification was often neglected; for a fully like-for-like migration there are generally few problems.
But for a heterogeneous migration with this much transformation, verification is absolutely a top priority!
The results of data synchronization must therefore be verified to confirm that the business-logic transformation is correct and the synchronized data is consistent. This is extremely important.
In this project there were many business-logic optimizations and field changes, so we built a dedicated verification service to check both the full data and the incremental data.
In the process it surfaced many data-synchronization and business-logic inconsistencies ahead of time, which was the most important precondition for the smooth launch of the project!
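A minimal sketch of the verification idea follows, assuming a per-batch comparison keyed by primary key; the row types and the transform function are placeholders for the project's real field-mapping logic.

```java
import java.util.Map;
import java.util.Objects;

// Minimal sketch of the verification idea: for each primary key in a batch, apply
// the same business transformation to the old row and compare it with the new row.
// OldRow / NewRow and transform() are placeholders for the project's real logic.
record OldRow(long pk1, String fieldX) {}
record NewRow(long pk1, String fieldX) {}

public class DataChecker {

    // The same field-mapping / merge logic that the migration applies (assumed).
    static NewRow transform(OldRow old) {
        return new NewRow(old.pk1(), old.fieldX());
    }

    // Returns the number of mismatched primary keys in this batch
    // (missing new-table rows also count as mismatches).
    public static long checkBatch(Map<Long, OldRow> oldBatch, Map<Long, NewRow> newBatch) {
        return oldBatch.entrySet().stream()
                .filter(e -> !Objects.equals(transform(e.getValue()), newBatch.get(e.getKey())))
                .count();
    }
}
```

The same comparison runs over the full data once and then continuously over the incremental changes.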
3.4 Best practices
3.4.1 Traffic amplification caused by sharding
When doing the capacity assessment, pay attention to one important issue: the query-traffic amplification that sharding brings.
There are two causes of this amplification:
Secondary lookups through the mapping (index) table. For example, a query by pk2 must first look up pk1 via pk2, and then query by pk1 to return the results.
IN batch queries. When a SELECT ... IN (...) is keyed by the sharding key, the database middleware splits it across the corresponding physical sub-tables, turning one logical query into several physical ones. (The middleware does batch the ids that fall into the same sub-table into one query, so the fan-out factor varies.)
Therefore we need to:
At the business level, limit the number of values in IN queries as much as possible to avoid excessive traffic amplification.
Factor this amplification into the capacity estimate and leave appropriate headroom. Also, as mentioned later, roll out the business changes in batches so that capacity can be expanded in time.
Make a reasonable estimate of how many sub-tables to split into, 64, 128 or 256: the more sub-tables, the larger the theoretical fan-out, so do not over-split; size it to the scale of the business (a sketch of the IN-query fan-out follows this list).
For mapping-table queries, since the data has obvious hot/cold skew, we added a cache layer in front to reduce pressure on the database.
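Below is an illustrative sketch of the IN-query fan-out, assuming a hash-mod sharding rule and 64 sub-tables (both assumptions): the middleware groups the IN list by target sub-table, so one logical query becomes up to one physical query per shard touched.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Illustration of the IN-query fan-out: the middleware groups the IN list by target
// sub-table, so one logical query becomes up to one physical query per shard touched.
public class InQueryFanOut {
    private static final int SHARD_COUNT = 64;   // assumed number of sub-tables

    public static Map<Integer, List<Long>> groupByShard(List<Long> pk1List) {
        return pk1List.stream()
                .collect(Collectors.groupingBy(
                        pk1 -> Math.floorMod(Long.hashCode(pk1), SHARD_COUNT)));
    }

    public static void main(String[] args) {
        List<Long> in = List.of(1L, 2L, 65L, 130L, 131L);
        // Each map entry becomes one physical "SELECT ... WHERE pk1 IN (...)".
        System.out.println(groupByShard(in));    // e.g. {1=[1, 65], 2=[2, 130], 3=[131]}
    }
}
```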
3.4.2 Handling changes to the sharding key
In this project there is a business flow that changes the field pk3, but pk3 is a sharding key and the database middleware does not allow it to be modified in place. The update therefore has to be implemented in the middle-platform service as a delete followed by a re-insert.
Pay attention to the transactional atomicity of the delete and insert; a simpler alternative is to log the pair of operations and rely on alerts plus reconciliation.
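Here is a hedged sketch of the delete-then-insert update wrapped in a single JDBC transaction; the table and column names are invented. Note that when the old and new pk3 land on different physical shards, a local transaction through the middleware may not be truly atomic, which is exactly why the log-and-reconcile fallback mentioned above may be preferable.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.sql.DataSource;

// Sketch of the "delete then re-insert" update for a sharding-key change (pk3).
// Table/column names are placeholders; in reality the full row is re-inserted.
public class Pk3Updater {
    private final DataSource dataSource;

    public Pk3Updater(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public void changePk3(long pk1, long oldPk3, long newPk3) throws SQLException {
        try (Connection conn = dataSource.getConnection()) {
            conn.setAutoCommit(false);
            try (PreparedStatement del = conn.prepareStatement(
                         "DELETE FROM table_by_pk3 WHERE pk3 = ? AND pk1 = ?");
                 PreparedStatement ins = conn.prepareStatement(
                         "INSERT INTO table_by_pk3 (pk3, pk1) VALUES (?, ?)")) {
                del.setLong(1, oldPk3);
                del.setLong(2, pk1);
                del.executeUpdate();

                ins.setLong(1, newPk3);
                ins.setLong(2, pk1);
                ins.executeUpdate();

                conn.commit();            // both statements commit together
            } catch (SQLException e) {
                conn.rollback();          // log + alert so it can be reconciled later
                throw e;
            }
        }
    }
}
```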
3.4.3 consistency of data synchronization
As everyone knows, a key point in data synchronization is the ordering of the change messages. If the order in which changes are applied is not strictly the order in which they were produced, out-of-order messages can overwrite newer data with older data and end in inconsistency.
Our in-house synchronization tool uses Kafka as the message store, and Kafka only guarantees local ordering (i.e. order within a partition). By routing all messages for the same primary key to the same partition, consistency is generally assured (a minimal sketch of this keyed routing follows). However, when one source row fans out to many target rows, the ordering of every row change can no longer be guaranteed.
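A minimal sketch of that keyed routing, using the standard Kafka producer API; the topic name, serializers, and message format are assumptions, not the project's actual synchronization tool.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

// Sketch of the ordering guarantee used here: publish every change event with the
// row's primary key as the Kafka message key, so all changes of one row go to the
// same partition and are consumed in order.
public class ChangeEventPublisher {
    private final KafkaProducer<String, String> producer;

    public ChangeEventPublisher(String bootstrapServers) {
        Properties props = new Properties();
        props.put("bootstrap.servers", bootstrapServers);
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        this.producer = new KafkaProducer<>(props);
    }

    public void publishChange(long primaryKey, String changeJson) {
        // Same key -> same partition -> per-row ordering (Kafka only orders per partition).
        producer.send(new ProducerRecord<>("table_a_changes",
                Long.toString(primaryKey), changeJson));
    }
}
```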
When that happens, you need to query the source database back (a reverse lookup) for the latest data to restore consistency.
Reverse lookup is not a silver bullet, though; two issues need consideration:
1) If the change messages come from the read-write instance but the reverse lookup hits a read-only replica, replication lag can still yield inconsistent data. Make sure the source of the change messages and the instance used for the reverse lookup are the same.
2) Reverse lookups add extra load on the database, and their impact needs to be carefully evaluated.
3.4.4 Data latency
Latency needs attention in several places, each evaluated and measured against the actual business:
1) the second-level delay of the data-synchronization platform;
2) if both the message subscription and the reverse lookup hit a read-only replica, the database's master-slave replication delay on top of the above;
3) the second-level delay from the wide table to the search platform.
The only appropriate solution is the one that satisfies the business scenario.
3.4.5 Storage capacity after sharding
During data synchronization, rows are not inserted into each sub-table in strictly increasing order, which produces many storage "holes", so the total storage after synchronization ends up much larger than estimated.
We therefore applied for 50% more storage than the estimate when requesting the new database.
For the detailed reasons, see my article "Why does the total storage size become larger after MySQL sharding?".
3.5 Summary of this chapter
At this point the second phase of the sharding project is complete.
This stage is full of pitfalls.
On one hand, you have to design a highly available, easily scalable storage architecture. Over the course of the project we revised and debated it many times, including how many redundant MySQL copies to keep, the index design of the search platform, traffic amplification, sharding-key changes, and so on.
On the other hand, "data synchronization" is itself a complex operation: the latency, consistency, and one-to-many issues mentioned in this chapter's best practices all demand close attention.
That is why data verification matters so much: it is what ultimately confirms that the business logic and the synchronized data are correct!
Once this stage is done, you can formally move on to the business-switchover phase. Note that data verification still plays a key role in the next phase as well.
4. Phase 3: transformation and launch (proceed with caution)
With the first two phases complete, the business switchover begins. The main steps are:
1) The middle-platform service adopts single-read, double-write mode.
2) Data synchronization from the old table to the new table is turned on.
3) All services upgrade their projectdb dependency and the RPC path goes live; if there is a problem, roll back by downgrading the version. (After a successful launch, reads go to the new database only, while writes continue to both the new and old databases.)
4) Check monitoring to ensure that no service other than the middle-platform service accesses the old database and old tables.
5) Stop data synchronization.
6) Drop the old tables.
4.1 query modification
How do we verify that the design from the first two phases is sound? Whether the query rewrites can be covered completely is the precondition.
Once the new table is designed, the old queries can be rewritten with the new table as the standard.
In this project, that means rewriting the old SQL in the new middle-platform service.
1) Rewriting read queries
The rewrite may involve:
A) changing queries that join pk1 and pk2 to the new table of the corresponding sharding key, according to the query conditions;
B) handling discarded fields in some of the SQL;
C) moving non-sharding-key queries to the search platform, taking care to preserve semantics;
D) writing unit tests, mainly at the DAO layer, to avoid low-level mistakes.
Only if the new table structure and storage architecture can absorb every query rewrite can we conclude, for now, that the earlier design holds up.
Of course, this presupposes that all relevant queries were collected and none were missed.
2) Rewriting write queries
Besides the field changes, the bigger job is converting writes to double-write against the old and new tables.
The specific business write logic can get involved here, and in this project it was particularly complex; you must communicate thoroughly with the business side during the rewrite to make sure the write logic is correct.
Add a configuration switch to every double-write path so it can be toggled: if writing to the new database misbehaves, it can be shut off quickly (a sketch follows at the end of this section).
Also note that data synchronization from the old database to the new one is not turned off while double-writing.
Why? Because of the particularities of this project: with dozens of services involved, we had to launch in batches to contain risk, and that creates an awkward intermediate state in which some services run the old logic and some run the new. The data must stay correct through that intermediate state; see the analysis in 4.5.1.
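Here is the promised sketch of a double write guarded by a switch; the writer interfaces and the in-process flag are placeholders (in practice the flag would come from a configuration center).

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch of double-write with a kill switch: the old table is always written during
// the transition, and the new-table write can be turned off at runtime if it misbehaves.
interface OldTableWriter { void write(String record); }
interface NewTableWriter { void write(String record); }

public class DoubleWriteService {
    private final OldTableWriter oldWriter;
    private final NewTableWriter newWriter;
    // Stand-in for a config-center switch.
    private final AtomicBoolean writeNewTable = new AtomicBoolean(true);

    public DoubleWriteService(OldTableWriter oldWriter, NewTableWriter newWriter) {
        this.oldWriter = oldWriter;
        this.newWriter = newWriter;
    }

    public void disableNewTableWrite() { writeNewTable.set(false); }

    public void save(String record) {
        oldWriter.write(record);                 // always write the old table for now
        if (writeNewTable.get()) {
            try {
                newWriter.write(record);         // best-effort write to the new table
            } catch (RuntimeException e) {
                // Log and alert; reconcile later via data sync / verification.
                System.err.println("new-table write failed: " + e.getMessage());
            }
        }
    }
}
```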
4.2 Service-oriented transformation
Why create a new service to host the rewritten queries?
Partly to make upgrades and rollbacks easy to switch, and partly to encapsulate the queries and expose them as a platform capability.
Put the rewritten queries into the service, then replace all the original queries in the jar package with client calls to that service.
At the same time, bump the jar version to 3.0.0-SNAPSHOT.
4.3 Launching services in batches
To reduce risk, arrange the rollout in batches, from non-core services to core services.
Note that because the writing services are usually core services, they come last in the batches; a non-core reading service may go live first, creating an intermediate state of reading the new tables while writes still go to the old tables.
1) All related services upgrade projectdb to 3.0.0-SNAPSHOT on a refactoring branch and deploy to the intranet environment.
2) Business services depend on the middle-platform service and need to subscribe to it.
3) Open a refactoring branch (do not merge it with the normal iteration branch) and deploy it to the intranet; plan for more than two weeks of intranet testing.
The point of a separate refactoring branch is to allow two weeks of intranet testing without blocking normal business iteration. Each week's business branch can be merged into the refactoring branch for intranet deployment, while production deployments to master continue from the business branch.
Of course, for consistency between online and offline code you could also test the refactoring branch and the business branch together, but that puts more pressure on development and testing.
4) During the batched rollout, resolve any dependency conflicts you hit and update the shared document promptly.
5) Before launching each service, require business development or QA to explicitly assess the specific APIs and risk points and do proper regression testing.
And once again: after the launch, do not forget the offline data-analysis jobs! Do not forget the offline data-analysis jobs! Do not forget the offline data-analysis jobs!
4.4 Taking the old tables offline
1) Check monitoring to ensure that no service other than the middle-platform service accesses the old database and old tables.
2) Check the SQL audit on the database to make sure no other service still reads the old table data.
3) Stop data synchronization.
4) Drop the old tables.
4.5 Best practices
4.5.1 A write may not be immediately readable
During the batched rollout we hit cases where a freshly written record could not be read back right away. Because so many businesses were involved, we launched in batches to reduce risk, so some applications were upgraded and some were not: the un-upgraded services still wrote to the old tables, while upgraded applications read from the new tables. Whenever there was synchronization lag, newly written records could not be read, which seriously affected certain business scenarios.
There were two main sources of delay:
1) The writing services had not been upgraded yet and double-write had not started, so writes still went to the old tables. That creates the "read new, write old" intermediate state, with synchronization lag between old and new tables.
2) To avoid load on the primary database, the new-table pipeline picks up changes from the old table and then reverse-looks-up the old table's read-only replica before synchronizing, and master-slave replication itself has some delay.
There are generally two solutions:
1) Change the data synchronization to double-write logic.
2) Compensate on the read path: if a record is not found in the new table, query the old table again (a sketch follows).
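A minimal sketch of that read-side compensation, with placeholder repository interfaces:

```java
import java.util.Optional;

// Sketch of the read-side compensation in solution 2): read the new table first and
// fall back to the old table when the record has not been synchronized yet.
interface NewTableRepo { Optional<String> findByPk1(long pk1); }
interface OldTableRepo { Optional<String> findByPk1(long pk1); }

public class CompensatingReader {
    private final NewTableRepo newRepo;
    private final OldTableRepo oldRepo;

    public CompensatingReader(NewTableRepo newRepo, OldTableRepo oldRepo) {
        this.newRepo = newRepo;
        this.oldRepo = oldRepo;
    }

    public Optional<String> findByPk1(long pk1) {
        Optional<String> fromNew = newRepo.findByPk1(pk1);
        // During the transition, a miss in the new table may just mean sync lag.
        return fromNew.isPresent() ? fromNew : oldRepo.findByPk1(pk1);
    }
}
```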
4.5.2 Replacing auto-increment primary keys with middleware-generated unique IDs (key point!)
After sharding, continuing to use each sub-table's own auto-increment primary key would cause global primary-key conflicts, so a distributed unique ID must replace the auto-increment key. There are many well-known algorithms; this project uses the database-backed sequence (segment) approach.
This database-backed distributed ID generator depends on MySQL. The basic principle is to keep a value in MySQL; each time an application instance needs IDs, it adds a fixed step, say 2000, to the current value, writes the new value back, and takes the resulting range for itself. Each instance can repeat this to obtain its own unique, non-overlapping ID interval (a minimal sketch follows).
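A minimal sketch of such a segment allocator, assuming a hypothetical id_sequence table with columns name and max_id; this illustrates the general technique, not the project's actual generator.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

// Sketch of the segment-style ID generator described above: a row in MySQL holds the
// high-water mark; each application instance bumps it by a step (e.g. 2000) and then
// serves IDs from the reserved range locally. Table/column names are assumed; the
// sequence's starting value would be initialized above the old table's maximum id.
public class SegmentIdGenerator {
    private static final int STEP = 2000;

    private final DataSource dataSource;
    private long current;       // next id to hand out
    private long max;           // exclusive upper bound of the reserved segment

    public SegmentIdGenerator(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public synchronized long nextId() throws SQLException {
        if (current >= max) {
            reserveSegment();
        }
        return current++;
    }

    private void reserveSegment() throws SQLException {
        try (Connection conn = dataSource.getConnection()) {
            conn.setAutoCommit(false);
            try (PreparedStatement upd = conn.prepareStatement(
                         "UPDATE id_sequence SET max_id = max_id + ? WHERE name = 'table_a'");
                 PreparedStatement sel = conn.prepareStatement(
                         "SELECT max_id FROM id_sequence WHERE name = 'table_a'")) {
                upd.setInt(1, STEP);
                upd.executeUpdate();
                try (ResultSet rs = sel.executeQuery()) {
                    rs.next();
                    max = rs.getLong(1);         // new upper bound
                    current = max - STEP;        // start of the freshly reserved range
                }
                conn.commit();
            } catch (SQLException e) {
                conn.rollback();
                throw e;
            }
        }
    }
}
```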
But is a globally unique ID alone enough? Clearly not, because IDs can still conflict between the old and new tables.
Because so many services are involved, we had to go live in batches to reduce risk, so for a while some services wrote only to the old table while others double-wrote.
In that state, the old table's ID policy is auto_increment. If data flowed only one way (old table to new table), it would be enough to reserve an interval for the old table's IDs and start the sequence at a larger value to avoid conflicts.
In this project, however, new-table data is also double-written back into the old table. With the scheme above, the larger sequence IDs written into the old table would push the old table's auto_increment up to those values, and the auto-increment IDs generated by the services still writing only to the old table would inevitably conflict.
So we swapped the two ranges: the old database's auto_increment starts from the larger value, and the new table's IDs (the sequence range) start just above the old table's current maximum record and stay below the old table's new auto_increment starting value. This neatly avoids ID conflicts.
1) Before the switch:
Set the sequence's starting value to the old table's current auto-increment value, then raise the old table's auto_increment starting point, leaving the interval in between for the sequence to use. This prevents ID conflicts when data written by not-yet-upgraded services is synchronized into the new database.
2) After the switch:
Nothing needs to change; data synchronization can simply be disconnected.
3) Advantages:
Only one set of code is needed.
The switchover is done with a switch, with no further upgrade required.
Even if abnormal data inflates the old table's auto_increment, it causes no problem.
4) Disadvantages:
If the write to the old table fails while the write to the new table succeeds, logs are needed to reconcile.
4.6 Summary of this chapter
Once the old tables are offline, the sharding transformation as a whole is complete.
Throughout this process you need to stay in awe of the online business, think carefully about every problem that could occur, and prepare a fast rollback plan to avoid causing a major failure. (The projectdb jar version iterations mentioned across the three stages, from 1.0.0-SNAPSHOT to 3.0.0-SNAPSHOT, each carry the changes of their stage; rolling back by jar version was a huge help during the batched rollouts.)
5. Stability guarantee
This chapter re-emphasizes the means of guaranteeing stability. Stability was one of the key goals of this project and runs through the entire project cycle; it has come up in nearly every section above. Each step deserves full attention: design and review the plan carefully and know exactly what you are doing, rather than leaving it to luck:
1) The new table design must be fully communicated with the business side and formally reviewed.
2) "Data synchronization" must be accompanied by data verification to ensure correctness. Many things can corrupt the data, including latency and consistency issues; correct data is the main precondition for going live.
3) Every stage of the change must have a fast rollback plan.
4) Launch in batches, starting with non-core businesses, to keep any failure contained.
5) Monitoring and alerting must be fully configured so that problems are noticed and handled quickly. Do not ignore alarms; this matters a lot. Several data issues in this project were found and fixed in time precisely because of alerts.
6) Unit tests, business function tests, and so on.
6. Cross-team collaboration in project management
With regard to "cross-team collaboration", this article is specifically mentioned as a chapter.
Because in such a cross-team large-scale project transformation process, scientific teamwork is an indispensable factor to ensure that the overall project is completed on time and high quality.
Next, I would like to share some experiences and experiences.
6.1 Documents first
The thing teamwork must avoid most is "claims with nothing in writing".
Whether it is the division of labor, the schedule, or anything else that requires multiple people to cooperate, there must be a written record that can be used to track progress and keep the process under control.
6.2 Business communication and confirmation
Every table-structure change must be communicated with the relevant business parties, and all possible historical logic combed through together.
Every field change agreed in discussion must be confirmed by the owner of each service.
6.3 Clear ownership
For a multi-team, multi-person project, each team should designate a single point of contact; the project manager communicates with that single contact to get a clear picture of the team's progress and completion quality.