
How to implement a query planner in SQLite


This article shows you how the query planner works in SQLite. It aims to be concise and easy to understand, and we hope you get something out of the detailed introduction that follows.

1.0 Introduction

The task of the query planner is to find the best algorithm, or "query plan", for carrying out an SQL statement. In SQLite 3.8.0, the query planner component was rewritten so that it runs faster and produces better query plans. That rewrite is called the "Next-Generation Query Planner", or "NGQP".

This article recaps the importance of query planning, describes some of the problems inherent in it, and outlines how the NGQP solves those problems.

We know that the NGQP is almost always better than the legacy query planner. However, some applications may have unknowingly come to depend on undefined or suboptimal behaviors of the legacy planner, and for them upgrading to the NGQP could cause a performance regression. This risk was considered in the design of the NGQP, and a series of checks is provided to reduce the risk and to resolve any problems that do arise.

This document focuses on the NGQP. For a more general overview of the SQLite query planner, covering its entire history, see "The SQLite Query Optimizer Overview".

2.0 Background

For queries against a single table with a few simple indices, there is usually an obvious best choice of algorithm. But for larger and more complex queries, such as multi-way joins with many indices and subqueries, there can be hundreds, thousands, or millions of reasonable algorithms for computing the result. The job of the query planner is to choose the single "best" query plan from this multitude of possibilities.

A good query planner is part of what makes the SQL database engine so amazingly useful and powerful. (This is true of all SQL database engines, not just SQLite.) The query planner frees the programmer from the drudgery of selecting a particular query plan, allowing the programmer to focus more mental energy on higher-level application problems and on providing more value to the end user. For simple queries, where the choice of plan is obvious, this is convenient but not hugely important. But as applications, schemas, and queries grow more complex, a clever query planner can greatly speed up and simplify the work of application development. There is amazing power in being able to tell the database engine what content you want, and then let the database engine figure out the best way to retrieve that content.

Writing a good query planner is more art than science. The query planner must work with incomplete information: it cannot determine how long any particular plan will take without actually running it. So when comparing two or more plans to figure out which is "best", the query planner makes assumptions and guesses, and those assumptions and guesses will sometimes be wrong. A good query planner finds the correct solution often enough that application programmers rarely need to get involved.

2.1 Query planning in SQLite

SQLite computes joins using nested loops, one loop for each table in the join. (Additional loops might be inserted for IN and OR operators in the WHERE clause. SQLite considers those too, but for simplicity we will ignore them in this article.) In each loop, one or more indices might be used to speed up the search, or a loop might be a "full table scan" that reads every row of the table. Query planning therefore decomposes into two subtasks:

Picking the nesting order of the various loops.

Choosing a good index for each loop.

Picking the nesting order is generally the more challenging problem.

Once the nesting order of the join is established, the choice of index for each loop is usually obvious.
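To make those two subtasks concrete, here is a minimal sketch using Python's built-in sqlite3 module and a hypothetical two-table schema (not any schema from this article). EXPLAIN QUERY PLAN reports one row per loop, outermost first, and names any index used:

import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE artist(id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE track(id INTEGER PRIMARY KEY, artist_id INTEGER, title TEXT);
    CREATE INDEX track_i1 ON track(artist_id);
""")
for row in db.execute("""
    EXPLAIN QUERY PLAN
    SELECT artist.name, track.title
      FROM artist JOIN track ON track.artist_id=artist.id
     WHERE artist.name='Santana'
"""):
    print(row)  # one tuple per loop, outermost first, naming any index used
db.close()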

2.2 SQLite query planner stability guarantee

For any given SQL statement, SQLite will usually choose the same query plan as long as:

the schema of the database has not changed significantly, for example by adding or removing indices,

the ANALYZE command has not been rerun,

SQLite was not compiled with SQLITE_ENABLE_STAT3 or SQLITE_ENABLE_STAT4, and

the same version of SQLite is used.

The stability guarantee means that if all of your queries run efficiently during testing, and your application does not change the schema, then SQLite will not suddenly decide to start using a different query plan, possibly causing performance problems, after your application is released to users. If your application works in the lab, it will continue to work the same way after deployment.

Enterprise-class client/server SQL databases usually cannot make such a guarantee. In a client/server SQL database engine, the server keeps track of statistics on the sizes of tables and on the quality of indices, and the query planner uses those statistics to select optimal plans. As content is added, deleted, or changed, the statistics evolve, and they may cause the query planner to start using a different plan for some particular query. Usually the new plan is better for the data as it has evolved. But sometimes the new query plan causes a performance regression. With a client/server database engine, there is typically a database administrator (DBA) on hand to deal with these rare problems. But no DBA is available to fix problems in an embedded database like SQLite, so SQLite must be careful to ensure that query plans do not change unexpectedly after deployment.

SQLite's stability guarantee applies to both the legacy query planner and the NGQP.

It is important to note that changing versions of SQLite may cause changes in query plans. The same version of SQLite will usually choose the same query plan, but if you relink your application against a different version of SQLite, query plans may change. In rare cases, a SQLite version change may lead to a performance regression. This is one reason to consider statically linking your application against SQLite, rather than using a system-wide SQLite shared library, which may change without your knowledge or control.

3.0 A thorny situation

"TPC-H Q8" is a test query from Transaction Processing Performance Council. Query planner did not choose a good plan for TPC-H Q8 in 3.7.17 and earlier versions of SQLite. And it is determined that no matter how much you adjust the traditional query planner, you can't fix this problem. In order to find a good solution for TPC-H Q8 query and continuously improve the quality of SQLite query planner, it is necessary to redesign the query planner. This section will explain why redesign is necessary, how NGQP is different and how to solve the TPC-H Q8 problem.

3.1 Query details

TPC-H Q8 is an eight-way join. As discussed above, the main task of the query planner is to determine the best nesting order for the eight loops, so as to minimize the work required to complete the join. The following diagram is a simplified model of this problem for the TPC-H Q8 case:

In this diagram, each of the eight tables in the FROM clause of the query is represented as a large circle, labeled with its FROM-clause abbreviation: N2, S, L, P, O, C, N1, and R. The arcs in the graph represent the estimated cost of computing the table at the head of the arc, assuming the table at the origin of the arc is in an outer loop. For example, the cost of running the S loop inside of the L loop is 2.30, and the cost of running the S loop outside of the L loop is 9.17.

The "resource consumption" here is calculated by logarithmic operation. Because loops are nested, the total resource consumption is multiplied, not added. It is generally believed that the map belt is the weight to be added, but the graph here shows the logarithmic value of various resource consumption. The figure above shows that S inside L consumes about 6.87 less, and queries with S loops inside L loops run about 963 times faster than queries with S loops outside L loops.

The arrows that originate from the small circle labeled "*" represent the cost of running each loop with no dependencies. The outer loop must use this *-cost. Inner loops may use either the *-cost or a cost that assumes one of the other terms is in an enclosing loop, whichever gives the lowest total. You can think of the *-costs as shorthand for multiple arcs, one from every other node in the graph to the current node. The graph is therefore "complete": there are arcs (some explicit, some implied) between every pair of nodes, in both directions.

The problem of finding the best query plan is equivalent to finding the minimum-cost path through the graph that visits each node exactly once.

(Note: the cost estimates in the TPC-H Q8 graph were computed by the query planner of SQLite 3.7.16 using a natural logarithm.)

3.2 Complexity

The query planning problem presented above is a simplification. The costs are estimates: we cannot know the true cost of running a loop until we actually run it. SQLite estimates the cost of running a loop based on the constraints in the WHERE clause and the indices that are available. These estimates are usually pretty good, but they can sometimes be far from reality. Using the ANALYZE command to collect additional statistics about the database can sometimes enable SQLite to make better cost estimates.

Also, the costs are made up of multiple numbers, not a single number as shown in the graph. SQLite computes several different estimated costs for each loop, applying at different times. For example, there is a "setup" cost that is incurred just once when the query starts. The setup cost is the cost of building an automatic index for a table that does not already have one. Then there is the cost of running each step of the loop. Finally, there is an estimate of the number of rows generated by the loop, which is the information needed to estimate the costs of inner loops. Sorting costs may also come into play if the query has an ORDER BY clause.

In real queries, a dependency need not be on a single other loop, so the dependency structure cannot always be represented as a graph. For example, one of the WHERE clause constraints might be S.a=L.b+P.c, implying that the S loop must be an inner loop of both L and P. Such a dependency cannot be drawn as a graph arc, because there is no way to draw an arc that originates at two or more nodes at once.

If the query contains an ORDER BY or GROUP BY clause, or if it uses the DISTINCT keyword, then it is advantageous to choose a path through the graph that causes rows to emerge in sorted order naturally, so that no separate sorting step is needed. Automatic elimination of ORDER BY clauses can make a large performance difference, so this is another factor that must be considered in a complete implementation of a query planner.

In the TPC-H Q8 query, all of the setup costs are negligible, all dependencies are between individual nodes, and there is no ORDER BY, GROUP BY, or DISTINCT clause. So for TPC-H Q8, the graph above is a reasonable representation of what needs to be computed. The general case involves many extra complications, which, for clarity, are ignored in the remainder of this article.

3.3 Finding the best query plan

Prior to version 3.8.0, SQLite used the "Nearest Neighbor" or "NN" heuristic to search for the best query plan. The NN heuristic makes a single traversal of the graph, always choosing the lowest-cost arc as the next step. The NN heuristic works surprisingly well in most cases. And NN is fast, so SQLite can quickly find good plans even for large 64-way joins. In contrast, other SQL database engines that do more extensive searching bog down when the number of tables in a join goes above 10 or 15.

Unfortunately, the query plan computed by NN for TPC-H Q8 is not optimal. The plan computed by NN is R-N1-N2-S-C-O-L-P, with a cost of 36.92. That means the R table runs in the outermost loop, N1 is in the next inner loop, N2 is in the third loop, and so on down to P, which is in the innermost loop. The shortest path through the graph (found via exhaustive search) is P-L-O-C-N1-R-S-N2, with a cost of 27.38. The difference may not seem like much, but remember that the costs are logarithmic, so the shortest path is nearly 750 times faster than the path found using the NN heuristic.

One solution to this problem is to change SQLite to do an exhaustive search for the best path. But the time required for an exhaustive search is proportional to K! (where K is the number of tables in the join), so with more than about 10 joins, sqlite3_prepare() takes far too long to run.

3.4 The "N Nearest Neighbors" or "N3" heuristic

The next-generation query planner uses a new heuristic to find the best path through the graph: the "N Nearest Neighbors" heuristic (hereafter "N3"). With N3, instead of choosing a single nearest neighbor at each step, the algorithm keeps track of the N best paths at each step, where N is a small integer.

For the TPC-H Q8 graph, the first step finds the four shortest paths that visit any single node:

R (cost: 3.56)

N1 (cost: 5.52)

N2 (cost: 5.52)

P (cost: 7.71)

The second step finds the four shortest paths that visit two nodes and begin with one of the four paths found in the previous step. When two or more paths are equivalent (they visit the same set of nodes, though possibly in a different order), only the first-found, lowest-cost path is retained. We find the following paths:

R-N1 (cost: 7.03)

R-N2 (cost: 9.08)

N2-N1 (cost: 11.04)

R-P (cost: 11.27)

The third step starts with the four shortest two-node paths and finds the four shortest three-node paths:

R-N1-N2 (cost: 12.55)

R-N1-C (cost: 13.43)

R-N1-P (cost: 14.74)

R-N2-S (cost: 15.08)

And so on. The TPC-H Q8 query has eight nodes, so this process repeats a total of eight times. In the general case of a K-way join, the storage requirement is O(N) and the computation time is O(K*N), which is significantly less than the O(2^K) of an exact solution.

But what value should be chosen for N? One might try N=K, which makes the algorithm O(K^2); that is actually still very fast, since the maximum value of K is 64 and K rarely exceeds 10. But even that is not enough for the TPC-H Q8 problem. With N=K on TPC-H Q8, the N3 algorithm finds the plan R-N1-C-O-L-S-N2-P, with a cost of 29.78. That is a big improvement over NN, but it is still not optimal. N3 finds the optimal plan for TPC-H Q8 when N is 10 or greater.

The initial implementation of the next-generation query planner chooses N=1 for simple queries, N=5 for two-way joins, and N=10 for joins of three or more tables. Subsequent releases may change the rules for selecting N.
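To illustrate the idea, here is a small Python sketch of an N3-style search. This is not SQLite's actual implementation; it assumes the additive logarithmic costs described above are supplied by a caller-provided cost(table, visited) function. With n=1 it degenerates to the NN heuristic of section 3.3:

def n3_plan(tables, cost, n=10):
    # Start with the n cheapest single-node paths.
    best = sorted(((cost(t, ()), (t,)) for t in tables))[:n]
    for _ in range(len(tables) - 1):
        candidates = {}
        for total, path in best:
            for t in tables:
                if t not in path:
                    new = (total + cost(t, path), path + (t,))
                    key = frozenset(new[1])  # same node set: keep cheapest order
                    if key not in candidates or new < candidates[key]:
                        candidates[key] = new
        best = sorted(candidates.values())[:n]  # track only the n best paths
    return best[0]  # (estimated total cost, loop order outermost-first)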

4.0 Risks of upgrading to the next-generation query planner

For most applications, upgrading from the legacy query planner to the next-generation query planner requires little thought or effort. Simply replace the older SQLite version with a newer one, recompile, and the application will run faster. There are no API changes and no modifications to complicated build procedures.

However, as with any query planner change, upgrading to the next-generation query planner does carry a small risk of performance regressions. The problem is not that the NGQP is incorrect, or buggy, or inferior to the legacy query planner. Given reliable information about the selectivity of indices, the NGQP should always pick a plan as good as or better than before. The problem is that some applications may be using low-quality, low-selectivity indices without having run ANALYZE. The legacy query planner looked at many fewer possible implementations for each query, so it may have stumbled onto a good plan by luck. The NGQP, on the other hand, considers many more plan possibilities, and it may choose a plan that works better in theory, assuming well-behaved indices, but that regresses in practice because of the actual shape of the data.

Key points:

As long as the next-generation query planner has access to accurate ANALYZE data in the SQLITE_STAT1 table, it will always find a query plan as good as or better than the one found by the legacy query planner.

As long as the schema does not contain indices that have more than about 10 or 20 rows with the same value in the leftmost column, the next-generation query planner will always find a good query plan.

Not all applications meet these conditions. Fortunately, the next-generation query planner will usually still find good query plans even when the conditions are not met. However, performance regressions are possible (though rare).

4.1 Case study: upgrading Fossil to use the next-generation query planner

The Fossil DVCS is the version control system that tracks all of the SQLite source code. A Fossil repository is a SQLite database file. (Readers are invited to ponder this recursion as an independent exercise.) Fossil is both the version control system for SQLite and a test platform for SQLite. Whenever SQLite is enhanced, Fossil is one of the first applications to test and evaluate those enhancements. So Fossil was an early adopter of the next-generation query planner.

Unfortunately, the next-generation query planner caused a performance regression in Fossil.

Fossil has many reports, one of which is a timeline of changes to a single branch, showing all merges into and out of that branch. See http://www.sqlite.org/src/timeline?nd&n=200&r=trunk for a typical example of such a timeline report. Generating one normally takes just a few milliseconds. But after upgrading to the next-generation query planner, we found that it took close to 10 seconds to generate this report for the trunk branch of the repository.

The core query used to generate the branch timeline is shown below. (Readers are not expected to understand the details of this query; commentary follows.)

SELECT
  blob.rid AS blobRid,
  uuid AS uuid,
  datetime(event.mtime,'localtime') AS timestamp,
  coalesce(ecomment, comment) AS comment,
  coalesce(euser, user) AS user,
  blob.rid IN leaf AS leaf,
  bgcolor AS bgColor,
  event.type AS eventType,
  (SELECT group_concat(substr(tagname,5),',')
     FROM tag, tagxref
    WHERE tagname GLOB 'sym-*'
      AND tag.tagid=tagxref.tagid
      AND tagxref.rid=blob.rid
      AND tagxref.tagtype>0) AS tags,
  tagid AS tagid,
  brief AS brief,
  event.mtime AS mtime
FROM event CROSS JOIN blob
WHERE blob.rid=event.objid
  AND (EXISTS(SELECT 1 FROM tagxref
               WHERE tagid=11 AND tagtype>0 AND rid=blob.rid)
       OR EXISTS(SELECT 1 FROM plink JOIN tagxref ON rid=cid
                  WHERE tagid=11 AND tagtype>0 AND pid=blob.rid)
       OR EXISTS(SELECT 1 FROM plink JOIN tagxref ON rid=pid
                  WHERE tagid=11 AND tagtype>0 AND cid=blob.rid))
ORDER BY event.mtime DESC
LIMIT 200

This query is not especially complicated, but even so it replaces hundreds, perhaps thousands, of lines of procedural code. The gist of the query: scan down the EVENT table looking for the most recent 200 commits that satisfy any one of three conditions:

The commit has the "trunk" tag.

The commit has a child commit that has the "trunk" tag.

The commit has a parent commit that has the "trunk" tag.

The first condition displays all of the commits on the trunk branch, and the second and third conditions include commits that merge into or branch off of trunk. The three conditions are implemented by the three OR-connected EXISTS statements in the WHERE clause of the query. The slowdown that occurred with the next-generation query planner was caused by the second and third conditions. The problem is the same in both, so we will look only at the second. The subquery of the second condition can be rewritten (with minor and immaterial simplifications) as follows:

SELECT 1
  FROM plink JOIN tagxref ON tagxref.rid=plink.cid
 WHERE tagxref.tagid=$trunk
   AND plink.pid=$ckid

The PLINK table holds the parent-child relationships between commits. The TAGXREF table maps tags onto commits. For reference, the relevant portions of the schemas of these two tables are shown here:

CREATE TABLE plink(
  pid INTEGER REFERENCES blob,
  cid INTEGER REFERENCES blob
);
CREATE UNIQUE INDEX plink_i1 ON plink(pid,cid);

CREATE TABLE tagxref(
  tagid INTEGER REFERENCES tag,
  mtime TIMESTAMP,
  rid INTEGER REFERENCES blob,
  UNIQUE(rid, tagid)
);
CREATE INDEX tagxref_i1 ON tagxref(tagid, mtime);

There are only two reasonable ways to implement this query. (There are many other possible algorithms, of course, but none of the others are contenders for being the "best" algorithm.)

Algorithm 1: Find all child commits of commit $ckid, and test each one to see whether it has the $trunk tag.

Algorithm 2: Find all commits that have the $trunk tag, and test each one to see whether it is a child of commit $ckid.

Intuitively, we humans recognize algorithm 1 as the best choice. Each commit is likely to have few children (one child is the most common case), and each of those children can be tested for the $trunk tag in logarithmic time. Indeed, algorithm 1 is the faster choice in practice. But the next-generation query planner has no intuition; it must use hard math, and algorithm 2 is slightly better mathematically. This is because, in the absence of other information, the NGQP must assume that the indices PLINK_I1 and TAGXREF_I1 are of equal quality and equal selectivity. Algorithm 2 uses one column of the TAGXREF_I1 index and both columns of the PLINK_I1 index, while algorithm 1 uses only the first column of each index. Precisely because algorithm 2 uses more index material, the NGQP is correct, by its own standards, to judge it the better of the two algorithms. The scores are close, and algorithm 2 just barely squeaks ahead of algorithm 1. But by that standard, algorithm 2 really is the correct choice.

Unfortunately, algorithm 2 is slower than algorithm 1 in this application.

The problem is that the indices are not of equal quality. A commit is likely to have just one child, so the first column of the PLINK_I1 index will usually narrow the search down to a single row. But thousands of commits carry the "trunk" tag, so the first column of TAGXREF_I1 does little to narrow the search.

The next-generation query planner has no way of knowing that TAGXREF_I1 is almost useless in this query, unless ANALYZE has been run on the database. The ANALYZE command gathers quality statistics on each index and stores them in the SQLITE_STAT1 table. With access to those statistics, the NGQP easily chooses algorithm 1 as the best algorithm.
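As a sketch of what ANALYZE provides, the following Python fragment builds a toy database shaped like the Fossil case (hypothetical data: each commit has a single child, while thousands of rows share tag 11, the "trunk" tag) and prints the resulting SQLITE_STAT1 rows:

import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE plink(pid INTEGER, cid INTEGER);
    CREATE UNIQUE INDEX plink_i1 ON plink(pid,cid);
    CREATE TABLE tagxref(tagid INTEGER, mtime TIMESTAMP, rid INTEGER);
    CREATE INDEX tagxref_i1 ON tagxref(tagid,mtime);
""")
# Each commit has a single child; thousands of rows share tagid 11 ("trunk").
db.executemany("INSERT INTO plink VALUES(?,?)", ((i, i + 1) for i in range(5000)))
db.executemany("INSERT INTO tagxref VALUES(11,?,?)", ((i, i) for i in range(5000)))
db.execute("ANALYZE")
for row in db.execute("SELECT * FROM sqlite_stat1"):
    print(row)  # e.g. ('tagxref', 'tagxref_i1', '5000 5000 1'):
                # the first column alone barely narrows the search
db.close()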

Why did the legacy query planner choose algorithm 1 rather than algorithm 2? Simple: the NN algorithm never even considered algorithm 2. Graphs of this planning problem look like this:

In the case of "not running ANALYZE" as shown on the left, the NN algorithm chooses the loop P9PLINK) as the outer loop, because 4.9 is smaller than 5.2. the result is to choose the Pmurt path, that is, algorithm 1. The NN algorithm only looks for the best choice path at each step, so it completely ignores the fact that 5.2-4.4 is a slightly better plan than 4.9-4.8. However, the N3 algorithm tracks five best paths against two connections, so it finally chooses the TmurP path because it consumes less resources overall. The path Tmurp is algorithm 2.

Note: if ANALYZE has been run, the cost estimates are closer to reality, and both NN and N3 choose algorithm 1.

(Note: the cost estimates in the two most recent graphs were computed by the next-generation query planner using a base-2 logarithm, under slightly different cost assumptions than the legacy query planner. Hence, these cost estimates are not directly comparable to the ones in the TPC-H Q8 graph.)

4.2 Fixing the problem

Running ANALYZE on the repository database fixed the performance problem immediately. However, we want Fossil to be robust and always run fast, whether or not its repository has been analyzed. For this reason, the query was modified to use the CROSS JOIN operator instead of the plain JOIN operator. SQLite will not reorder the tables of a CROSS JOIN. This is a long-standing feature of SQLite, designed specifically to allow knowledgeable programmers to force SQLite into a particular nested loop order. Once the join was changed to CROSS JOIN (the addition of a single keyword), the next-generation query planner was forced to choose the slightly faster algorithm 1, regardless of whether statistics had been gathered with ANALYZE.
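Here is a sketch of the effect, using stripped-down versions of the two tables (no data is needed to see the plan). Because SQLite treats CROSS JOIN specially, the table on the left always becomes the outer loop, which EXPLAIN QUERY PLAN makes visible:

import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE plink(pid INTEGER, cid INTEGER);
    CREATE UNIQUE INDEX plink_i1 ON plink(pid,cid);
    CREATE TABLE tagxref(tagid INTEGER, rid INTEGER);
    CREATE INDEX tagxref_i1 ON tagxref(tagid);
""")
for kw in ("JOIN", "CROSS JOIN"):
    rows = db.execute(
        "EXPLAIN QUERY PLAN SELECT 1 FROM plink " + kw +
        " tagxref ON tagxref.rid=plink.cid" +
        " WHERE tagxref.tagid=11 AND plink.pid=5").fetchall()
    print(kw, [r[-1] for r in rows])  # with CROSS JOIN, plink stays outermost
db.close()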

We said that algorithm 1 is "faster", but strictly speaking this is not accurate. Algorithm 1 is faster on common repositories, but it is possible to construct a repository in which every commit is on a separate, uniquely named branch and all commits are children of the root commit. In that case, TAGXREF_I1 would become more selective than PLINK_I1, and algorithm 2 really would be faster. In practice, however, such repositories are extremely unlikely, so hard-coding the nested loop order with the CROSS JOIN syntax is an appropriate solution to this problem.

5.0 Checklist for avoiding or fixing query planner problems

Don't panic! Cases where the query planner picks a poor plan are actually quite rare. You may never encounter one in your application. If you are not having performance problems, you do not need to worry about any of this.

Create good indices. Most SQL performance problems are caused not by query planner issues, but by a lack of appropriate indices. Make sure indices are available to assist all of your large queries. Most performance issues can be resolved by one or two CREATE INDEX commands, with no changes to application code.
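For example (hypothetical table and column names), EXPLAIN QUERY PLAN shows a full table scan turning into an indexed search after a single CREATE INDEX:

import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE event(objid INTEGER, type TEXT, mtime REAL)")
q = "EXPLAIN QUERY PLAN SELECT * FROM event WHERE objid=42"
print(db.execute(q).fetchall()[0][-1])  # SCAN event: full table scan
db.execute("CREATE INDEX event_objid ON event(objid)")
print(db.execute(q).fetchall()[0][-1])  # SEARCH event USING INDEX event_objid
db.close()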

Avoid creating low-quality indices. A low-quality index (for the purposes of this checklist) is one where the leftmost column of the index has more than 10 or 20 rows with the same value. In particular, avoid using boolean or enumerated-type columns as the leftmost column of an index.

The Fossil performance problem described in the previous section of this article arose because the leftmost column (the TAGID column) of the TAGXREF_I1 index on the TAGXREF table had more than ten thousand entries with the same value.

If you must use a low-quality index, be sure to run ANALYZE. Low-quality indices will not confuse the query planner as long as the planner knows that the indices are of low quality. The planner learns this from the contents of the SQLITE_STAT1 table, which is computed by the ANALYZE command.

Of course, ANALYZE is only effective if the database has a significant amount of content in it to begin with. When creating a new database that you expect to accumulate a lot of data, you can run the command "ANALYZE sqlite_master" to create the SQLITE_STAT1 table, then prepopulate SQLITE_STAT1 (using ordinary INSERT statements) with content that describes a typical database for your application, perhaps content extracted after running ANALYZE on a well-populated template database in the lab.
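Here is a sketch of that technique in Python, assuming the stat string was captured earlier by running ANALYZE on a well-populated template database; the table, index, and stat values shown are illustrative only:

import sqlite3

db = sqlite3.connect("app.db")
db.executescript("""
    CREATE TABLE IF NOT EXISTS tagxref(tagid INTEGER, mtime TIMESTAMP, rid INTEGER);
    CREATE INDEX IF NOT EXISTS tagxref_i1 ON tagxref(tagid, mtime);
""")
db.execute("ANALYZE sqlite_master")  # creates sqlite_stat1 if it does not exist
db.execute("DELETE FROM sqlite_stat1")
db.execute("INSERT INTO sqlite_stat1(tbl, idx, stat) VALUES(?,?,?)",
           ("tagxref", "tagxref_i1", "10000 5000 1"))  # copied from the template
db.execute("ANALYZE sqlite_master")  # asks SQLite to reload the statistics
db.commit()
db.close()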

Instrument your code. Add logic that lets you quickly and easily find out which queries are taking too much time, so that you can work on just those specific queries.

If a query might use a low-quality index on a database that has not been analyzed, use the CROSS JOIN syntax to force a particular nested loop order. SQLite treats the CROSS JOIN operator specially: it forces the table on the left to be the outer loop relative to the table on the right.

Avoid doing this if at all possible, since it defeats one of the great advantages of the whole SQL language concept, namely that the application programmer does not need to get involved in query planning. If you do use CROSS JOIN, wait until late in the development cycle, and comment the use of CROSS JOIN carefully so that it can be taken out later if possible. Avoid using CROSS JOIN early in the development cycle, as doing so is a premature optimization, which is famously the root of all evil.

Use the unary "+" operator to disqualify terms in the WHERE clause. If the query planner insists on choosing a poor-quality index for a particular query when a higher-quality index is available, then careful use of the unary "+" operator in the WHERE clause can force the planner away from the poor-quality index. Add this operator sparingly, and especially avoid it early in the application development cycle. Note in particular that adding the unary "+" to an equality expression may change the result of that expression if type affinity is involved.
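A sketch with a hypothetical schema: prefixing a term with unary "+" turns the bare column reference into an expression, so that term can no longer drive an index lookup, steering the planner toward the other index:

import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE event(objid INTEGER, type TEXT, mtime REAL);
    CREATE INDEX event_type ON event(type);    -- low quality: few distinct values
    CREATE INDEX event_mtime ON event(mtime);
""")
for where in ("type='ci' AND mtime>100", "+type='ci' AND mtime>100"):
    plan = db.execute("EXPLAIN QUERY PLAN SELECT * FROM event WHERE " + where).fetchall()
    print(where, "->", plan[0][-1])  # the '+' form cannot use event_type
db.close()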

Use the INDEXED BY syntax to force a problematic query to use a specific index. As with the previous two bullets, avoid this if possible, and especially avoid it early in development, as it is clearly a premature optimization.
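A sketch with the same hypothetical schema: INDEXED BY names the index that must be used (and the statement fails with an error if that index cannot be used):

import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE event(objid INTEGER, type TEXT, mtime REAL);
    CREATE INDEX event_mtime ON event(mtime);
""")
plan = db.execute("""
    EXPLAIN QUERY PLAN
    SELECT * FROM event INDEXED BY event_mtime
     WHERE mtime>100 AND type='ci'
""").fetchall()
print(plan[0][-1])  # SEARCH ... USING INDEX event_mtime (mtime>?)
db.close()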

6.0 Conclusion

SQLite's query planner does an excellent job of choosing fast algorithms for running SQL statements. This is true of the legacy query planner, and even more true of the new next-generation query planner. Occasionally, because it has incomplete information, the query planner may select a suboptimal plan. This happens less often with the next-generation query planner than with the legacy planner, but it can still happen. Even in those rare cases, all the application developer needs to do is understand what is going on and help the query planner do the right thing. For the most part, the next-generation query planner is simply a new enhancement to SQLite that makes applications run a little faster, without requiring any additional thought or action from developers.

