2025-03-04 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/01 Report--
Introduction: TPC-C is the most credible transaction-processing benchmark in the database field. Its results come down to two headline metrics: performance (tpmC) and price-performance (price/tpmC). Performance measures how fast the database can run; price-performance measures how cheaply it can do so.
On May 20, the TPC announced on its official website that OceanBase, the distributed relational database independently developed by Ant Financial Services Group, had broken its own world record in the TPC-C benchmark: transaction-processing performance rose from 60.88 million tpmC in the previous test to 700 million tpmC, and price-performance improved from ¥6.25/tpmC to ¥3.98/tpmC.
According to the report, this test ran OceanBase on ECS cloud servers provided by Alibaba Cloud, with the database tier growing from 207 64-core ECS i2 servers to 1,557 84-core ECS i2d servers. Both tests used the Oracle compatibility mode of OceanBase version 2.2.
A second TPC-C run less than a year after the first was itself something of a surprise. Even more surprising were the results, which are frankly brutal: tpmC rose more than tenfold, and the price per tpmC fell by 36.3%. This greatly raises the bar for any newcomer hoping to clear it.
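The headline numbers can be sanity-checked with a few lines of Python; a minimal sketch, using only the figures quoted in the report above:

```python
# Sanity-check of the headline TPC-C figures quoted in the article.
prev_tpmc = 60_880_000   # previous test: 60.88 million tpmC
new_tpmc = 700_000_000   # new test: 700 million tpmC
prev_price = 6.25        # previous price-performance, CNY per tpmC
new_price = 3.98         # new price-performance, CNY per tpmC

perf_multiple = new_tpmc / prev_tpmc
price_drop_pct = (prev_price - new_price) / prev_price * 100

print(f"performance multiple: {perf_multiple:.1f}x")   # ~11.5x, i.e. "more than tenfold"
print(f"price per tpmC drop:  {price_drop_pct:.1f}%")  # 36.3%, as the article states
```

The 36.3% figure in the article is exactly the relative drop from ¥6.25 to ¥3.98, and the "tenfold" claim is in fact closer to 11.5x.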
To get back to the point, let's start with the conclusion:
1. OceanBase has become the only database to pass the TPC-C test with a cluster of more than a thousand nodes.
2. With a performance of 700 million tpmC and a price-performance of ¥3.98/tpmC, OceanBase has set a record that even Oracle would find difficult to surpass.
What does it mean for a thousand-node cluster to pass TPC-C?
Some may say that few companies will ever need a 1,500+ node transactional database, so this is just a gimmick. The author believes the question should be viewed with the future in mind, looking ahead of where the business is today.
Performance, like grain, has never been in surplus in the history of databases. That is why performance tuning has always been one of the hottest topics in the field.
The era of IoT is coming, and the scale of data we will need to handle is unimaginable today, just as a decade ago we could not have imagined today's Singles' Day trading volume. One thing is certain: higher database performance and capacity can fully liberate the imagination of the business.
As the saying goes, many hands make light work. For relational databases, however, it is difficult to achieve linear performance growth simply by adding nodes, especially once the node count passes a certain point. Because of this limitation, we often see a core database split across multiple clusters when a single cluster's performance can no longer keep up.
That is why we rarely hear of transactional databases with more than 100 nodes, let alone 1,500+.
OceanBase's distributed architecture allows it to scale linearly, and its built-in transparent partitioning makes large-scale relational database clusters practical.
This test demonstrates OceanBase's real horizontal scalability and proves that the database's processing power and capacity will not become a shackle on the growth of enterprise business.
A brutal new record that is hard to surpass
With performance of 700 million tpmC and price-performance of ¥3.98/tpmC, this brutal record greatly raises the bar for newcomers.
Take Oracle as an example. Judging from its existing versions, surpassing this result may be very difficult.
This judgment rests mainly on two factors: compute capability and I/O capability.
First, compute: Oracle RAC scales to only a few dozen nodes; beyond 32 nodes, RAC can hardly handle OLTP workloads, only OLAP. Even if per-node processing power were large enough that a few dozen machines could match 1,500, per-node network capacity would not suffice. OceanBase used 1,500 nodes, each with a 10 Gbit/s network; to match that with 30 nodes, each node would need 500 Gbit/s of network bandwidth, which is very difficult.
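The bandwidth argument above is simple arithmetic; a short sketch using the node counts and link speed from the text:

```python
# Aggregate network capacity needed for a small cluster to match 1,500 nodes.
oceanbase_nodes = 1500
link_gbps = 10            # each OceanBase node has a 10 Gbit/s network link
small_cluster_nodes = 30  # a hypothetical 30-node RAC-style cluster

aggregate_gbps = oceanbase_nodes * link_gbps          # total fabric capacity
per_node_gbps = aggregate_gbps / small_cluster_nodes  # what each of 30 nodes must carry
print(f"each of {small_cluster_nodes} nodes would need {per_node_gbps:.0f} Gbit/s")
# → 500 Gbit/s per node, far beyond a single server's network capacity
```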
Next, I/O: when Oracle achieved 30.25 million tpmC, it used 97 storage arrays, more than two-thirds of which were flash-based memory cards. To reach 700 million tpmC, more than 20 times that result, would require roughly 2,000 storage arrays of similar performance. Even if the I/O capacity of a single array has improved since then, per-array network bandwidth remains a bottleneck.
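The storage estimate follows the same back-of-the-envelope logic; a sketch based on the Oracle figures quoted above:

```python
# Rough scaling estimate: storage arrays needed to take Oracle's old
# 30.25M tpmC result (97 arrays) up to the 700M tpmC level.
oracle_tpmc = 30_250_000   # Oracle's record as quoted in the article
oracle_arrays = 97         # storage arrays used in that test
target_tpmc = 700_000_000

scale = target_tpmc / oracle_tpmc    # a bit over 23x
arrays_needed = oracle_arrays * scale
print(f"scale factor: {scale:.1f}x, arrays of similar performance: ~{arrays_needed:.0f}")
# → 23.1x, ~2245 arrays (the article rounds this to "about 2,000")
```

This assumes per-array performance stays constant; as the text notes, even with faster individual arrays, per-array network bandwidth is the bottleneck.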
Closing thoughts
Clearly, both tests were planned by OceanBase. The first, whose overall performance was not far from that of traditional commercial databases, was evidently a warm-up. The second is the real demonstration of OceanBase's distributed capability, namely horizontal scalability.
Of course, OceanBase still has a long way to go to become an excellent general-purpose database, and TPC-C is a good starting point.
© 2024 shulou.com SLNews company. All rights reserved.