This article looks at the question "Why can't Flink write to MySQL in real time?", a doubt that many people run into in daily work. It walks through a simple, easy-to-follow explanation and fix.
The article is divided into three parts:
Problem description
Solution idea
Cause analysis
Problem description
With Flink 1.10 and the flink-jdbc connector, reading from and writing to MySQL both work. When writing, however, the inserted rows only become visible in MySQL after the Flink program has finished. This is stream processing, so why can't it emit its results in real time?
Related code snippet:
JDBCAppendTableSink.builder()
    .setDrivername("com.mysql.jdbc.Driver")
    .setDBUrl("jdbc:mysql://localhost/flink")
    .setUsername("root")
    .setPassword("123456")
    .setParameterTypes(BasicTypeInfo.INT_TYPE_INFO, BasicTypeInfo.STRING_TYPE_INFO)
    .setQuery("insert into batch_size values (?, ?)") // two placeholders to match the two parameter types
    .build();
Solution idea
In Flink 1.10 this is one of those "obvious once you know it, baffling if you don't" issues, and beginners hit it very easily. So is it really true that Flink cannot write to MySQL in real time? Of course not. The problem is solved by adding a single line to the code above:
.setBatchSize(1) // the write buffer flushed to MySQL holds only 1 record
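For completeness, here is the same sink builder with the batch size set to 1. This is a minimal sketch reusing the illustrative driver, URL, credentials and table from the snippet above:

// Sketch: the append sink from above, but flushing after every record.
JDBCAppendTableSink sink = JDBCAppendTableSink.builder()
    .setDrivername("com.mysql.jdbc.Driver")
    .setDBUrl("jdbc:mysql://localhost/flink")
    .setUsername("root")
    .setPassword("123456")
    .setParameterTypes(BasicTypeInfo.INT_TYPE_INFO, BasicTypeInfo.STRING_TYPE_INFO)
    .setQuery("insert into batch_size values (?, ?)")
    .setBatchSize(1) // buffer size of 1: every record is written to MySQL immediately
    .build();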
Cause analysis
The problem is solved, but what is the root cause? As you may already suspect, it is straightforward: for performance reasons, Flink's JDBC sink was designed with a default write-buffer size.
In Flink 1.10, the base class of JDBCOutputFormat, AbstractJDBCOutputFormat, defines the related constant DEFAULT_FLUSH_MAX_SIZE with a default value of 5000. In a learning or test setup the data volume is usually far below 5000 records, so everything sits in the buffer; only when the source runs out of data and the job ends is the result flushed to MySQL. That is why the output is not written to MySQL in real time (record by record). The behaviour is sketched below:
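The following is a minimal, self-contained model of the buffering behaviour described above. It is not the Flink source code; only the two default values (5000 and 0) are taken from the connector, everything else is a simplified illustration:

import java.util.ArrayList;
import java.util.List;

// Simplified model of the JDBC sink's write buffer.
class BufferingSinkModel {
    static final int DEFAULT_FLUSH_MAX_SIZE = 5000;       // records buffered before a write
    static final long DEFAULT_FLUSH_INTERVAL_MILLS = 0L;  // 0 = no time-based flush at all

    private final List<Object> buffer = new ArrayList<>();
    private final int flushMaxSize = DEFAULT_FLUSH_MAX_SIZE;

    void writeRecord(Object record) {
        buffer.add(record);
        if (buffer.size() >= flushMaxSize) { // only a full buffer triggers a write...
            flush();
        }                                    // ...otherwise records just wait in memory
    }

    void close() {
        flush();                             // remaining records are written only when the job ends
    }

    private void flush() {
        // in the real sink this executes the batched INSERT statements against MySQL
        buffer.clear();
    }
}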
There is a second factor to note: time. The other constant above, DEFAULT_FLUSH_INTERVAL_MILLS, defaults to 0, which means there is no time-based flush at all; the write is triggered only when the buffer fills up or the job ends.
In other words, some beginners run into this even when they deliberately set a breakpoint while debugging so that the job never finishes: they can wait as long as they like, and the data still never appears in MySQL.
In Flink 1.10, AbstractJDBCOutputFormat has two implementation classes, JDBCOutputFormat and JDBCUpsertOutputFormat, which back the two sink types JDBCAppendTableSink and JDBCUpsertTableSink respectively. So in Flink 1.10 both the append sink and the upsert sink have this same problem. The difference is that with UpsertTableSink the user can at least configure the flush interval, whereas AppendTableSink does not even expose a setting for time-based flushing.
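As an illustration of the upsert side, here is a sketch of configuring both a size-based and a time-based flush. The builder method names (setFlushMaxSize, setFlushIntervalMills) and the JDBCOptions fields are my reading of the Flink 1.10 flink-jdbc connector and should be checked against your version; `schema` stands for the sink table's TableSchema, defined elsewhere:

// Sketch only: assumed 1.10 API for time-based flushing on the upsert sink.
JDBCUpsertTableSink upsertSink = JDBCUpsertTableSink.builder()
    .setOptions(JDBCOptions.builder()
        .setDBUrl("jdbc:mysql://localhost/flink")
        .setDriverName("com.mysql.jdbc.Driver")
        .setUsername("root")
        .setPassword("123456")
        .setTableName("batch_size")
        .build())
    .setTableSchema(schema)       // TableSchema of the sink table (defined elsewhere)
    .setFlushMaxSize(1)           // flush after every record, like setBatchSize(1) above
    .setFlushIntervalMills(1000)  // or flush at least once per second, whichever comes first
    .build();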
At this point, the study of why Flink appears unable to write to MySQL in real time is over. Hopefully it has cleared up the doubt; pairing the theory with a quick experiment of your own is the best way to make it stick.