This article introduces the new features of Flink 1.8.0 and summarizes the changes in this release that users upgrading from earlier versions should be aware of.
Flink 1.8.0 has been released. The main changes are:
1. Old state is now cleaned up continuously and incrementally
2. Hadoop support has changed
3. Static methods on TableEnvironment have been deprecated
4. Flink 1.8 no longer publishes binaries that bundle Hadoop
More details are as follows:
This section discusses the important changes between Flink 1.7 and Flink 1.8 in terms of configuration, features, and dependencies.
State
1. Continuous, incremental cleanup of old keyed state with TTL (time-to-live)
We introduced TTL (time-to-live) for keyed state in Flink 1.6 (FLINK-9510). This feature makes expired keyed state entries inaccessible and allows them to be cleaned up when they are accessed. In addition, state is now also cleaned up when a savepoint or checkpoint is written. Flink 1.8 introduces continuous cleanup of old entries for both the RocksDB state backend (FLINK-10471) and the heap state backend (FLINK-10473). This means that old entries are continuously cleaned up, according to the TTL setting.
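As an illustration, here is a minimal sketch of a TTL configuration in Java; the state name, the seven-day TTL, and the cleanup strategy choices are just examples, and the exact builder options should be checked against the 1.8 state TTL documentation. For the RocksDB backend, FLINK-10471 adds cleanup during compaction, which is enabled through its own builder option and backend setting.

    import org.apache.flink.api.common.state.StateTtlConfig;
    import org.apache.flink.api.common.state.ValueStateDescriptor;
    import org.apache.flink.api.common.time.Time;

    public class TtlStateExample {
        // Builds a descriptor for keyed state whose entries expire 7 days after the last write.
        static ValueStateDescriptor<Long> lastLoginDescriptor() {
            StateTtlConfig ttlConfig = StateTtlConfig
                .newBuilder(Time.days(7))
                // pre-1.8 behaviour: expired entries are dropped when a full savepoint/checkpoint is written
                .cleanupFullSnapshot()
                // new in 1.8: incremental cleanup for the heap state backend (FLINK-10473),
                // checking up to 10 entries per state access
                .cleanupIncrementally(10, false)
                .build();

            ValueStateDescriptor<Long> descriptor =
                new ValueStateDescriptor<>("lastLogin", Long.class);
            descriptor.enableTimeToLive(ttlConfig);
            return descriptor;
        }
    }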
2. New support for schema migration when restoring a savepoint
With Flink 1.7.0, we added support for changing the state schema when using the AvroSerializer (FLINK-10605). With Flink 1.8.0, we have made progress migrating all built-in TypeSerializers to a new serializer snapshot abstraction, which in theory allows schema migration. Of the serializers that ship with Flink, schema migration is now supported for the PojoSerializer (FLINK-11485) and the Java EnumSerializer (FLINK-11334), as well as for Kryo in limited cases (FLINK-11323).
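To make the PojoSerializer case concrete, here is a minimal, hypothetical sketch of what this enables; the class and field names are made up for illustration.

    // A keyed-state POJO whose schema evolved between savepoints. With the
    // PojoSerializer's new schema migration, a savepoint written with an older
    // version of this class that lacked the "currency" field can still be
    // restored: the added field starts out with Java's default value (null here),
    // and fields removed from the class are dropped on restore.
    public class Account {
        public long id;          // existed when the savepoint was taken
        public String owner;     // existed when the savepoint was taken
        public String currency;  // added after the savepoint was taken
    }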
3. Savepoint compatibility
Savepoints from Flink 1.2 that contain a Scala TraversableSerializer are no longer compatible with Flink 1.8 because of an update to this serializer (FLINK-11539). You can work around this limitation by first upgrading to a version between Flink 1.3 and Flink 1.7 and then updating to Flink 1.8.
4. RocksDB version bump and switch to FRocksDB (FLINK-10471)
Supporting continuous state cleanup with TTL requires some changes in RocksDB, so Flink had to switch to a custom RocksDB build named FRocksDB. The FRocksDB version that is used is based on the upgraded RocksDB version 5.17.2. For Mac OS X, RocksDB version 5.17.2 is only supported with OS X version 10.13 or later.
Maven dependencies
1. Changes to the bundling of Hadoop libraries with Flink (FLINK-11266)
Convenience binaries that include Hadoop are no longer released.
If a deployment relies on flink-shaded-hadoop2 being included in flink-dist, you must manually download a pre-packaged Hadoop jar from the optional components section of the download page and copy it into the /lib directory. Alternatively, a Flink distribution that includes Hadoop can be built by packaging flink-dist with the include-hadoop Maven profile activated.
As Hadoop is no longer included in flink-dist by default, specifying -DwithoutHadoop when packaging flink-dist no longer affects the build.
Configuration
1. TaskManagers bind to the host IP address instead of the hostname (FLINK-11716)
TaskManagers now bind to the host IP address instead of the hostname by default. This behaviour can be controlled through the configuration option taskmanager.network.bind-policy. If your Flink cluster experiences inexplicable connection problems after upgrading, try setting taskmanager.network.bind-policy: name in flink-conf.yaml to return to the pre-1.8 behaviour.
Table API
1. Deprecation of direct usage of the Table constructor (FLINK-11447)
Flink 1.8 deprecates direct usage of the constructor of the Table class in the Table API. This constructor was previously used to perform a join with a lateral table. You should now use table.joinLateral() or table.leftOuterJoinLateral() instead, as in the sketch below. This change is necessary for converting the Table class into an interface, which will make the Table API more maintainable and cleaner in the future.
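A rough sketch of the new methods in the Java Table API follows; the split function, the field names, and the sample data are made up for illustration, so treat this as a sketch rather than copy-paste code.

    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.table.api.Table;
    import org.apache.flink.table.api.java.StreamTableEnvironment;
    import org.apache.flink.table.functions.TableFunction;
    import org.apache.flink.types.Row;

    public class LateralJoinExample {

        // A user-defined table function that splits a line into words.
        public static class Split extends TableFunction<String> {
            public void eval(String line) {
                for (String word : line.split(" ")) {
                    collect(word);
                }
            }
        }

        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);

            DataStream<String> lines = env.fromElements("to be or not", "that is the question");
            Table input = tableEnv.fromDataStream(lines, "line");

            tableEnv.registerFunction("split", new Split());

            // Previously this was written roughly as a join with a Table built via the constructor.
            Table words = input.joinLateral("split(line) as word");
            Table allLines = input.leftOuterJoinLateral("split(line) as word");

            tableEnv.toAppendStream(words, Row.class).print();
            env.execute("lateral join example");
        }
    }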
2. Introduction of a new CSV format descriptor (FLINK-9964)
This release introduces a new format descriptor for CSV files that is compliant with RFC 4180. The new descriptor is available as org.apache.flink.table.descriptors.Csv. For now, it can only be used together with the Kafka connector. The old descriptor remains available as org.apache.flink.table.descriptors.OldCsv for use with file system connectors.
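A rough sketch of wiring the new descriptor up with the Kafka connector in Java is shown below; the topic, field names, and connection properties are placeholders, and the exact set of optional Csv builder methods should be checked against the 1.8 documentation.

    import org.apache.flink.api.common.typeinfo.Types;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.table.api.java.StreamTableEnvironment;
    import org.apache.flink.table.descriptors.Csv;
    import org.apache.flink.table.descriptors.Kafka;
    import org.apache.flink.table.descriptors.Schema;

    public class CsvDescriptorExample {
        public static void main(String[] args) {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);

            tableEnv
                .connect(new Kafka()
                    .version("universal")
                    .topic("transactions")                         // hypothetical topic
                    .property("bootstrap.servers", "localhost:9092"))
                .withFormat(new Csv()                              // new RFC 4180-compliant descriptor
                    .deriveSchema())                               // reuse the table schema for the format
                .withSchema(new Schema()
                    .field("user", Types.STRING)
                    .field("amount", Types.LONG))
                .inAppendMode()
                .registerTableSource("Transactions");
        }
    }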
3. Deprecation of static builder methods on TableEnvironment (FLINK-11445)
In order to separate the API from the actual implementation, the static method TableEnvironment.getTableEnvironment() has been deprecated. You should now use Batch/StreamTableEnvironment.create() instead.
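For example, a minimal sketch of the new factory methods:

    import org.apache.flink.api.java.ExecutionEnvironment;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.table.api.java.BatchTableEnvironment;
    import org.apache.flink.table.api.java.StreamTableEnvironment;

    public class CreateTableEnvironments {
        public static void main(String[] args) {
            // Old (deprecated): TableEnvironment.getTableEnvironment(env)

            StreamExecutionEnvironment streamEnv = StreamExecutionEnvironment.getExecutionEnvironment();
            StreamTableEnvironment streamTableEnv = StreamTableEnvironment.create(streamEnv);

            ExecutionEnvironment batchEnv = ExecutionEnvironment.getExecutionEnvironment();
            BatchTableEnvironment batchTableEnv = BatchTableEnvironment.create(batchEnv);
        }
    }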
4. Changes to the Table API Maven modules (FLINK-11064)
Users who previously had a flink-table dependency need to update their dependencies to flink-table-planner plus the correct API module, depending on whether they use Java or Scala: flink-table-api-java-bridge or flink-table-api-scala-bridge.
5. Change to external catalog table builder (FLINK-11522)
ExternalCatalogTable.builder() has been deprecated in favor of ExternalCatalogTableBuilder().
6. Change to the naming of the Table API connector jars (FLINK-11026)
The naming scheme for the Kafka/Elasticsearch 6 sql-jars has changed. In Maven terms, they no longer have the sql-jar qualifier, and the artifactId is now prefixed with flink-sql instead of flink, for example flink-sql-connector-kafka.
7. Change to how Null literals are specified (FLINK-11785)
Null literals in the Table API must now be defined with nullOf(type) instead of Null(type). The old approach has been deprecated.
Connector
1. Introduction of a new KafkaDeserializationSchema with direct access to the ConsumerRecord (FLINK-8354)
For the FlinkKafkaConsumers, we introduced a new KafkaDeserializationSchema that gives direct access to the Kafka ConsumerRecord. This subsumes the KeyedSerializationSchema functionality, which is deprecated but still available for now.
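As a hedged sketch of what implementing the new interface might look like, the example below emits (offset, value) pairs, using record metadata that a plain DeserializationSchema cannot see; the produced type and the use of the offset are arbitrary choices for illustration.

    import java.nio.charset.StandardCharsets;

    import org.apache.flink.api.common.typeinfo.TypeInformation;
    import org.apache.flink.api.common.typeinfo.Types;
    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.flink.streaming.connectors.kafka.KafkaDeserializationSchema;
    import org.apache.kafka.clients.consumer.ConsumerRecord;

    // Emits (record offset, record value) pairs for each consumed Kafka record.
    public class OffsetAndValueSchema implements KafkaDeserializationSchema<Tuple2<Long, String>> {

        @Override
        public boolean isEndOfStream(Tuple2<Long, String> nextElement) {
            return false;
        }

        @Override
        public Tuple2<Long, String> deserialize(ConsumerRecord<byte[], byte[]> record) throws Exception {
            String value = new String(record.value(), StandardCharsets.UTF_8);
            return Tuple2.of(record.offset(), value);
        }

        @Override
        public TypeInformation<Tuple2<Long, String>> getProducedType() {
            return Types.TUPLE(Types.LONG, Types.STRING);
        }
    }

Such a schema can then be passed to the FlinkKafkaConsumer constructor in place of a plain DeserializationSchema.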
2. FlinkKafkaConsumer now filters restored partitions based on the topic specification (FLINK-10342)
Starting with Flink 1.8.0, the FlinkKafkaConsumer always filters out restored partitions that are no longer associated with the topics specified to be subscribed to at restore time. This behaviour did not exist in previous versions of the FlinkKafkaConsumer. If you want to keep the previous behaviour, use the disableFilterRestoredPartitionsWithSubscribedTopics() configuration method on the FlinkKafkaConsumer, as shown in the sketch below.
Consider this example: you have a Kafka consumer reading from topic A, you take a savepoint, then change your Kafka consumer to read from topic B instead, and restart your job from the savepoint. Before this change, your consumer would now consume both topics A and B, because it is stored in the consumer's state that it was reading from topic A. With this change, your consumer will only consume topic B after the restore, because the configured topics are used to filter the topics stored in state.
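A minimal sketch of opting out of the new behaviour follows; the topic name, group id, and bootstrap servers are placeholders.

    import java.util.Properties;

    import org.apache.flink.api.common.serialization.SimpleStringSchema;
    import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

    public class RestoredPartitionFilterExample {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.setProperty("bootstrap.servers", "localhost:9092");
            props.setProperty("group.id", "example-group");

            FlinkKafkaConsumer<String> consumer =
                new FlinkKafkaConsumer<>("topic-B", new SimpleStringSchema(), props);

            // Keep consuming partitions that are present in the restored state
            // even if they no longer match the subscribed topic "topic-B".
            consumer.disableFilterRestoredPartitionsWithSubscribedTopics();
        }
    }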
Other interface changes:
1. The canEqual() method has been removed from the TypeSerializer interface (FLINK-9803)
The canEqual() methods are typically used to make proper equality checks across hierarchies of types. The TypeSerializer does not actually require this property, so the method has been removed.
2. Removal of the CompositeSerializerSnapshot utility class (FLINK-11073)
The CompositeSerializerSnapshot utility class has been removed. You should now use CompositeTypeSerializerSnapshot for snapshots of composite serializers that delegate serialization to multiple nested serializers.
This concludes the overview of the new features in Flink 1.8.0. We hope it has cleared up your doubts; pairing the notes above with some hands-on practice is the best way to learn, so give the new release a try.