
What are the new features of Kafka 2.5.0?


This article mainly explains the new features of Kafka 2.5.0. Interested friends may wish to have a look; the material introduced here is simple, fast, and practical. Now let the editor take you through the new features of Kafka 2.5.0.

I. New features

1. Kafka Streams: add Cogroup to the DSL

When multiple streams are aggregated together to form a single larger object (for example, a shopping site may have a cart stream, a wishlist stream, and a purchase stream that together make up a customer), it is very difficult to express this in the Kafka Streams DSL.

Typically, you would group and aggregate each stream into its own KTable, then make multiple outer join calls, and end up with one KTable holding the desired objects. This creates a state store for every stream and a long chain of ValueJoiners, and each new record has to pass through this whole join chain to reach the final object.

Adding a Cogroup method that uses a single state store will (see the sketch after this list):

Reduce the number of fetches from the state store. With multiple joins, a new value arriving on any stream triggers a chain reaction: the join processors keep calling ValueGetters until all the state stores have been accessed.

Slightly improve performance. As mentioned above, all the ValueGetters get called, which in turn causes all the ValueJoiners to be called, forcing the current join values of all the other streams to be recomputed, and that hurts performance.
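
The snippet below is a minimal sketch of the new cogroup DSL (not taken from the article). The topic names cart-events, wishlist-events, and purchase-events are hypothetical, the streams are assumed to be keyed by customer id, and plain String values are used so the example stays self-contained:

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.KGroupedStream;
import org.apache.kafka.streams.kstream.KTable;

public class CogroupSketch {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();

        // Three input streams, all keyed by customer id (hypothetical topics).
        KGroupedStream<String, String> carts = builder
                .stream("cart-events", Consumed.with(Serdes.String(), Serdes.String()))
                .groupByKey(Grouped.with(Serdes.String(), Serdes.String()));
        KGroupedStream<String, String> wishlists = builder
                .stream("wishlist-events", Consumed.with(Serdes.String(), Serdes.String()))
                .groupByKey(Grouped.with(Serdes.String(), Serdes.String()));
        KGroupedStream<String, String> purchases = builder
                .stream("purchase-events", Consumed.with(Serdes.String(), Serdes.String()))
                .groupByKey(Grouped.with(Serdes.String(), Serdes.String()));

        // All three streams feed one aggregate backed by a single state store,
        // instead of three KTables plus a chain of outer joins.
        KTable<String, String> customers = carts
                .cogroup((id, cart, agg) -> agg + " cart:" + cart)
                .cogroup(wishlists, (id, wish, agg) -> agg + " wish:" + wish)
                .cogroup(purchases, (id, purchase, agg) -> agg + " purchase:" + purchase)
                .aggregate(() -> "");

        System.out.println(builder.build().describe());
    }
}
```

In a real application the String aggregate would be replaced by a proper customer object and a Materialized store with explicit serdes.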

2. Add support for TLS 1.3

Java 11 adds support for TLS 1.3. Now that Kafka supports Java 11, it should support TLS 1.3 as well.
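
As a rough, hedged sketch (not from the article), client-side SSL settings that allow TLS 1.3 alongside TLS 1.2 when running on Java 11 or newer could look like this; the bootstrap server, truststore path, and password are placeholders:

```java
import java.util.Properties;
import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.common.config.SslConfigs;

public class Tls13ClientConfigSketch {
    public static Properties sslProps() {
        Properties props = new Properties();
        props.put(CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG, "broker:9093");          // placeholder
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SSL");
        // On Java 11+ with Kafka 2.5, TLSv1.3 can be listed next to TLSv1.2.
        props.put(SslConfigs.SSL_ENABLED_PROTOCOLS_CONFIG, "TLSv1.3,TLSv1.2");
        props.put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG, "/path/to/truststore.jks"); // placeholder
        props.put(SslConfigs.SSL_TRUSTSTORE_PASSWORD_CONFIG, "changeit");                // placeholder
        return props;
    }
}
```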

3. Scala 2.11 is no longer supported

Why is it no longer supported?

We currently build Kafka for three Scala versions: 2.11, 2.12, and the recently released 2.13. Since we have to compile and run tests against every supported version, this is not a small cost from a development and testing perspective.

Scala 2.11.0 was released in April 2014, and support for 2.11.x ended in November 2017 (it will be more than two years by the time Kafka 2.5 is released). Scala 2.12.0 was released in November 2016 and Scala 2.13.0 in June 2019. Given this, it is time to drop support for Scala 2.11 so that the test matrix stays manageable (the most recent kafka-trunk-jdk8 build, which took nearly 10 hours, builds and runs unit and integration tests against three Scala versions). In addition, Scala 2.12 and later have better interoperability with Java 8 functional interfaces (first introduced in Scala 2.12). More specifically, lambdas in Scala 2.12 can be used with Java 8 functional interfaces in the same way as in Java 8 code.

Since Kafka 2.1.0, our download page has recommended the Kafka binaries built with Scala 2.12. In Kafka 2.2.0 we switched to Scala 2.12 as the default Scala version for the source tarball, builds, and system tests.

II. Improvements and fixes

Fixed Kafka Streams lag not being zero when the input topic is transactional

Kafka Streams: configure internal topics with message.timestamp.type=CreateTime

Add KStream#toTable to Streams DSL

Add Commit/List Offsets operations to AdminClient (see the sketch after this list)

Add VoidSerde to Serdes

Improved Sensor Retrieval

[KAFKA-3061] fixed Guava dependency problem

[KAFKA-4203] the default maximum message size of the Java producer is no longer the same as the broker default

[KAFKA-5868] Kafka consumer rebalance takes too long
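
Below is a hedged sketch of the new AdminClient offset operations mentioned above; the bootstrap server, topic, and consumer group are placeholders:

```java
import java.util.Collections;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.ListOffsetsResult;
import org.apache.kafka.clients.admin.OffsetSpec;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class AdminOffsetsSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder

        try (Admin admin = Admin.create(props)) {
            TopicPartition tp = new TopicPartition("my-topic", 0); // placeholder topic

            // List the latest offset of one partition.
            ListOffsetsResult latest = admin.listOffsets(Collections.singletonMap(tp, OffsetSpec.latest()));
            long endOffset = latest.partitionResult(tp).get().offset();
            System.out.println("end offset: " + endOffset);

            // Commit (reset) a consumer group's offset for that partition.
            Map<TopicPartition, OffsetAndMetadata> newOffsets =
                    Collections.singletonMap(tp, new OffsetAndMetadata(0L));
            admin.alterConsumerGroupOffsets("my-group", newOffsets).all().get(); // placeholder group
        }
    }
}
```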

See the official Kafka 2.5.0 release notes for the full list of changes.

III. Upgrading to 2.5.0 from earlier versions

If you are upgrading from a version prior to 2.1.x, see the note below about the change to the schema used to store consumer offsets. Once you change inter.broker.protocol.version to the latest version, you will not be able to downgrade to a version prior to 2.1.

Update server.properties on all brokers and add the following properties. CURRENT_KAFKA_VERSION refers to the version you are upgrading from. CURRENT_MESSAGE_FORMAT_VERSION refers to the message format version currently in use. If the message format version was previously overridden, keep its current value. Otherwise, if you are upgrading from a version prior to 0.11.0.x, set CURRENT_MESSAGE_FORMAT_VERSION to match CURRENT_KAFKA_VERSION.

inter.broker.protocol.version = CURRENT_KAFKA_VERSION (for example, 0.10.0, 0.11.0, 1.0, 2.0, 2.2).

log.message.format.version = CURRENT_MESSAGE_FORMAT_VERSION

If you are upgrading from 0.11.0.x or later and the message format has not been overridden, you only need to override the inter-broker protocol version.

inter.broker.protocol.version = CURRENT_KAFKA_VERSION (0.11.0, 1.0, 1.1, 2.0, 2.1, 2.2, 2.3).

Upgrade the brokers one at a time: shut down the broker, update the code, and restart it. Once you have done this for the whole cluster, the brokers will be running the latest version, and you can verify that the cluster's behavior and performance meet expectations. If there are any problems, you can still downgrade at this point.

After verifying the cluster's behavior and performance, bump the protocol version by editing inter.broker.protocol.version and setting it to 2.5.

Restart the brokers one at a time for the new protocol version to take effect. Once the brokers begin using the latest protocol version, the cluster can no longer be downgraded to an older version.

If you have overridden the message format version as described above, you need one more rolling restart to upgrade it to the latest version. Once all (or most) consumers have been upgraded to 0.11.0 or later, change log.message.format.version to 2.5 on each broker and restart them one by one. Note that the older Scala clients, which are no longer maintained, do not support the message format introduced in 0.11, so a newer Java client must be used to avoid conversion costs.

Notable changes in 2.5.0 that may affect the upgrade

When the COOPERATIVE rebalance protocol is used, Consumer#poll can still return data while the consumer is in the middle of a rebalance. In addition, Consumer#commitSync can now throw a non-fatal RebalanceInProgressException to notify the user of such an event and distinguish it from the fatal CommitFailedException, allowing the user to complete the in-progress rebalance and then retry committing offsets for the partitions it still owns.
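
A hedged fragment of how an application might handle this; consumer is assumed to be an already-configured KafkaConsumer using the cooperative assignor:

```java
import org.apache.kafka.clients.consumer.CommitFailedException;
import org.apache.kafka.common.errors.RebalanceInProgressException;

try {
    consumer.commitSync();
} catch (RebalanceInProgressException e) {
    // Non-fatal: a cooperative rebalance is in progress. Keep calling poll()
    // so it can complete, then retry the commit for the partitions still owned.
} catch (CommitFailedException e) {
    // Fatal for this commit: the partitions were already reassigned to another member.
}
```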

To improve resilience in typical network environments, the default value of zookeeper.session.timeout.ms has been increased from 6s to 18s, and replica.lag.time.max.ms from 10s to 30s.

A new DSL operator, cogroup(), has been added for aggregating multiple streams at once.

A new KStream.toTable() API has been added to convert an input event stream into a KTable.
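
For example (a minimal fragment, not from the article; the topic name is a placeholder):

```java
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KTable;

StreamsBuilder builder = new StreamsBuilder();
// Treat each record of the event stream as an upsert into a table.
KTable<String, String> table = builder.<String, String>stream("input-topic").toTable();
```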

A new Serde type Void has been added to represent null keys or null values from an input topic.
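
For instance, a topic whose records carry no key could be consumed like this (a hedged fragment; the topic name is a placeholder):

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.KStream;

StreamsBuilder builder = new StreamsBuilder();
// Serdes.Void() only accepts null, making the "no key" expectation explicit.
KStream<Void, String> events =
        builder.stream("keyless-topic", Consumed.with(Serdes.Void(), Serdes.String()));
```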

UsePreviousTimeOnInvalidTimestamp has been deprecated and replaced with UsePartitionTimeOnInvalidTimestamp.
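
A hedged fragment showing how the replacement extractor might be set as the default timestamp extractor:

```java
import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.processor.UsePartitionTimeOnInvalidTimestamp;

Properties props = new Properties();
// Fall back to partition time when a record carries an invalid (negative) timestamp.
props.put(StreamsConfig.DEFAULT_TIMESTAMP_EXTRACTOR_CLASS_CONFIG,
        UsePartitionTimeOnInvalidTimestamp.class);
```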

Exactly-once semantics is improved by adding a pending-offset fencing mechanism and stronger transactional commit consistency checks, which greatly simplifies the implementation of scalable exactly-once applications.

KafkaStreams.store(String, QueryableStoreType) has been deprecated and replaced with KafkaStreams.store(StoreQueryParameters).
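
A hedged fragment of the replacement call, assuming a running KafkaStreams instance named streams and a state store named "counts-store":

```java
import org.apache.kafka.streams.StoreQueryParameters;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

ReadOnlyKeyValueStore<String, Long> store = streams.store(
        StoreQueryParameters.fromNameAndType("counts-store", QueryableStoreTypes.keyValueStore()));
Long count = store.get("some-key"); // placeholder key
```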

Scala 2.11 is no longer supported.

All Scala classes in the kafka.security.auth package have been deprecated. Note that kafka.security.auth.Authorizer and kafka.security.auth.SimpleAclAuthorizer were already deprecated in 2.4.0.

TLSv1 and TLSv1.1 are now disabled by default because they have known security vulnerabilities; only TLSv1.2 is enabled by default. You can continue to use TLSv1 and TLSv1.1 by explicitly enabling them in the configuration options ssl.protocol and ssl.enabled.protocols.

ZooKeeper has been upgraded to 3.5.7. The ZooKeeper upgrade from 3.4.X to 3.5.7 can fail if there are no snapshot files in the 3.4 data directory. This usually happens during test upgrades, where ZooKeeper 3.5.7 tries to load an existing 3.4 data directory in which no snapshot file was ever created.

ZooKeeper 3.5.7 supports TLS-encrypted connections to ZooKeeper with or without client certificates, and additional Kafka configurations are available to take advantage of this.

At this point, I believe you have a deeper understanding of the new features of Kafka 2.5.0; you might as well try them out in practice. For more related content, follow us and keep learning!
