2025-01-27 Update From: SLTechnology News & Howtos
Shulou (Shulou.com) 06/01 Report --
What are the principles and basic functions of Flink? Many newcomers have no clear idea, so this article summarizes them. I hope it answers the question for you.
Bounded and unbounded data sets

Bounded data set

A data set with a time boundary: in processing, the data has a definite start and a definite end.

Unbounded data set

The data has no boundary from the moment it starts being produced; it arrives continuously, with no defined end.
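The distinction can be illustrated with plain JDK streams (this is only an analogy, not the Flink API): a bounded data set is a finite collection that can be fully aggregated, while an unbounded one is an endless source of which we can only ever consume a prefix.

```java
import java.util.List;
import java.util.stream.Stream;

public class BoundedVsUnbounded {
    public static void main(String[] args) {
        // Bounded: a finite data set with a known start and end;
        // aggregates like sum() can run to completion.
        List<Integer> bounded = List.of(1, 2, 3);
        System.out.println("bounded sum = " + bounded.stream().mapToInt(Integer::intValue).sum());

        // Unbounded: an endless source; we can only process it
        // incrementally (here, take the first 5 elements).
        Stream<Integer> unbounded = Stream.iterate(0, n -> n + 1);
        System.out.println("first 5 of unbounded = " + unbounded.limit(5).toList());
    }
}
```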
Flink API structure

Flink SQL

The SQL API supports both batch and stream computation.
Flink Table
The Table API adds schema information to in-memory DataStream and DataSet data sets, abstracting them into a table structure, and then processes them through the Table API operators.
The SQL API can directly query tables registered through the Table API.
The Table API provides a large number of operators such as groupBy, join, etc., and supports conversion to and from DataStream and DataSet.
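As an illustration of querying a registered table through SQL (the table and column names here are hypothetical, not from the article), such a query might look like:

```sql
-- 'word_counts' is a hypothetical table assumed to have been
-- registered via the Table API, with columns word and cnt.
SELECT word, SUM(cnt) AS total
FROM word_counts
GROUP BY word
```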
DataStream
For users with programming experience: starting with version 1.12, the DataStream API handles both stream and batch data in a unified way, and the DataSet API is gradually being deprecated.
Stateful Stream Processing API
Flink's lowest-level interface (e.g. ProcessFunction) is very flexible: it can express complex stream-computing logic, but at a higher development cost. Enterprise users generally use it for secondary development or deep encapsulation.
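The essence of this layer is combining per-key state with per-event logic. A minimal JDK-only sketch of that idea (not Flink's actual ProcessFunction API) might look like:

```java
import java.util.HashMap;
import java.util.Map;

// A toy keyed-state processor: it keeps state per key and updates it
// on every event, which is the pattern Flink's low-level stateful
// stream processing API lets you build (with fault-tolerant state).
public class KeyedCounter {
    private final Map<String, Long> state = new HashMap<>();

    // Process one event: increment this key's count and return it.
    public long process(String key) {
        return state.merge(key, 1L, Long::sum);
    }

    public static void main(String[] args) {
        KeyedCounter counter = new KeyedCounter();
        counter.process("a");
        counter.process("b");
        System.out.println("count(a) = " + counter.process("a")); // prints "count(a) = 2"
    }
}
```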
Program structure (DataStream API)

Set the execution environment. getExecutionEnvironment() adapts to where the program runs: it creates a local environment when started locally and a remote one when submitted to a cluster.

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
// specify the processing mode: BATCH, STREAMING, or AUTOMATIC
env.setRuntimeMode(RuntimeExecutionMode.AUTOMATIC);
Specify data source
DataStream<String> text = env.readTextFile("/path/to/file");
Transform the data set

DataStream counts = text
    .flatMap(_.toLowerCase.split(" ")) // split each line into lowercase words
    .filter(_.nonEmpty)                // filter out empty tokens
    .map((_, 1))                       // map each word to a (word, 1) key-value pair
    .keyBy(0)                          // partition by the first field (the word)
    .sum(1);                           // sum the second field within each partition
Specify the result output

counts.writeAsCsv(outputPath, "\n", " ");
Specify a job name and trigger execution

env.execute("Streaming Count");

Data types
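The flatMap → filter → map → keyBy → sum word-count logic above can be mimicked with plain JDK streams (an analogy only; the real pipeline needs the Flink runtime, where keyBy/sum run incrementally over a stream):

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

public class WordCountSketch {
    // Same logic as the Flink pipeline: split each line into lowercase
    // words, drop empty tokens, then count occurrences per word.
    static Map<String, Long> count(List<String> lines) {
        return lines.stream()
                .flatMap(line -> Arrays.stream(line.toLowerCase().split(" ")))
                .filter(word -> !word.isEmpty())
                .collect(Collectors.groupingBy(Function.identity(), Collectors.counting()));
    }

    public static void main(String[] args) {
        // Counts: to=2, be=2, or=1, not=1 (map iteration order is unspecified)
        System.out.println(count(List.of("To be", "or not to be")));
    }
}
```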
Flink supports a rich set of data types, all uniformly described by TypeInformation; arbitrary Java and Scala data types are covered.
Native data types

BasicTypeInfo supports all Java primitive types and the String type.
Java Tuples
TupleTypeInfo describes Java Tuple data; as of the 2019 releases, up to 25 fields (Tuple1 through Tuple25) are supported.
Scala Case class
CaseClassTypeInfo supports any Scala case class, including Scala tuples; as of the 2019 releases, at most 22 fields are supported.
POJO Types
PojoTypeInfo describes any POJO type. The class must meet the following conditions:
The POJO class must be a public, non-inner class (or a public static inner class).
The POJO class must have a public no-argument constructor.
All fields of the POJO class must be public, or accessible through public getters and setters, and each field must be of a type Flink supports.
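A class satisfying these rules might look like the following (plain Java; the class and field names are illustrative, and the field types String and int are among the types Flink supports):

```java
// A POJO Flink can analyze: public top-level class, public
// no-argument constructor, and public fields of supported types.
public class WordCount {
    public String word;
    public int count;

    // Default constructor required by Flink's POJO rules.
    public WordCount() {}

    // Convenience constructor; Flink does not require this one.
    public WordCount(String word, int count) {
        this.word = word;
        this.count = count;
    }
}
```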
Flink Value Types
Value data types implement org.apache.flink.types.Value, which defines write() and read() methods for serialization and deserialization. The built-in implementations include IntValue, DoubleValue and StringValue.
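The write/read contract can be sketched with plain java.io in place of Flink's DataOutputView/DataInputView (an analogy only; a real Value type would implement the Flink interface):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInput;
import java.io.DataInputStream;
import java.io.DataOutput;
import java.io.DataOutputStream;
import java.io.IOException;

// Mimics the shape of Flink's Value contract: the type itself
// defines how it is written to and read from a binary stream.
public class MyIntValue {
    public int value;

    public void write(DataOutput out) throws IOException { out.writeInt(value); }
    public void read(DataInput in) throws IOException { value = in.readInt(); }

    public static void main(String[] args) throws IOException {
        MyIntValue v = new MyIntValue();
        v.value = 42;

        // Serialize, then deserialize into a fresh instance.
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        v.write(new DataOutputStream(bytes));
        MyIntValue copy = new MyIntValue();
        copy.read(new DataInputStream(new ByteArrayInputStream(bytes.toByteArray())));
        System.out.println("round-trip value = " + copy.value); // prints "round-trip value = 42"
    }
}
```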
Special data types

Scala's List, Map, Either, Option and Try; Java's Either; and Hadoop's Writable.
As a new-generation compute engine, Flink plays an important role in the field of real-time computing. At the same time, it involves complex concepts such as time windows and watermarks.
After reading the above, have you mastered the principles and basic functions of Flink? If you want to learn more, you are welcome to follow the industry information channel. Thank you for reading!