This article explains how to use Flink Table API and SQL with Elasticsearch. The content is simple and clear and easy to learn; please follow along step by step.
Use Table API & SQL with the Flink Elasticsearch connector to write data into an index of the Elasticsearch engine.
Sample environment
java.version: 1.8.x
flink.version: 1.11.1
elasticsearch: 6.x
Sample data source (project code can be downloaded from Gitee)
Flink series: building the development environment and sample data
Sample module (pom.xml)
Flink series: Table API & SQL sample module
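Note: the linked module's pom.xml is not reproduced here. As an assumption based on the versions above (not taken from the linked module), a matching Maven setup would typically include flink-table-api-java-bridge_2.11, flink-table-planner-blink_2.11, flink-connector-elasticsearch6_2.11 and flink-json, all at version 1.11.1.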
InsertToEs.java
package com.flink.examples.elasticsearch;

import org.apache.flink.streaming.api.TimeCharacteristic;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.StatementSet;
import org.apache.flink.table.api.TableResult;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

/**
 * @Description Uses Table API & SQL with the Flink Elasticsearch connector to write
 * data into an index of the Elasticsearch engine.
 *
 * Apache Flink offers two relational APIs for unified stream and batch processing:
 * the Table API and SQL.
 * Official reference: https://ci.apache.org/projects/flink/flink-docs-release-1.11/zh/dev/table/connectors/elasticsearch.html
 */
public class InsertToEs {

    // For the available connector properties, see the configuration class ElasticsearchValidator.
    static String table_sql =
            "CREATE TABLE my_users (\n" +
            "  user_id STRING,\n" +
            "  user_name STRING,\n" +
            "  uv BIGINT,\n" +
            "  pv BIGINT,\n" +
            "  PRIMARY KEY (user_id) NOT ENFORCED\n" +
            ") WITH (\n" +
            "  'connector.type' = 'elasticsearch',\n" +
            "  'connector.version' = '6',\n" +
            "  'connector.property-version' = '1',\n" +
            "  'connector.hosts' = 'http://192.168.110.35:9200',\n" +
            "  'connector.index' = 'users',\n" +
            "  'connector.document-type' = 'doc',\n" +
            "  'format.type' = 'json',\n" +
            "  'update-mode' = 'append' -- append | upsert\n" +
            ")";

    public static void main(String[] args) {
        // Build the StreamExecutionEnvironment
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // Default stream time characteristic
        env.setStreamTimeCharacteristic(TimeCharacteristic.ProcessingTime);
        // Build EnvironmentSettings and select the Blink planner
        EnvironmentSettings bsSettings = EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build();
        // Build the StreamTableEnvironment
        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env, bsSettings);

        // Register the Elasticsearch sink table
        tEnv.executeSql(table_sql);

        // The Elasticsearch connector is sink-only, not a source: you cannot SELECT from
        // an Elasticsearch table, so data can only be submitted with INSERT.
        // (The row values below are illustrative; the original snippet omitted them.)
        String sql = "insert into my_users (user_id, user_name, uv, pv) values ('1001', 'Jack', 1, 10)";

        // First way: execute the statement directly
        // TableResult tableResult = tEnv.executeSql(sql);

        // Second way: declare a set of operations and execute them as one job
        StatementSet stmtSet = tEnv.createStatementSet();
        stmtSet.addInsertSql(sql);
        TableResult tableResult = stmtSet.execute();
        tableResult.print();
    }
}
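A note on the output below: executeSql and StatementSet.execute submit the INSERT as an asynchronous Flink job, so the affected-row count printed for the sink is -1 ("unknown"), not a failure. If you want main() to block until the job actually finishes, the following fragment can be appended after stmtSet.execute(). This is a minimal sketch assuming the Flink 1.11 JobClient API, where getJobExecutionResult still takes a ClassLoader; from Flink 1.12 on, tableResult.await() is the simpler equivalent.

// Optional: wait for the asynchronous insert job to complete before the JVM exits.
// Sketch for Flink 1.11; from 1.12 on, tableResult.await() does the same thing.
tableResult.getJobClient().ifPresent(jobClient -> {
    try {
        jobClient.getJobExecutionResult(Thread.currentThread().getContextClassLoader()).get();
    } catch (Exception e) {
        throw new RuntimeException("Insert job failed or was interrupted", e);
    }
});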
Print the result
+--------------------------------------------+
| default_catalog.default_database.my_users  |
+--------------------------------------------+
|                                         -1 |
+--------------------------------------------+
1 row in set

Thank you for reading. The above covers how to use Flink Table API and SQL with Elasticsearch. After studying this article, you should have a deeper understanding of the topic, though concrete usage still needs to be verified in practice. The editor will continue to push more articles on related knowledge points; welcome to follow!