
What is the architecture design and operation mechanism in Spark Streaming?

2025-01-19 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)05/31 Report--

This article mainly introduces the architecture design and operation mechanism of Spark Streaming. Many people have doubts about this topic in daily work, so the editor has consulted various materials and organized them into a simple, easy-to-follow explanation. Hopefully it helps answer those doubts. Please follow along and study!

DStream is the template of RDD, and DStreamGraph is the template of the RDD DAG. Spark Streaming adds a time dimension on top of RDD. A timer is started on the Driver side, and at every BatchDuration interval it generates a Job. Another timer is started on the Executor side: every 200 ms it puts the received data into the BlockManager and reports the metadata to the ReceiverTracker on the Driver side. The whole engine keeps running continuously.
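To make the time dimension concrete, here is a minimal sketch of how a driver-side timer maps wall-clock time onto batch boundaries, the way Spark Streaming does with BatchDuration. This is plain Python for illustration; the function names (`floor_to_batch`, `batch_times`) are invented here and are not Spark's API.

```python
def floor_to_batch(time_ms: int, batch_duration_ms: int) -> int:
    """Align a timestamp down to the nearest batch boundary."""
    return time_ms - time_ms % batch_duration_ms


def batch_times(from_ms: int, until_ms: int, batch_duration_ms: int) -> list:
    """Batch times a driver-side timer would fire in the half-open
    window [from_ms, until_ms), one per BatchDuration."""
    first = floor_to_batch(from_ms, batch_duration_ms) + batch_duration_ms
    return list(range(first, until_ms, batch_duration_ms))
```

For example, with a 500 ms BatchDuration, a timer started at t=1000 ms would fire at 1500 ms, 2000 ms, and so on, generating one Job per firing.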

There is a timer object in the JobGenerator class that sends a GenerateJobs message at every BatchDuration interval to trigger Job generation.

There is a blockIntervalTimer object in the BlockGenerator class, which calls the updateCurrentBuffer method every 200 ms, hands the received data to the BlockManager for storage, and reports the metadata to the ReceiverTracker.
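The BlockGenerator idea can be sketched as a double buffer: received records accumulate in a current buffer, and every block interval the buffer is swapped out and handed off as one block. The sketch below is a pure-Python illustration under that assumption; `BlockBuffer` and its `blocks` list (standing in for BlockManager storage) are hypothetical names, not Spark's implementation.

```python
import threading


class BlockBuffer:
    """Double-buffer sketch of BlockGenerator: add() appends to the current
    buffer; update_current_buffer() swaps it out as a completed block."""

    def __init__(self):
        self._lock = threading.Lock()
        self._current = []
        self.blocks = []  # stand-in for BlockManager storage

    def add(self, record):
        with self._lock:
            self._current.append(record)

    def update_current_buffer(self):
        """Swap the current buffer out as one block, like updateCurrentBuffer;
        an empty buffer produces no block."""
        with self._lock:
            if self._current:
                self.blocks.append(self._current)
                self._current = []
```

In the real system this swap happens every 200 ms on the Executor side, and each completed block's metadata is what gets reported to the ReceiverTracker.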

The loop method in the RecurringTimer class is an endless loop that runs continuously, calling back the supplied method at a fixed interval.
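A RecurringTimer-style loop can be sketched as a background thread that invokes a callback once per period until stopped. This is a simplified illustration of the idea described above, not Spark's RecurringTimer source.

```python
import threading
import time


class RecurringTimer:
    """Sketch: call `callback` once per `period_s` seconds until stop()."""

    def __init__(self, period_s, callback):
        self._period = period_s
        self._callback = callback
        self._stopped = threading.Event()
        self._thread = threading.Thread(target=self._loop, daemon=True)

    def start(self):
        self._thread.start()

    def stop(self):
        self._stopped.set()
        self._thread.join()

    def _loop(self):
        # "Endless" loop: Event.wait doubles as the sleep and the stop check,
        # so the thread fires the callback each period until stop() is called.
        while not self._stopped.wait(self._period):
            self._callback()
```

Both the Driver-side job timer (period = BatchDuration) and the Executor-side block timer (period = 200 ms) follow this same pattern with different callbacks.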

In addition, the default parallelism is inherited: the number of partitions of the parent RDD is passed on to the child RDD. When each partition in an RDD holds only a small amount of data, you can call the coalesce method to merge the partitions down to a specified number and improve efficiency. Spark Streaming also produces empty RDDs: Job generation is triggered at fixed intervals regardless of whether the RDD contains any data, so that the whole framework keeps running normally.
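The effect of coalesce-style merging can be sketched as collapsing many small partitions into a target number of groups, which reduces per-partition scheduling overhead. The function below is an illustration of the idea on plain Python lists; it is not Spark's coalesce implementation, and the round-robin grouping is just one possible assignment.

```python
def coalesce(partitions: list, num_partitions: int) -> list:
    """Merge `partitions` into at most `num_partitions` groups
    (round-robin assignment, for illustration only)."""
    merged = [[] for _ in range(num_partitions)]
    for i, part in enumerate(partitions):
        merged[i % num_partitions].extend(part)
    return merged
```

For instance, five one-element partitions coalesced to 2 become two partitions holding three and two elements respectively, so downstream tasks each process a worthwhile amount of data.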

At this point, the study of "what is the architecture design and operation mechanism in Spark Streaming" is complete. Hopefully it has resolved your doubts. Combining theory with practice is the best way to learn, so go and try it out! If you want to keep learning more related knowledge, please continue to follow the site; the editor will keep working to bring you more practical articles!
