2025-01-20 Update From: SLTechnology News&Howtos
Shulou (Shulou.com) 05/31 Report
This article introduces the basic knowledge points of Storm programming. The explanations are kept simple and clear; work through them step by step to pick up the fundamentals of Storm.
What is Storm? What is streaming computing?
Storm is a distributed real-time computation framework designed for stream processing. To understand stream computing, think of an electricity meter: electricity flows through the meter continuously, and the meter computes consumption as the current passes through. That running calculation over a continuous flow of data is a typical stream computation.
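The electricity-meter analogy can be sketched in a few lines of plain Java (no Storm required): each reading is an event that updates a running total as it arrives, rather than being stored and batch-processed later.

```java
import java.util.List;

public class MeterStream {
    // The essence of stream computing: process each event as it arrives,
    // keeping only a small piece of running state (the total), never the
    // full history of readings.
    public static double runningTotal(List<Double> readings) {
        double total = 0.0;
        for (double kwh : readings) {
            total += kwh; // each "event" updates the state immediately
        }
        return total;
    }

    public static void main(String[] args) {
        // Three meter readings arriving over time.
        System.out.println(runningTotal(List.of(1.5, 2.0, 0.5))); // prints 4.0
    }
}
```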
Below are several Storm concepts you will need while programming:
Topology
A Topology, analogous to a MapReduce Job in Hadoop, is the object that arranges and contains a group of computation components (Spouts and Bolts), much as a Hadoop MapReduce Job contains a set of Map and Reduce tasks. These components are arranged as a DAG (with Stream Groupings controlling how the data stream is distributed along the edges), combining into a larger unit of computation logic: the Topology. Once running, a Topology does not stop on its own; it runs indefinitely unless terminated by manual intervention (explicitly running bin/storm kill) or by an unexpected failure, such as the entire Storm cluster going down.
Spout
A Spout is the message source of a Topology, a component that continuously produces messages. For example, it can be a socket server that listens for external client connections and receives messages, a consumer of a message queue (MQ), or a service that receives messages sent by a Flume Agent's sink. In Storm, the messages a Spout produces are abstracted as Tuples; these Tuples travel between the computation components of the whole Topology, forming the stream.
Bolt
Message-processing logic in Storm is encapsulated in Bolts, and arbitrary processing logic can be executed inside a Bolt. The processing itself is no different from an ordinary application; what Storm adds is that you must declare, connect, and group the message streams between components according to Storm's computation semantics. A Bolt can receive Tuples from one or more Spouts, from one or more other Bolts, or from any combination of Spouts and Bolts.
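The Spout-to-Bolt dataflow can be illustrated with a toy model in plain Java. This is not the real Storm API (which requires the storm-core library); the `Spout` and `Bolt` interfaces here are hypothetical stand-ins showing the shape of the pipeline: a source emits tuples, and a processing component transforms each one.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Toy model of the Spout -> Bolt dataflow (NOT the Storm API).
public class MiniPipeline {
    interface Spout { String nextTuple(); }       // returns null when exhausted
    interface Bolt  { String execute(String t); } // transforms one tuple

    // Drain the spout into a stream, then run every tuple through the bolt.
    public static List<String> run(Spout spout, Bolt bolt) {
        Queue<String> stream = new ArrayDeque<>();
        List<String> out = new ArrayList<>();
        String t;
        while ((t = spout.nextTuple()) != null) stream.add(t);      // spout side
        while ((t = stream.poll()) != null) out.add(bolt.execute(t)); // bolt side
        return out;
    }

    public static void main(String[] args) {
        // A spout producing three words; a bolt upper-casing each tuple.
        var words = new ArrayDeque<>(List.of("storm", "spout", "bolt"));
        List<String> result = run(words::poll, String::toUpperCase);
        System.out.println(result); // [STORM, SPOUT, BOLT]
    }
}
```

In real Storm code the equivalents are `IRichSpout`/`IRichBolt` implementations wired together by a `TopologyBuilder`, and tuples flow continuously rather than through a single in-memory queue.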
Stream Grouping
A Stream Grouping defines how streams are connected, grouped, and distributed between the computation components (Spouts and Bolts). Storm defines seven distribution strategies: Shuffle Grouping (random), Fields Grouping (by field), All Grouping (broadcast), Global Grouping (global), None Grouping (no grouping), Direct Grouping (direct), and Local or Shuffle Grouping (local/random). The exact semantics of each strategy are described in the official Storm documentation.
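The key property of Fields Grouping can be sketched without Storm at all: tuples are routed to bolt tasks by hashing the grouping field, so every tuple with the same field value lands on the same task. The function below is an illustrative simplification, not Storm's internal implementation.

```java
public class FieldsGroupingDemo {
    // Illustrative only: pick a bolt task for a tuple by hashing its
    // grouping field. Identical field values always map to the same
    // task -- the guarantee Fields Grouping provides.
    public static int taskFor(String fieldValue, int numTasks) {
        // floorMod keeps the result in [0, numTasks) even for negative hashes.
        return Math.floorMod(fieldValue.hashCode(), numTasks);
    }

    public static void main(String[] args) {
        int tasks = 4;
        // The same word is routed to the same task, every time.
        System.out.println(taskFor("storm", tasks) == taskFor("storm", tasks)); // true
    }
}
```

Shuffle Grouping, by contrast, would pick a task at random for each tuple, balancing load but giving up this per-key locality.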
Sample code storm-demo
storm-demo is a sample project containing a complete Storm topology, with detailed comments.
For more information on source code, please see https://git.oschina.net/HuQingmiao/storm-demo.git.
How to run a Storm program
# Local mode: during local development you do not need to deploy Storm; just run the topology in Eclipse or IntelliJ IDEA for easy debugging. It can also be run from the command line: java -jar <your-application>.jar (with the main entry class specified in the jar's manifest)
# Production mode: first package your application as a jar, but do not bundle storm or the related logging packages into it. Set their Maven scope to provided:

<dependency>
    <groupId>org.apache.storm</groupId>
    <artifactId>storm-core</artifactId>
    <version>0.9.5</version>
    <scope>provided</scope>
</dependency>
Then upload the application jar to a Storm node (the Nimbus node) and execute on that node:
storm jar <your-application>.jar <main-entry-class> <topologyId>
or: jstorm jar <your-application>.jar <main-entry-class> <topologyId>
To stop the topology in production mode, execute:
storm kill <topologyId>
or: jstorm kill <topologyId>