How to understand the distributed scheduling framework Elastic-Job

This article focuses on how to understand the distributed scheduling framework Elastic-Job; interested readers may wish to take a look. The approach introduced here is simple, fast and practical.
I. Introduction
When Quartz is deployed in a distributed cluster, only one node actually executes a given scheduled task. Quartz achieves this through database locking: only the service node that acquires the lock runs the job, which guarantees that a scheduled task is never invoked repeatedly across the cluster!
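For reference, this single-node behavior comes from Quartz's JDBC job store. Below is a minimal quartz.properties sketch of a clustered setup; the property names are Quartz's documented cluster settings, while the data source name myDS is an assumption for illustration:

# Store triggers in a shared database so every node sees the same state
org.quartz.jobStore.class=org.quartz.impl.jdbcjobstore.JobStoreTX
org.quartz.jobStore.driverDelegateClass=org.quartz.impl.jdbcjobstore.StdJDBCDelegate
org.quartz.jobStore.dataSource=myDS
# Row locks on the trigger tables let only one node fire a given trigger
org.quartz.jobStore.isClustered=true
org.quartz.jobStore.clusterCheckinInterval=20000
# Every node must have a unique, auto-generated instance id
org.quartz.scheduler.instanceId=AUTO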
If there are only a few scheduled tasks to run, Quartz is not a big problem. But consider a requirement like this: a wealth-management product where the system must calculate the earnings of every account at 6 o'clock every day. If the product has hundreds of millions of users and everything runs on a single service instance, the calculation may still not be finished by the next day!
There are many similar scenarios, and it is clear that Quartz struggles with this kind of high-volume task scheduling with long execution cycles!
Therefore, Dangdang developed Elastic-Job, a scheduled task framework built on top of Quartz and designed to make efficient use of server resources in a distributed environment.
The biggest highlight of Elastic-Job-Lite is its support for elastic scaling out and in. How is this achieved?
For example, suppose there is a task to execute. If the task is divided into 10 shards, it can run in parallel on 10 service instances at the same time, with the shards not affecting each other, which greatly improves execution efficiency and makes full use of server resources!
For the wealth-management product above, if the task needs to process 100 million users, we can scale horizontally: divide the task into 500 shards, so that 500 service instances run at the same time and each handles 200,000 users. If nothing goes wrong, the run finishes in 1-2 hours, and if that is still too long, you can continue to scale out by adding more service instances!
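To make the sharding idea concrete, here is a minimal sketch of how a job instance might select its own slice of data by shard item; the accountDao and settleDailyInterest names are hypothetical and not part of Elastic-Job:

public void execute(ShardingContext ctx) {
    int total = ctx.getShardingTotalCount(); // e.g. 500 shards in the example above
    int item = ctx.getShardingItem();        // 0..499, unique per running shard
    // Each instance only touches accounts where id % total == item, so
    // 100,000,000 accounts / 500 shards = 200,000 accounts per shard.
    List<Long> ids = accountDao.findIdsByMod(total, item); // hypothetical DAO call
    ids.forEach(this::settleDailyInterest);                // hypothetical business logic
}

Because the shard items are disjoint, no two instances process the same account, and scaling out simply spreads the same shard items across more machines.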
When Dangdang open-sourced it in 2015, it instantly attracted the attention of a large number of programmers and ranked first on Open Source China!
Let's take a look at this widely used distributed scheduling framework.
II. Introduction to the project architecture
Elastic-Job started with only one project, elastic-job-core, positioned as lightweight and decentralized; its core features are elastic scaling and data sharding!
Since version 2.x, it is mainly divided into two sub-projects: Elastic-Job-Lite and Elastic-Job-Cloud.
Among them, Elastic-Job-Lite is positioned as a lightweight, decentralized solution that provides distributed scheduling services in the form of a jar package.
Elastic-Job-Cloud, on the other hand, uses a Mesos + Docker solution to provide additional services such as resource governance, application distribution and process isolation. The only difference from Lite is in deployment: the two share the same API, so a job developed once can run on either.
Today we mainly introduce Elastic-Job-Lite. The main features are as follows:
Distributed scheduling coordination: zookeeper is used as the registry center to coordinate scheduling across the cluster.
Task sharding support: a task to be executed is split into shards so it can be scheduled in parallel.
Elastic scaling support: after a task is divided into n task items, each server executes the task items assigned to it. Once a new server joins the cluster or an existing server goes offline, elastic-job triggers re-sharding before the next run starts, without affecting the run currently in progress.
There are of course further features such as failover and re-triggering of misfired jobs; see the official documentation for more details.
Applications execute tasks on their own nodes and coordinate through the zookeeper registry center. Node registration, leader election, task sharding and monitoring are all implemented in E-Job's own code. The figure below is the architecture diagram provided on the official website.
Enough theory; let's go straight into practice, which is the easiest way to understand how it all works!
III. Application practice
3.1. Zookeeper installation
Elastic-Job-Lite depends directly on zookeeper, so before development we need to prepare a zookeeper environment. The zookeeper installation process is simple and well covered by tutorials online, so it is not repeated here!
3.2. elastic-job-lite-console installation
Elastic-job-lite-console is a visual management console for task jobs.
It can be deployed separately, independently of the application platform, and pulls its data through the registry centers and data sources configured in it.
Obtaining it is also very simple: go to https://github.com/apache/shardingsphere-elasticjob, switch to version 2.1.5, run mvn clean install to build, unpack the resulting distribution package, and start the service from the bin folder!
If your network speed is as slow as a snail, an alternative is to get the corresponding source code from https://gitee.com/elasticjob/elastic-job!
After starting the service, visit http://127.0.0.1:8899 in the browser and log in with the account and password (both root) to reach the console page, which looks similar to the following interface.
After entering, configure the zookeeper registry center described above, as well as the mysql database data source.
3.3. Create a project
This article uses a Spring Boot project as an example: create the project and add the elastic-job-lite dependencies!
<dependency>
    <groupId>com.dangdang</groupId>
    <artifactId>elastic-job-lite-core</artifactId>
    <version>2.1.5</version>
</dependency>
<dependency>
    <groupId>com.dangdang</groupId>
    <artifactId>elastic-job-lite-spring</artifactId>
    <version>2.1.5</version>
</dependency>
Configure the zookeeper registry center information in the application.properties configuration file:
# zookeeper config
zookeeper.serverList=127.0.0.1:2181
zookeeper.namespace=example-elastic-job-test
3.4. Create a new ZookeeperConfig configuration class
@Configuration
@ConditionalOnExpression("'${zookeeper.serverList}'.length() > 0")
public class ZookeeperConfig {

    /**
     * zookeeper registry center configuration
     */
    @Bean(initMethod = "init")
    public ZookeeperRegistryCenter zookeeperRegistryCenter(
            @Value("${zookeeper.serverList}") String serverList,
            @Value("${zookeeper.namespace}") String namespace) {
        return new ZookeeperRegistryCenter(new ZookeeperConfiguration(serverList, namespace));
    }
}
3.5. Create a new task processing class
Elastic-Job supports three types of job tasks:
Simple type job: the Simple type is used for general tasks and only needs to implement the SimpleJob interface. The interface provides a single method to override, which is executed on schedule, similar to the native Quartz interface.
Dataflow type job: the Dataflow type is used to process data flows and needs to implement the DataflowJob interface. The interface provides two methods to override, fetchData and processData, which fetch and process data respectively (see the sketch after this list).
Script type job: the Script type supports all script types, such as shell, python and perl. You only need to configure scriptCommandLine through the console or in code; no coding is required. The executable script path can be followed by parameters, and the framework automatically appends the job runtime information as the last parameter.
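As a reference for the Dataflow type, here is a minimal sketch against the 2.1.5 com.dangdang API; the Order type and orderDao are assumptions for illustration:

public class MyDataflowJob implements DataflowJob<Order> { // Order is a hypothetical type

    @Override
    public List<Order> fetchData(ShardingContext shardingContext) {
        // Fetch only this shard's slice of unprocessed data
        return orderDao.findUnprocessed(shardingContext.getShardingItem()); // hypothetical DAO
    }

    @Override
    public void processData(ShardingContext shardingContext, List<Order> data) {
        // Process and mark the fetched batch
        data.forEach(orderDao::markProcessed); // hypothetical DAO
    }
}

When streaming processing is enabled in the job configuration, the framework keeps calling fetchData until it returns an empty list; otherwise the fetch-process pair runs once per trigger.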
3.6. Create a new Simple type job
Write MySimpleJob, an implementation class of the SimpleJob interface; for now its job is simply to print a log.
@Slf4j
public class MySimpleJob implements SimpleJob {

    @Override
    public void execute(ShardingContext shardingContext) {
        log.info("Thread ID: {}, total shard count: {}, current shard item: {}, "
                + "current shard parameter: {}, job name: {}, job custom parameter: {}",
                Thread.currentThread().getId(),
                shardingContext.getShardingTotalCount(),
                shardingContext.getShardingItem(),
                shardingContext.getShardingParameter(),
                shardingContext.getJobName(),
                shardingContext.getJobParameter());
    }
}
Create a MyElasticJobListener task listener to listen to the execution of MySimpleJob.
@Slf4j
public class MyElasticJobListener implements ElasticJobListener {

    private long beginTime = 0;

    @Override
    public void beforeJobExecuted(ShardingContexts shardingContexts) {
        beginTime = System.currentTimeMillis();
        log.info("===> {} MyElasticJobListener BEGIN TIME: {} <===",
                shardingContexts.getJobName(), beginTime);
    }

    @Override
    public void afterJobExecuted(ShardingContexts shardingContexts) {
        long endTime = System.currentTimeMillis();
        log.info("===> {} MyElasticJobListener END TIME: {}, TOTAL COST: {} <===",
                shardingContexts.getJobName(), endTime, endTime - beginTime);
    }
}
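Finally, the job must be registered with the zookeeper registry center before it will run. Below is a minimal wiring sketch, assuming elastic-job-lite 2.1.5; the job name, cron expression, shard count and shard parameters here are made-up examples:

@Configuration
public class MySimpleJobConfig {

    @Bean(initMethod = "init")
    public SpringJobScheduler mySimpleJobScheduler(ZookeeperRegistryCenter regCenter) {
        // Hypothetical schedule: every 10 seconds, split into 3 shards
        JobCoreConfiguration coreConfig = JobCoreConfiguration
                .newBuilder("mySimpleJob", "0/10 * * * * ?", 3)
                .shardingItemParameters("0=a,1=b,2=c")
                .failover(true) // enable the failover feature mentioned earlier
                .build();
        LiteJobConfiguration jobConfig = LiteJobConfiguration
                .newBuilder(new SimpleJobConfiguration(coreConfig, MySimpleJob.class.getCanonicalName()))
                .overwrite(true) // push local config changes to zookeeper on startup
                .build();
        return new SpringJobScheduler(new MySimpleJob(), regCenter, jobConfig, new MyElasticJobListener());
    }
}

Once the application starts, the MySimpleJob and MyElasticJobListener log output should appear in the console, and the job should become visible in the elastic-job-lite-console UI configured earlier.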