This article explains how Elasticsearch achieves client-side load balancing, a question that comes up often in practice. The following walks through the relevant client source code step by step; I hope you find it useful.
Client-side load balancing means the client maintains a set of server references and, for each request, selects a node according to a load-balancing algorithm before sending the request. Common algorithms include Random, Round robin, Hash, and Static Weighted. The ES client uses the Round robin algorithm (consistent hashing appears elsewhere in ES and will be covered separately). The invocation process through the client module for a Count request is:
Simplified invocation process
Client exposes the client-facing interface, such as count().
TransportClientNodesService.execute() picks one node from its node list.
The proxy sends the request to that node through TransportService.
Initialization
Let's first look at the code that creates the client, where several configuration items are set:
Settings settings = ImmutableSettings.settingsBuilder()
        .put("cluster.name", "myClusterName")
        .put("client.transport.sniff", true)
        .build();
client = new TransportClient(settings)
        .addTransportAddress(new InetSocketTransportAddress("localhost", 9300));
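For reference, a Count request issued through this client is what travels down the invocation path above. A minimal sketch, assuming the old (pre-2.x) prepareCount() API of this client generation and an illustrative index name "myIndex":

import org.elasticsearch.action.count.CountResponse;
import org.elasticsearch.index.query.QueryBuilders;

// Illustrative sketch only: "myIndex" is a placeholder and prepareCount()
// is the old count API assumed for this client generation.
CountResponse response = client.prepareCount("myIndex")
        .setQuery(QueryBuilders.matchAllQuery())
        .execute()
        .actionGet();
System.out.println("count: " + response.getCount());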
Who consumes these configuration items? The TransportClientNodesService class. It is responsible for sniffing and maintaining the list of cluster nodes, and for selecting a node for each request. Its constructor does some initialization work:
this.nodesSamplerInterval = componentSettings.getAsTime("nodes_sampler_interval", timeValueSeconds(5));
this.pingTimeout = componentSettings.getAsTime("ping_timeout", timeValueSeconds(5)).millis();
this.ignoreClusterName = componentSettings.getAsBoolean("ignore_cluster_name", false);
// ...
if (componentSettings.getAsBoolean("sniff", false)) {
    this.nodesSampler = new SniffNodesSampler();
} else {
    this.nodesSampler = new SimpleNodeSampler();
}
this.nodesSamplerFuture = threadPool.schedule(nodesSamplerInterval, ThreadPool.Names.GENERIC, new ScheduledNodeSampler());
nodes_sampler_interval: the interval at which cluster nodes are sampled (sniffed). Defaults to 5 seconds.
ping_timeout: the timeout for pinging a node. Defaults to 5 seconds.
ignore_cluster_name: whether to ignore the cluster name when validating a node.
sniff: whether cluster sniffing is enabled.
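Since componentSettings here is scoped to the transport client, these options map to client.transport.* keys in the Settings. A minimal sketch, assuming that prefix and using arbitrary example values:

import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.common.settings.Settings;

// Sketch: override the sampler-related options via their full "client.transport.*" keys.
// The values below are examples only.
Settings settings = ImmutableSettings.settingsBuilder()
        .put("cluster.name", "myClusterName")
        .put("client.transport.sniff", true)                   // use SniffNodesSampler
        .put("client.transport.nodes_sampler_interval", "10s") // re-sample every 10 seconds
        .put("client.transport.ping_timeout", "10s")           // ping timeout per node
        .put("client.transport.ignore_cluster_name", false)    // still validate the cluster name
        .build();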
In addition, TransportClientNodesService maintains two lists: the cluster node list nodes and the listed-node list listedNodes. The listed-node list holds the nodes added via TransportClient.addTransportAddress().
Sniffing cluster nodes
interface NodeSampler {
    void sample();
}
The NodeSampler interface is simple, with only a sample() method. It has two implementations, SniffNodesSampler and SimpleNodeSampler; as we saw during initialization, SniffNodesSampler is used when the "sniff" configuration item is true. Their logic is as follows.
SimpleNodeSampler
Loop over every node in the listedNodes list.
Connect to the node if there is no existing connection.
Send a "cluster/nodes/info" request to get the node's cluster name, and validate the cluster name if required.
Add the node to the nodes list.
SniffNodesSampler
Build nodesToPing, a de-duplicated union of listedNodes and nodes.
Loop over every node in nodesToPing.
Connect to the node if there is no existing connection: a full connection for nodes already in the nodes list, only a light connection for nodes that are just in listedNodes.
Send a "cluster/state" request to get the cluster state, which contains dataNodes, all the data nodes in the cluster.
Make sure connections have been established to all of those nodes.
Add them to the nodes list.
Notice that SimpleNodeSampler's final node list is still essentially listedNodes: if we added only localhost when building the client, all requests go to localhost. Only SniffNodesSampler probes every node in the cluster. In other words, SimpleNodeSampler lets specific nodes in the cluster be dedicated to accepting client requests, whereas with SniffNodesSampler every node participates in the load.
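A toy, self-contained sketch of that difference (plain Java, no ES classes; the node addresses and the "discovered" cluster are made up):

import java.util.List;

// Toy model only: shows which node list each sampler strategy ends up with.
public class SamplerSketch {

    // Nodes added via addTransportAddress(), e.g. only localhost.
    static final List<String> LISTED_NODES = List.of("localhost:9300");

    // What a cluster/state request would report as the cluster's data nodes.
    static final List<String> DISCOVERED_DATA_NODES =
            List.of("node-1:9300", "node-2:9300", "node-3:9300");

    public static void main(String[] args) {
        // SimpleNodeSampler: keeps routing requests to the listed nodes only.
        List<String> simpleNodes = LISTED_NODES;

        // SniffNodesSampler: adopts every data node reported by the cluster state.
        List<String> sniffNodes = DISCOVERED_DATA_NODES;

        System.out.println("SimpleNodeSampler nodes: " + simpleNodes); // [localhost:9300]
        System.out.println("SniffNodesSampler nodes: " + sniffNodes);  // all three data nodes
    }
}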
class ScheduledNodeSampler implements Runnable {
    @Override
    public void run() {
        try {
            nodesSampler.sample();
            if (!closed) {
                nodesSamplerFuture = threadPool.schedule(nodesSamplerInterval, ThreadPool.Names.GENERIC, this);
            }
        } catch (Exception e) {
            logger.warn("failed to sample", e);
        }
    }
}
Once the ScheduledNodeSampler task is scheduled, the sampler keeps re-sampling the cluster periodically.
Node selection
With the cluster node list in hand, the execute() method selects a node using the Round robin scheduling algorithm. The algorithm is simple and elegant to implement, and it spreads requests evenly across the nodes.
public <T> T execute(NodeCallback<T> callback) throws ElasticSearchException {
    ImmutableList<DiscoveryNode> nodes = this.nodes;
    if (nodes.isEmpty()) {
        throw new NoNodeAvailableException();
    }
    int index = randomNodeGenerator.incrementAndGet();
    if (index < 0) {
        index = 0;
        randomNodeGenerator.set(0);
    }
    for (int i = 0; i < nodes.size(); i++) {
        DiscoveryNode node = nodes.get((index + i) % nodes.size());
        try {
            return callback.doWithNode(node);
        } catch (ElasticSearchException e) {
            if (!(e.unwrapCause() instanceof ConnectTransportException)) {
                throw e;
            }
        }
    }
    throw new NoNodeAvailableException();
}
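To make the wrap-around concrete, here is a tiny standalone sketch of the same index arithmetic (the node names and counter value are invented for illustration):

import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Toy round-robin selection mirroring nodes.get((index + i) % nodes.size()).
public class RoundRobinSketch {
    public static void main(String[] args) {
        List<String> nodes = List.of("node-1", "node-2", "node-3");
        AtomicInteger counter = new AtomicInteger(6);

        int index = counter.incrementAndGet(); // 7 for this request
        for (int i = 0; i < nodes.size(); i++) {
            // 7 % 3 = 1, so node-2 is tried first; if it cannot be reached,
            // the next iterations fall through to node-3 and then node-1.
            System.out.println("try " + nodes.get((index + i) % nodes.size()));
        }
    }
}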
The key is this single line of code; everything above exists to arrive at it:
DiscoveryNode node = nodes.get((index + i) % nodes.size());
That is how Elasticsearch achieves client-side load balancing. Thank you for reading.