

I. Deployment and use of zookeeper

2025-01-17 Update From: SLTechnology News&Howtos


Shulou (Shulou.com) 06/03 Report

1. Resource planning

Servers: bigdata121/192.168.50.121, bigdata122/192.168.50.122, bigdata123/192.168.50.123
ZooKeeper version: 3.4.10
System version: CentOS 7.2

2. Cluster deployment

(1) install zk

[root@bigdata121 modules]# cd /opt/modules/zookeeper-3.4.10
[root@bigdata121 zookeeper-3.4.10]# mkdir zkData
[root@bigdata121 zookeeper-3.4.10]# mv conf/zoo_sample.cfg conf/zoo.cfg

(2) modify zoo.cfg configuration

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
#dataDir=/tmp/zookeeper
# specifies the directory where zk stores data
dataDir=/opt/modules/zookeeper-3.4.10/zkData
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable autopurge feature
#autopurge.purgeInterval=1
# here is the key configuration: the cluster
server.1=bigdata121:2888:3888
server.2=bigdata122:2888:3888
server.3=bigdata123:2888:3888

Interpretation of cluster configuration parameters:

server.A=B:C:D

A is a number identifying this server within the cluster, i.e. its sid.

B is the hostname or IP address of this server.

C is the port this server uses to exchange information with the cluster's Leader; it is not the client-facing port (the client port is 2181 by default).

D is the port the servers use to communicate with each other during an election: if the cluster's Leader dies, a new Leader is elected over this port.
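Putting those pieces together, here is how one entry from the configuration above decomposes (annotated copy for illustration only; the live zoo.cfg needs just the first line):

```
server.2=bigdata122:2888:3888
# A = 2          -> sid of this server (must match its myid file)
# B = bigdata122 -> hostname/IP of the server
# C = 2888       -> port for exchanging information with the Leader
# D = 3888       -> port used for Leader election
```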

Copy the entire configured program directory to the other machines, using either scp or rsync; either works.
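As a sketch of that copy step, the loop below only prints the scp commands (a dry run, so it is safe to try anywhere); the hostnames and paths are the ones assumed in this article. Drop the echo to actually run the copies from bigdata121:

```shell
# Dry run: print the scp commands that would replicate the
# configured directory from bigdata121 to the other two nodes.
for host in bigdata122 bigdata123; do
  echo scp -r /opt/modules/zookeeper-3.4.10 root@"${host}":/opt/modules/
done
```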

(3) specify the server id

In the directory specified by dataDir above, create a file named "myid" whose content is the id of the current server — its unique identity in the zk cluster. This id must match the number given for this host in the server.A entries of the configuration file, otherwise an error will be reported.
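A minimal sketch of this step, written against /tmp/zkData so it can be tried anywhere; on the real cluster the directory is the configured dataDir (/opt/modules/zookeeper-3.4.10/zkData) and each host writes its own id:

```shell
# Create the myid file holding this server's id
# (1 on bigdata121; write 2 on bigdata122 and 3 on bigdata123).
ZKDATA=/tmp/zkData
mkdir -p "$ZKDATA"
echo 1 > "$ZKDATA/myid"
cat "$ZKDATA/myid"
```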

(4) configure environment variables

vim /etc/profile.d/zookeeper.sh

export ZOOKEEPER_HOME=/opt/modules/zookeeper-3.4.10
export PATH=${ZOOKEEPER_HOME}/bin:$PATH

And then:

source /etc/profile.d/zookeeper.sh
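The same step can be exercised without the real install by writing the snippet to /tmp and sourcing it (the ZOOKEEPER_HOME path is the one assumed in this article):

```shell
# Write the profile snippet, then source it and confirm the variables.
cat > /tmp/zookeeper.sh <<'EOF'
export ZOOKEEPER_HOME=/opt/modules/zookeeper-3.4.10
export PATH=${ZOOKEEPER_HOME}/bin:$PATH
EOF
source /tmp/zookeeper.sh
echo "$ZOOKEEPER_HOME"
# prints /opt/modules/zookeeper-3.4.10
```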

(5) start

Execute on three machines

Startup: zkServer.sh start

View the status of zk on the current host: zkServer.sh status

[root@bigdata121 conf]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/modules/zookeeper-3.4.10/bin/../conf/zoo.cfg
Mode: follower

II. Common commands

Use zkCli.sh to connect to the local zk service.

You can use the following command:

Command / function:

help — displays help for all commands.

ls path [watch] — lists what the current znode contains; the optional watch listens for changes to the child nodes under that node. Note that a watch expires after it is triggered once; if you need continuous listening, you must re-register it after each trigger.

ls2 path [watch] — like ls, but also shows the node's data and metadata such as the number of updates, similar to ls -l in Linux.

create path data — normal creation (permanent node); -s appends a monotonically increasing sequence number to the node name, often used to avoid node-name conflicts; -e creates a temporary (ephemeral) node.

get path [watch] — gets the value of the node; the optional watch listens for changes to the node's value.

set path value — sets the value of the node.

stat path — views the node's status.

rmr path — recursively deletes the node.

III. zk API usage (Java)

1. Maven dependency

<dependency>
    <groupId>org.apache.zookeeper</groupId>
    <artifactId>zookeeper</artifactId>
    <version>3.4.10</version>
</dependency>

2. Create a zk client

import org.apache.zookeeper.*;
import org.apache.zookeeper.data.Stat;
import org.junit.Before;
import org.junit.Test;
import java.io.IOException;
import java.util.List;

public class ZkTest {
    public static String connectString = "bigdata121:2181,bigdata122:2181,bigdata123:2181";
    public static int sessionTimeout = 2000;
    public ZooKeeper zkClient = null;

    @Before
    public void init() throws IOException {
        // create zk client
        zkClient = new ZooKeeper(connectString, sessionTimeout, new Watcher() {
            // handler called when a watched event fires; watches are one-time
            public void process(WatchedEvent watchedEvent) {
                System.out.println(watchedEvent.getState() + "," + watchedEvent.getType() + "," + watchedEvent.getPath());
                try {
                    // re-register the watch on the root's children
                    zkClient.getChildren("/", true);
                } catch (KeeperException e) {
                    e.printStackTrace();
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        });
    }
}

3. Create a node

@Test
public void create() {
    // create a node; the arguments are: node path, node value, ACL, node type
    // i.e. create /wangjin with value "tao", open ACL, persistent node
    try {
        String s = zkClient.create("/wangjin", "tao".getBytes(), ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
    } catch (KeeperException e) {
        System.out.println("node create failed!");
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
}

4. Get the child node

zkClient.getChildren(path, watch) returns a list of child nodes. For example:

@Test
public void getChildNode() {
    try {
        List<String> children = zkClient.getChildren("/", false);
        for (String node : children) {
            System.out.println(node);
        }
    } catch (KeeperException e) {
        e.printStackTrace();
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
}

5. Judge whether the node exists or not

zkClient.exists(path, watch) returns the status information of the node; if it is null, the node does not exist. Example:

@Test
public void nodeExist() {
    // returns the node's status; null means the node does not exist
    try {
        Stat stat = zkClient.exists("/king", false);
        System.out.println(stat == null ? "No" : "Yes");
    } catch (KeeperException e) {
        e.printStackTrace();
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
}

IV. Use zk as a distributed lock instance

1. Maven dependency

<dependency>
    <groupId>org.apache.curator</groupId>
    <artifactId>curator-framework</artifactId>
    <version>4.0.0</version>
</dependency>
<dependency>
    <groupId>org.apache.curator</groupId>
    <artifactId>curator-recipes</artifactId>
    <version>4.0.0</version>
</dependency>
<dependency>
    <groupId>org.apache.curator</groupId>
    <artifactId>curator-client</artifactId>
    <version>4.0.0</version>
</dependency>
<dependency>
    <groupId>com.google.guava</groupId>
    <artifactId>guava</artifactId>
    <version>16.0.1</version>
</dependency>

2. Requirement

Simulate a flash-sale ("seckill") scenario of panic buying: the goods count is a shared resource and must be protected by a lock.

3. Code

import org.apache.curator.RetryPolicy;
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.locks.InterProcessMutex;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class TestDistributedLock {
    // define the shared resource
    private static int count = 10;

    // used to decrement the goods count
    private static void printCountNumber() {
        System.out.println("*" + Thread.currentThread().getName() + "*");
        System.out.println("current value: " + count);
        count--;
        // sleep for 2 seconds
        try {
            Thread.sleep(2000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.out.println("*" + Thread.currentThread().getName() + "*");
    }

    public static void main(String[] args) {
        // define the policy for client retries
        RetryPolicy policy = new ExponentialBackoffRetry(1000,  // waiting time (ms)
                                                         10);  // maximum number of retries
        // define a ZK client
        CuratorFramework client = CuratorFrameworkFactory.builder()
                .connectString("bigdata121:2181")
                .retryPolicy(policy)
                .build();
        // connect the client object to zk
        client.start();
        // creating the mutex creates a node /mylock on zk
        final InterProcessMutex lock = new InterProcessMutex(client, "/mylock");
        // start 10 threads to access the shared resource
        for (int i = 0; i < 10; i++) {
            new Thread(new Runnable() {
                public void run() {
                    try {
                        // request the lock
                        lock.acquire();
                        // access the shared resource
                        printCountNumber();
                    } catch (Exception ex) {
                        ex.printStackTrace();
                    } finally {
                        // release the lock
                        try {
                            lock.release();
                        } catch (Exception e) {
                            e.printStackTrace();
                        }
                    }
                }
            }).start();
        }
    }
}
