I. Deploy ZooKeeper
1. Resource planning
Servers | bigdata121/192.168.50.121, bigdata122/192.168.50.122, bigdata123/192.168.50.123 |
---|---|
ZooKeeper version | 3.4.10 |
System version | CentOS 7.2 |
2. Cluster deployment
(1) install zk
```
[root@bigdata121 modules]# cd /opt/modules/zookeeper-3.4.10
[root@bigdata121 zookeeper-3.4.10]# mkdir zkData
[root@bigdata121 zookeeper-3.4.10]# mv conf/zoo_sample.cfg conf/zoo.cfg
```
(2) modify zoo.cfg configuration
```
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
#dataDir=/tmp/zookeeper
# Specify the directory where zk stores data
dataDir=/opt/modules/zookeeper-3.4.10/zkData
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
# Here is the key configuration
#############cluster#############################
server.1=bigdata121:2888:3888
server.2=bigdata122:2888:3888
server.3=bigdata123:2888:3888
```
Interpretation of cluster configuration parameters:
server.A=B:C:D
A is a number that identifies the server, i.e. its sid;
B is the hostname or IP address of the server;
C is the port this server uses to exchange data with the Leader of the cluster; note that this is not the client service port (which defaults to 2181);
D is the port used for leader election: if the current Leader goes down, the servers communicate with each other over this port to elect a new Leader.
Copy the fully configured ZooKeeper directory to the other machines with scp or rsync.
(3) specify the server id
Create a "myid" file in the directory specified by the dataDir configured previously, and the contents will be written to the id of the current server, which is the only id in the zk cluster. And this id needs to be the same as that specified in the cluster in the previous configuration file, otherwise an error will be reported.
(4) configure environment variables
```
vim /etc/profile.d/zookeeper.sh

#!/bin/bash
export ZOOKEEPER_HOME=/opt/modules/zookeeper-3.4.10
export PATH=${ZOOKEEPER_HOME}/bin:$PATH
```
Then make it take effect:
```
source /etc/profile.d/zookeeper.sh
```
(5) start
On all three machines:
Start ZooKeeper:
```
zkServer.sh start
```
View the status of the zk instance on the current host:
```
zkServer.sh status
```
Sample output:
```
[root@bigdata121 conf]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/modules/zookeeper-3.4.10/bin/../conf/zoo.cfg
Mode: follower
```
II. Common commands
Use zkCli.sh to connect to the local ZooKeeper service.
The following commands are available:
command | function |
---|---|
help | Show help for all commands |
ls path [watch] | List the children of the given znode. Appending watch registers a listener for changes to the node's children. Note: a watch fires only once; to keep monitoring, re-register it after each trigger. |
ls2 path [watch] | Like ls, but also shows the node's stat data (update counts and so on), similar to ls -l in Linux |
create [-s] [-e] path data | Create a node (persistent by default). -s creates a sequential node: a sequence number is appended to the node name, useful when names may collide. -e creates an ephemeral node. |
get path [watch] | Get the value of the node. Appending watch registers a listener for changes to the node's value. |
set path data | Set the value of the node |
stat path | Show the node's status |
rmr path | Recursively delete the node and its children |
III. Use of the ZooKeeper API (Java)
1. Maven dependency
```xml
<dependencies>
    <dependency>
        <groupId>org.apache.zookeeper</groupId>
        <artifactId>zookeeper</artifactId>
        <version>3.4.10</version>
    </dependency>
</dependencies>
```
2. Create zk client
```java
import org.apache.zookeeper.*;
import org.apache.zookeeper.data.Stat;
import org.junit.Before;
import org.junit.Test;

import java.io.IOException;
import java.util.List;

public class ZkTest {

    public static String connectString = "bigdata121:2181,bigdata122:2181,bigdata123:2181";
    public static int sessionTimeout = 2000;
    public ZooKeeper zkClient = null;

    @Before
    public void init() throws IOException {
        // Create the zk client
        zkClient = new ZooKeeper(connectString, sessionTimeout, new Watcher() {
            // Callback invoked when a watched event fires. A watch is one-time,
            // so it has to be re-registered to keep listening.
            public void process(WatchedEvent watchedEvent) {
                System.out.println(watchedEvent.getState() + ","
                        + watchedEvent.getType() + "," + watchedEvent.getPath());
                try {
                    // re-register the watch on the children of "/"
                    zkClient.getChildren("/", true);
                } catch (KeeperException e) {
                    e.printStackTrace();
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        });
    }
}
```
3. Create node
```java
    @Test
    public void create() {
        // Create a node. Parameters: path, data, ACL, node type.
        // Here: open ACL, persistent node.
        try {
            String s = zkClient.create("/wangjin", "tao".getBytes(),
                    ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
        } catch (KeeperException e) {
            System.out.println("node exists!!!");
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
```
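The -s and -e options of the create shell command map to CreateMode values in the Java API. The snippet below is a minimal sketch (not part of the original), assuming the same zkClient from init() and hypothetical paths /wangjin_seq and /wangjin_tmp:
```java
    // Sketch: sequential and ephemeral nodes (hypothetical paths).
    @Test
    public void createOtherModes() throws KeeperException, InterruptedException {
        // PERSISTENT_SEQUENTIAL appends a monotonically increasing suffix to the name
        String seq = zkClient.create("/wangjin_seq", "tao".getBytes(),
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT_SEQUENTIAL);
        System.out.println(seq); // e.g. /wangjin_seq0000000003

        // EPHEMERAL nodes are removed automatically when the client session ends
        String tmp = zkClient.create("/wangjin_tmp", "tao".getBytes(),
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
        System.out.println(tmp);
    }
```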
4. Get child nodes
zkClient.getChildren(path, watch) returns a list of the node's children. Example:
```java
    @Test
    public void getChildNode() {
        try {
            List<String> children = zkClient.getChildren("/", false);
            for (String node : children) {
                System.out.println(node);
            }
        } catch (KeeperException e) {
            System.out.println("node not exists!!!");
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
```
5. Check whether a node exists
zkClient.exists(path, watch) returns the node's status (Stat); if the result is null, the node does not exist. Example:
```java
    @Test
    public void nodeExist() {
        // The node's Stat is returned; null means the node does not exist
        try {
            Stat stat = zkClient.exists("/king", false);
            System.out.println(stat == null ? "No" : "Yes");
        } catch (KeeperException e) {
            System.out.println("node not exists");
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
```
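The get, set, and rmr shell commands from section II also have Java counterparts (getData, setData, delete). A minimal sketch (not part of the original), assuming the same zkClient and the /wangjin node created earlier:
```java
    // Sketch: read, update and delete a node (assumes /wangjin exists).
    @Test
    public void getSetDelete() throws KeeperException, InterruptedException {
        // Read the value without a watch; the Stat object receives the node's metadata
        Stat stat = new Stat();
        byte[] data = zkClient.getData("/wangjin", false, stat);
        System.out.println(new String(data) + ", version=" + stat.getVersion());

        // Update the value; version -1 skips the optimistic version check
        zkClient.setData("/wangjin", "new-value".getBytes(), -1);

        // Delete the node; -1 again skips the version check
        zkClient.delete("/wangjin", -1);
    }
```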
IV. Using ZooKeeper as a distributed lock (example)
1. Maven dependencies
```xml
<dependency>
    <groupId>org.apache.curator</groupId>
    <artifactId>curator-framework</artifactId>
    <version>4.0.0</version>
</dependency>
<dependency>
    <groupId>org.apache.curator</groupId>
    <artifactId>curator-recipes</artifactId>
    <version>4.0.0</version>
</dependency>
<dependency>
    <groupId>org.apache.curator</groupId>
    <artifactId>curator-client</artifactId>
    <version>4.0.0</version>
</dependency>
<dependency>
    <groupId>com.google.guava</groupId>
    <artifactId>guava</artifactId>
    <version>16.0.1</version>
</dependency>
```
2. Requirement
To simulate a flash-sale scenario, updates to the shared inventory count must be protected by a lock.
3. Code
```java
import org.apache.curator.RetryPolicy;
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.locks.InterProcessMutex;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class TestDistributedLock {

    // Shared resource: the remaining stock
    private static int count = 10;

    // Decrease the stock
    private static void printCountNumber() {
        System.out.println("***********" + Thread.currentThread().getName() + "**********");
        System.out.println("Current value:" + count);
        count--;

        // Sleep for 500 ms
        try {
            Thread.sleep(500);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.out.println("***********" + Thread.currentThread().getName() + "**********");
    }

    public static void main(String[] args) {
        // Retry policy for the client:
        // base sleep time between retries (ms) and the maximum number of retries
        RetryPolicy policy = new ExponentialBackoffRetry(1000, 10);

        // Build a ZooKeeper client
        CuratorFramework client = CuratorFrameworkFactory.builder()
                .connectString("bigdata121:2181")
                .retryPolicy(policy)
                .build();

        // Connect the client to ZooKeeper
        client.start();

        // Creating the mutex creates a node on ZooKeeper
        final InterProcessMutex lock = new InterProcessMutex(client, "/mylock");

        // Start 10 threads that access the shared resource
        for (int i = 0; i < 10; i++) {
            new Thread(new Runnable() {
                public void run() {
                    try {
                        // Acquire the lock
                        lock.acquire();
                        // Access the shared resource
                        printCountNumber();
                    } catch (Exception ex) {
                        ex.printStackTrace();
                    } finally {
                        // Release the lock
                        try {
                            lock.release();
                        } catch (Exception e) {
                            e.printStackTrace();
                        }
                    }
                }
            }).start();
        }
    }
}
```