ZooKeeper learning - basic usage and cluster setup
This chapter covers ZooKeeper environment setup and basic API usage. Parts of the summary are taken from ZooKeeper's official website. The ZooKeeper version used here is 3.4.9. For convenience in building the cluster, I installed VMware on my local machine and created three virtual machines.
Basic introduction
ZooKeeper: A Distributed Coordination Service for Distributed Applications
ZooKeeper is a distributed, open-source coordination service for distributed applications. It exposes a simple set of primitives that distributed applications can build upon to implement higher level services for synchronization, configuration maintenance, and groups and naming. It is designed to be easy to program to, and uses a data model styled after the familiar directory tree structure of file systems. It runs in Java and has bindings for both Java and C.
Coordination services are notoriously hard to get right. They are especially prone to errors such as race conditions and deadlock. The motivation behind ZooKeeper is to relieve distributed applications the responsibility of implementing coordination services from scratch.
Here are some of the guarantees ZooKeeper provides:
Guarantees
ZooKeeper is very fast and very simple. Since its goal, though, is to be a basis for the construction of more complicated services, such as synchronization, it provides a set of guarantees. These are:
- Sequential Consistency - Updates from a client will be applied in the order that they were sent.
- Atomicity - Updates either succeed or fail. No partial results.
- Single System Image - A client will see the same view of the service regardless of the server that it connects to.
- Reliability - Once an update has been applied, it will persist from that time forward until a client overwrites the update.
- Timeliness - The clients view of the system is guaranteed to be up-to-date within a certain time bound.
Server download
Download ZooKeeper 3.4.9 from the ZooKeeper official website and extract it into a directory on each virtual machine.
Starting the server and basic operations
# Start zkServer
sh zkServer.sh start
# View zkServer status
sh zkServer.sh status
# Mode: standalone indicates standalone (non-cluster) startup
# To connect to zkServer, use the zkCli tool shipped with ZooKeeper.
# The -server ip:port argument can be omitted here.
sh zkCli.sh -server 127.0.0.1:2181
# Once connected, execute ls / to list the paths under the root node
[zk: localhost:2181(CONNECTED) 0] ls /
[zookeeper]
# View the stat information of the /zookeeper node by executing get /zookeeper
[zk: localhost:2181(CONNECTED) 2] get /zookeeper
cZxid = 0x0
ctime = Thu Jan 01 08:00:00 CST 1970
mZxid = 0x0
mtime = Thu Jan 01 08:00:00 CST 1970
pZxid = 0x0
cversion = -1
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 0
numChildren = 1
For other CRUD operations, refer to the following summary from the official website:
Next, create a new znode by running create /zk_test my_data. This creates a new znode and associates the string "my_data" with the node. You should see:
[zkshell: 9] create /zk_test my_data
Created /zk_test
Issue another ls / command to see what the directory looks like:
[zkshell: 11] ls /
[zookeeper, zk_test]
Notice that the zk_test directory has now been created.
Next, verify that the data was associated with the znode by running the get command, as in:
[zkshell: 12] get /zk_test
my_data
cZxid = 5
ctime = Fri Jun 05 13:57:06 PDT 2009
mZxid = 5
mtime = Fri Jun 05 13:57:06 PDT 2009
pZxid = 5
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0
dataLength = 7
numChildren = 0
We can change the data associated with zk_test by issuing the set command, as in:
[zkshell: 14] set /zk_test junk
cZxid = 5
ctime = Fri Jun 05 13:57:06 PDT 2009
mZxid = 6
mtime = Fri Jun 05 14:01:52 PDT 2009
pZxid = 5
cversion = 0
dataVersion = 1
aclVersion = 0
ephemeralOwner = 0
dataLength = 4
numChildren = 0
[zkshell: 15] get /zk_test
junk
cZxid = 5
ctime = Fri Jun 05 13:57:06 PDT 2009
mZxid = 6
mtime = Fri Jun 05 14:01:52 PDT 2009
pZxid = 5
cversion = 0
dataVersion = 1
aclVersion = 0
ephemeralOwner = 0
dataLength = 4
numChildren = 0
(Notice we did a get after setting the data and it did, indeed, change.)
Finally, let's delete the node by issuing:
[zkshell: 16] delete /zk_test
[zkshell: 17] ls /
[zookeeper]
[zkshell: 18]
Client connection
Building the client connection
import java.io.IOException;

import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

/**
 * @author: to_be_continued
 * @Date: 2020/6/22 10:36
 */
public class TestZk {

    private static ZooKeeper zooKeeper;

    public static void main(String[] args) throws IOException {
        // Initialize the ZooKeeper client; TestZkWatch is used to test the watch mechanism
        zooKeeper = new ZooKeeper("192.168.80.128:2181", 5000, new TestZkWatch());
        // Block so the process stays alive and watch events can arrive
        System.in.read();
    }
}

/**
 * ZK watcher for testing
 * @author to_be_continued
 */
class TestZkWatch implements Watcher {

    @Override
    public void process(WatchedEvent watchedEvent) {
        System.out.println("testZkWatch watchedEvent: " + watchedEvent);
    }
}
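Note that the ZooKeeper constructor returns immediately, before the session is actually established. A minimal sketch, assuming the same address and timeout as above, of the common pattern of blocking on a CountDownLatch until the watcher receives SyncConnected (the class name here is illustrative):

import java.io.IOException;
import java.util.concurrent.CountDownLatch;

import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class TestZkConnectWait {

    public static void main(String[] args) throws IOException, InterruptedException {
        final CountDownLatch connected = new CountDownLatch(1);
        ZooKeeper zk = new ZooKeeper("192.168.80.128:2181", 5000, new Watcher() {
            @Override
            public void process(WatchedEvent event) {
                // The session event carries no path; count down once the connection is up
                if (event.getState() == Event.KeeperState.SyncConnected) {
                    connected.countDown();
                }
            }
        });
        // Block until the SyncConnected event arrives
        connected.await();
        System.out.println("session established, state: " + zk.getState());
        zk.close();
    }
}

Without such a wait, operations issued immediately after the constructor may fail with ConnectionLossException if the session is not yet established.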
With the client connection initialized as shown above, the most common CRUD operations follow.
Create node
/**
 * Create a node
 * @param path
 */
public static String create(String path, byte[] data) throws KeeperException, InterruptedException {
    // Create a node, specifying the ACL and the node CreateMode
    return zooKeeper.create(path, data, ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
}

public static void main(String[] args) throws IOException, KeeperException, InterruptedException {
    // Initialize the ZooKeeper client
    zooKeeper = new ZooKeeper("192.168.80.128:2181", 5000, new TestZkWatch());
    // Create a /testZk node with data "testCreateData"; after creation you can
    // see the process method of TestZkWatch being executed
    create("/testZk", "testCreateData".getBytes());
    System.in.read();
}
Query node
public static void main(String[] args) throws IOException, KeeperException, InterruptedException {
    // Initialize the ZooKeeper client
    zooKeeper = new ZooKeeper("192.168.80.128:2181", 5000, new TestZkWatch());
    // Create a /testZk node with data "testCreateData"
    // create("/testZk", "testCreateData".getBytes());
    // Query the /testZk node information
    Stat stat = new Stat();
    System.out.println(new String(getData("/testZk", stat)));
    System.out.println(stat);
    System.in.read();
}

/**
 * Query node information
 * @param path
 * @param stat
 * @return
 * @throws KeeperException
 * @throws InterruptedException
 */
public static byte[] getData(String path, Stat stat) throws KeeperException, InterruptedException {
    // Passing true for watch registers the default watcher (TestZkWatch)
    return zooKeeper.getData(path, true, stat);
}
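One caveat: a watch registered this way is one-shot; after it fires, it must be re-registered to see further changes. A minimal sketch of a self re-registering watcher (the class name is illustrative):

class ReWatchingWatcher implements Watcher {

    private final ZooKeeper zooKeeper;

    ReWatchingWatcher(ZooKeeper zooKeeper) {
        this.zooKeeper = zooKeeper;
    }

    @Override
    public void process(WatchedEvent event) {
        if (event.getType() == Event.EventType.NodeDataChanged) {
            try {
                // Passing this as the watcher re-arms the watch for the next change
                byte[] data = zooKeeper.getData(event.getPath(), this, null);
                System.out.println("node changed: " + new String(data));
            } catch (KeeperException | InterruptedException e) {
                e.printStackTrace();
            }
        }
    }
}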
Modify node
public static void main(String[] args) throws IOException, KeeperException, InterruptedException {
    // Initialize the ZooKeeper client
    zooKeeper = new ZooKeeper("192.168.80.128:2181", 5000, new TestZkWatch());
    // Create a /testZk node with data "testCreateData"
    // create("/testZk", "testCreateData".getBytes());
    // Query the /testZk node information
    Stat stat = new Stat();
    System.out.println(new String(getData("/testZk", stat)));
    System.out.println(stat);
    // Update the /testZk node information
    System.out.println(update("/testZk", "updateData".getBytes(), stat.getVersion()));
    System.out.println(new String(getData("/testZk", stat)));
    System.out.println(stat);
    System.in.read();
}

/**
 * Update the data of the specified node
 * @param path
 * @param data
 * @param version version number, an optimistic locking mechanism
 * @return stat
 * @throws KeeperException
 * @throws InterruptedException
 */
public static Stat update(String path, byte[] data, int version) throws KeeperException, InterruptedException {
    return zooKeeper.setData(path, data, version);
}
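Since setData carries the expected version, a concurrent writer makes the call fail with KeeperException.BadVersionException. A minimal sketch, assuming the same class as above, of the usual re-read-and-retry handling (the method name is illustrative):

public static Stat updateWithRetry(String path, byte[] data) throws KeeperException, InterruptedException {
    while (true) {
        // Re-read the node to get its current version
        Stat stat = new Stat();
        zooKeeper.getData(path, false, stat);
        try {
            return zooKeeper.setData(path, data, stat.getVersion());
        } catch (KeeperException.BadVersionException e) {
            // Another client updated the node in between; loop and retry
        }
    }
}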
Delete node
public static void main(String[] args) throws IOException, KeeperException, InterruptedException {
    // Initialize the ZooKeeper client
    zooKeeper = new ZooKeeper("192.168.80.128:2181", 5000, new TestZkWatch());
    // Create a /testZk node with data "testCreateData"
    // create("/testZk", "testCreateData".getBytes());
    // Query the /testZk node information
    Stat stat = new Stat();
    System.out.println(new String(getData("/testZk", stat)));
    System.out.println(stat);
    delete("/testZk", stat.getVersion());
    // Update the /testZk node information
    // System.out.println(update("/testZk", "updateData".getBytes(), stat.getVersion()));
    // System.out.println(new String(getData("/testZk", stat)));
    // System.out.println(stat);
    System.in.read();
}

/**
 * Delete the specified node
 * @param path
 * @param version
 * @throws KeeperException
 * @throws InterruptedException
 */
public static void delete(String path, int version) throws KeeperException, InterruptedException {
    zooKeeper.delete(path, version);
}
ACL and CreateMode
You can see that when creating a node, an ACL and a CreateMode are specified.
ACL: ZooKeeper supports setting access control (ACLs) on znodes.
ACL Permissions
ZooKeeper supports the following permissions:
- CREATE: you can create a child node
- READ: you can get data from a node and list its children.
- WRITE: you can set data for a node
- DELETE: you can delete a child node
- ADMIN: you can set permissions
CreateMode: the node types supported by ZooKeeper.
- EPHEMERAL: ephemeral node; the node is deleted when the session disconnects.
- EPHEMERAL_SEQUENTIAL: ephemeral sequential node; the znode is deleted when the session disconnects, and its name is appended with a monotonically increasing number.
- PERSISTENT: persistent node; the node is not deleted when the session disconnects.
- PERSISTENT_SEQUENTIAL: persistent sequential node; the znode is not deleted when the session disconnects, and its name is appended with a monotonically increasing number.
- TTL (added in 3.6.0): a node type available from 3.6 onward that supports setting an expiration time; it can be understood as an attribute of a node. A TTL can only be set on PERSISTENT and PERSISTENT_SEQUENTIAL nodes.
TTL Nodes
Added in 3.6.0
When creating PERSISTENT or PERSISTENT_SEQUENTIAL znodes, you can optionally set a TTL in milliseconds for the znode. If the znode is not modified within the TTL and has no children it will become a candidate to be deleted by the server at some point in the future.
Note: TTL Nodes must be enabled via System property as they are disabled by default. See the Administrator's Guide for details. If you attempt to create TTL Nodes without the proper System property set the server will throw KeeperException.UnimplementedException.
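For completeness, a hedged sketch of creating a TTL node with the native client. This assumes a 3.6+ client and a server started with zookeeper.extendedTypesEnabled=true, so it does not apply to the 3.4.9 setup used in this article; the path and TTL value are illustrative:

// Create a persistent node that the server may delete once it has gone
// 10 seconds without modification and has no children (3.6+ only)
String ttlPath = zooKeeper.create("/ttlNode", "data".getBytes(),
        ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT_WITH_TTL,
        new Stat(), 10000L);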
CONTAINER (added in 3.6.0): container nodes are a special node type added in 3.6. When all of a container node's children have been deleted, the container node itself becomes a candidate for deletion.
Container Nodes
Added in 3.6.0
ZooKeeper has the notion of container znodes. Container znodes are special purpose znodes useful for recipes such as leader, lock, etc. When the last child of a container is deleted, the container becomes a candidate to be deleted by the server at some point in the future.
Given this property, you should be prepared to get KeeperException.NoNodeException when creating children inside of container znodes. i.e. when creating child znodes inside of container znodes always check for KeeperException.NoNodeException and recreate the container znode when it occurs.
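Among these modes, EPHEMERAL_SEQUENTIAL is the usual building block for recipes such as distributed locks and leader election. A minimal sketch using the zooKeeper client from the earlier examples (the /lock- prefix is illustrative):

// Each client creates an ephemeral sequential node; the numeric suffix gives
// a global ordering, and the node disappears if the client's session ends
String node = zooKeeper.create("/lock-", new byte[0],
        ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL_SEQUENTIAL);
System.out.println("created: " + node); // e.g. /lock-0000000003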
Using the Curator client
What is Curator?
Curator n ˈkyoor͝ˌātər: a keeper or custodian of a museum or other collection - A ZooKeeper Keeper.
Apache Curator is a Java/JVM client library for Apache ZooKeeper, a distributed coordination service. It includes a highlevel API framework and utilities to make using Apache ZooKeeper much easier and more reliable. It also includes recipes for common use cases and extensions such as service discovery and a Java 8 asynchronous DSL.
create and getData
/**
 * Using the Curator API avoids many inconveniences of the native API, such as
 * multi-level node creation and the large number of declared checked exceptions.
 * @author: tu
 * @Date: 2020/6/22 11:22
 */
public class TestCurator {

    public static void main(String[] args) throws Exception {
        CuratorFramework curatorFramework = CuratorFrameworkFactory.newClient(
                "192.168.80.128:2181", new ExponentialBackoffRetry(1000, 3));
        curatorFramework.start();
        String path = "/testCurator";
        System.out.println(curatorFramework.create().forPath(path, "testCurator".getBytes()));
        // getData returns byte[], so wrap it in a String for readable output
        System.out.println(new String(curatorFramework.getData().forPath(path)));
        System.in.read();
    }
}
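One of the conveniences mentioned in the comment above is multi-level node creation, which the native API cannot do in a single call. A small sketch (the path is illustrative):

// Curator creates the missing parent nodes /a and /a/b automatically
curatorFramework.create()
        .creatingParentsIfNeeded()
        .forPath("/a/b/c", "data".getBytes());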
Distributed lock
You can use the Curator API (the InterProcessMutex recipe from curator-recipes) to implement distributed locks:
InterProcessMutex lock = new InterProcessMutex(client, lockPath);
if (lock.acquire(maxWait, waitUnit)) {
    try {
        // do some work inside of the critical section here
    } finally {
        lock.release();
    }
}
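A concrete, hedged version of the snippet above, wired to the curatorFramework client from the previous section (the lock path and the 10-second timeout are illustrative):

InterProcessMutex lock = new InterProcessMutex(curatorFramework, "/locks/testLock");
// Wait up to 10 seconds to acquire the lock
if (lock.acquire(10, TimeUnit.SECONDS)) {
    try {
        System.out.println("got the lock, doing work in the critical section");
    } finally {
        // Release in finally so the lock is freed even if the work fails
        lock.release();
    }
} else {
    System.out.println("timed out waiting for the lock");
}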
Leader Election
You can use the Curator API (the LeaderSelector recipe) to implement leader election:
LeaderSelectorListener listener = new LeaderSelectorListenerAdapter() {
    public void takeLeadership(CuratorFramework client) throws Exception {
        // this callback will get called when you are the leader
        // do whatever leader work you need to and only exit
        // this method when you want to relinquish leadership
    }
};

LeaderSelector selector = new LeaderSelector(client, path, listener);
selector.autoRequeue(); // not required, but this is behavior that you will probably expect
selector.start();
Cluster deployment
Deployment introduction
For cluster testing, it is recommended to deploy on separate machines; if you deploy multiple servers on the same machine, configuration such as dataDir and the ports must be different for each server. See the official website:
If you want to test multiple servers on a single machine, specify the servername as localhost with unique quorum & leader election ports (i.e. 2888:3888, 2889:3889, 2890:3890 in the example above) for each server.X in that server's config file. Of course separate _dataDir_s and distinct _clientPort_s are also necessary (in the above replicated example, running on a single localhost, you would still have three config files).
Please be aware that setting up multiple servers on a single machine will not create any redundancy. If something were to happen which caused the machine to die, all of the zookeeper servers would be offline. Full redundancy requires that each server have its own machine. It must be a completely separate physical server. Multiple virtual machines on the same physical host are still vulnerable to the complete failure of that host.
For the cluster deployment strategy, the official recommendation is at least 3 servers, and an odd number of servers is strongly recommended:
For replicated mode, a minimum of three servers are required, and it is strongly recommended that you have an odd number of servers. If you only have two servers, then you are in a situation where if one of them fails, there are not enough machines to form a majority quorum. Two servers are inherently less stable than a single server, because there are two single points of failure.
Configuration modification
In cluster mode, the conf/zoo.cfg file needs to be modified to add the information of the other servers in the cluster.
tickTime=2000
dataDir=/var/lib/zookeeper
clientPort=2181
initLimit=5
syncLimit=2
# Configure cluster information
server.1=192.168.80.128:2888:3888
server.2=192.168.80.129:2888:3888
server.3=192.168.80.130:2888:3888
server. is a fixed configuration prefix; the number after the dot (.1 here) is that ZooKeeper server's id. Each server's dataDir must contain a file named myid whose content is the id value (for example, echo 1 > /var/lib/zookeeper/myid on the first machine). After the equals sign come ip:quorum-communication-port:leader-election-port. For other configuration options, see: Configuration Parameters
Once all three machines are configured, simply start each server and the cluster deployment is complete. Running sh zkServer.sh status shows each server's mode: Follower, Leader, or Observer.