Raft: Understandable Distributed Consensus

Keywords: Big Data, Database, Network


So What is Distributed Consensus?
Let's start with an example…


Let's say we have a single node system.
For this example, you can think of our node as a database server that stores a single value.
We also have a client that can send a value to the server.
Coming to agreement, or consensus, on that value is easy with one node.
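The single-node case above can be sketched in a few lines of Python. This is a toy illustration (the class and method names are my own, not part of Raft): with one node there is nobody to disagree with, so storing the client's value *is* consensus.

```python
# Toy single-node "database": with one node, agreement is immediate.
class SingleNode:
    def __init__(self):
        self.value = None

    def set_value(self, value):
        # One node, so the stored value is trivially the agreed value.
        self.value = value
        return self.value

node = SingleNode()
node.set_value(5)
```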


But how do we come to consensus if we have multiple nodes?
That's the problem of distributed consensus.
Raft is a protocol for implementing distributed consensus.
Let's look at a high level overview of how it works.
A node can be in 1 of 3 states:
the Follower state, the Candidate state, or the Leader state.
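The three states can be written as a simple enumeration. This is just an illustrative sketch in Python, not Raft's actual implementation:

```python
from enum import Enum

# The three Raft node states described above.
class State(Enum):
    FOLLOWER = "follower"
    CANDIDATE = "candidate"
    LEADER = "leader"

# Every node starts life as a follower.
initial_state = State.FOLLOWER
```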

All our nodes start in the follower state.
If a follower doesn't hear from a leader, it can become a candidate.
The candidate then requests votes from other nodes.
Nodes will reply with their vote.
The candidate becomes the leader if it gets votes from a majority of nodes.
This process is called Leader Election.
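The core rule of Leader Election is the majority check: a candidate wins only with votes from a strict majority of the cluster. A minimal sketch (the function name is illustrative):

```python
# A candidate becomes leader only with votes from a strict majority.
def wins_election(votes_received, cluster_size):
    return votes_received > cluster_size // 2

# In a 5-node cluster, 3 votes (including the candidate's own vote) win,
# while 2 votes do not.
wins_election(3, 5)   # True
wins_election(2, 5)   # False
```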

All changes to the system now go through the leader.
Each change is added as an entry in the node's log.
This log entry is currently uncommitted so it won't update the node's value.
To commit the entry the node first replicates it to the follower nodes…then the leader waits until a majority of nodes have written the entry.
The entry is now committed on the leader node and the node's value is "5".
The leader then notifies the followers that the entry is committed.
The cluster has now come to consensus about the system state.
This process is called Log Replication.
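The commit rule in Log Replication follows the same majority logic. Below is a toy sketch of the steps above, assuming for simplicity that every follower writes and acknowledges the entry (in reality some may be down or partitioned); names are illustrative:

```python
# An entry is committed once a majority of nodes have written it.
def is_committed(acks, cluster_size):
    return acks > cluster_size // 2

def replicate(leader_log, follower_logs, entry):
    leader_log.append(entry)      # leader appends the uncommitted entry
    acks = 1                      # the leader itself counts as one write
    for log in follower_logs:
        log.append(entry)         # in this toy version every follower
        acks += 1                 # writes the entry and acknowledges
    return is_committed(acks, 1 + len(follower_logs))
```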

Leader Election

In Raft there are two timeout settings which control elections.
First is the election timeout.
The election timeout is the amount of time a follower waits until becoming a candidate.
The election timeout is randomized to be between 150ms and 300ms.
After the election timeout the follower becomes a candidate and starts a new election term…votes for itself…and sends out Request Vote messages to other nodes.
If the receiving node hasn't voted yet in this term then it votes for the candidate…and the node resets its election timeout.
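The randomized election timeout is what keeps most elections from colliding: because each follower picks a different timeout in the 150–300 ms window, one node usually times out first and wins before the others even become candidates. A one-line sketch:

```python
import random

# Each follower draws its own election timeout from 150-300 ms.
# Randomization makes simultaneous candidacies (split votes) unlikely.
def election_timeout_ms():
    return random.uniform(150, 300)
```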
The leader begins sending out Append Entries messages to its followers.
These messages are sent in intervals specified by the heartbeat timeout.
Followers then respond to each Append Entries message.
This election term will continue until a follower stops receiving heartbeats and becomes a candidate.
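The heartbeat mechanism above can be sketched as a simple loop. This is a hypothetical structure (the function names and 50 ms interval are my own choices, picked to be well under the 150 ms minimum election timeout), not Raft's real implementation:

```python
import time

HEARTBEAT_INTERVAL = 0.05   # 50 ms, well under the 150 ms election timeout

# While it remains leader, a node sends empty Append Entries messages
# on every heartbeat interval, which resets followers' election timers.
def leader_loop(send_append_entries, is_leader):
    while is_leader():
        send_append_entries()
        time.sleep(HEARTBEAT_INTERVAL)
```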
Let's stop the leader and watch a re-election happen.
Node A is now leader of term 2.
Requiring a majority of votes guarantees that only one leader can be elected per term.
If two nodes become candidates at the same time then a split vote can occur.
Let's take a look at a split vote example…
Two nodes both start an election for the same term…
Now each candidate has 2 votes and can receive no more for this term.
The nodes will wait for a new election and try again.
Node C received a majority of votes in term 7 so it becomes leader.

Log Replication

Once we have a leader elected we need to replicate all changes to our system to all nodes.
This is done by using the same Append Entries message that was used for heartbeats.
Let's walk through the process.
First a client sends a change to the leader.
The change is appended to the leader's log…then the change is sent to the followers on the next heartbeat.
An entry is committed once a majority of followers acknowledge it…
…and a response is sent to the client.
Now let's send a command to increment the value by "2".
Our system value is now updated to "7".
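Applying committed entries to the state machine, as in the SET 5 then increment-by-2 sequence above, can be sketched like this (the "SET"/"ADD" command names are illustrative, not a fixed Raft command set):

```python
# Apply one committed log entry to the current state-machine value.
def apply(value, entry):
    op, arg = entry
    if op == "SET":
        return arg
    if op == "ADD":
        return value + arg
    raise ValueError(f"unknown op: {op}")

# Replaying the committed log: SET 5, then ADD 2.
value = None
for entry in [("SET", 5), ("ADD", 2)]:
    value = apply(value, entry)
# value is now 7, matching the walkthrough
```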

Raft can even stay consistent in the face of network partitions.
Let's add a partition to separate A & B from C, D & E.
Because of our partition we now have two leaders in different terms.
Let's add another client and try to update both leaders.
One client will try to set the value of node B to "3".
Node B cannot replicate to a majority so its log entry stays uncommitted.
The other client will try to set the value of node C to "8".
This will succeed because it can replicate to a majority.
Now let's heal the network partition.
Node B will see the higher election term and step down.
Both nodes A & B will roll back their uncommitted entries and match the new leader's log.
Our log is now consistent across our cluster.
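The reconciliation after the partition heals can be sketched as: keep only the entries the node had already committed, then adopt the new leader's log from that point on. This is a simplified toy of the rollback described above (real Raft compares term numbers per entry), with illustrative names:

```python
# A node that steps down discards its uncommitted tail and
# adopts the new leader's log from its last committed index.
def reconcile(local_log, local_committed, leader_log):
    kept = local_log[:local_committed]       # committed prefix survives
    return kept + leader_log[local_committed:]

# Node B had an uncommitted SET 3; the leader's log has SET 8 committed.
b_log = [("SET", 5), ("SET", 3)]
leader_log = [("SET", 5), ("SET", 8)]
b_log = reconcile(b_log, 1, leader_log)      # B's log now matches the leader
```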
References:
https://www.jdon.com/artichect/raft.html
http://thesecretlivesofdata.com/raft/

Posted by VenusJ on Wed, 23 Jan 2019 06:06:15 -0800