Hi.
A cluster with replication factor (RF) 3 has three nodes: A, B, and C.
When node A fails and can no longer provide service, node D is added. At this point the cluster has three nodes: B, C, and D.
Can node A rejoin the cluster after fault recovery? If so, how should the original data on node A be handled? Will it be rebalanced?
Hi,
When you add a new node, the tablet peers are rebalanced, but the peers on the node that is down stay as followers. If the node rejoins within 15 minutes (with the default configuration), those peers catch up on the latest changes and continue (some may be elected leaders to balance the load). But after 15 minutes they are considered lost, and new replicas are created on the remaining nodes to restore the replication factor. If the node comes back after that, it acts as a new server.
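To make the timeline concrete, here is a rough sketch of that decision (my own illustration in Python, not YugabyteDB code). The 15-minute default corresponds to the yb-tserver flag follower_unavailable_considered_failed_sec, which defaults to 900 seconds:

```python
import time

# Default of follower_unavailable_considered_failed_sec: 900 s = 15 min.
FOLLOWER_UNAVAILABLE_CONSIDERED_FAILED_SEC = 900

def peers_considered_lost(last_heartbeat: float, now: float | None = None) -> bool:
    """True once a follower has been unreachable long enough that its
    tablet peers are treated as lost and new replicas are created on
    the remaining nodes to restore the replication factor."""
    now = time.time() if now is None else now
    return now - last_heartbeat > FOLLOWER_UNAVAILABLE_CONSIDERED_FAILED_SEC

# Node unreachable for 20 minutes: re-replicate its tablet peers.
print(peers_considered_lost(time.time() - 1200))  # True
# Node came back within 10 minutes: its peers just catch up via Raft.
print(peers_considered_lost(time.time() - 600))   # False
```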
Thanks.
Will the original data on node A be cleaned up?
Yes. When it comes back after 15 minutes, it contacts the master, learns that its local data belongs to tablet peers that were deleted, and removes it.
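As a rough illustration of that cleanup handshake (again my own sketch with hypothetical names, not the actual master RPC), the rejoining server checks each tablet it still has on disk against the master's view and removes the ones it is no longer a peer of:

```python
def stale_tablets_to_delete(local_tablet_ids: list[str],
                            peers_by_tablet: dict[str, set[str]],
                            node_id: str) -> list[str]:
    """peers_by_tablet models what the master reports: for each tablet,
    the set of nodes currently in its Raft group. If this node is no
    longer a peer (replicas were re-created elsewhere while it was
    down), its local copy is stale and should be removed."""
    return [t for t in local_tablet_ids
            if node_id not in peers_by_tablet.get(t, set())]

# Node A rejoins with two tablets on disk; the master says tablet "t1"
# was re-replicated to B, C, D, so A deletes its old copy of "t1".
print(stale_tablets_to_delete(
    local_tablet_ids=["t1", "t2"],
    peers_by_tablet={"t1": {"B", "C", "D"}, "t2": {"A", "C", "D"}},
    node_id="A",
))  # ['t1']
```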
okay, I got it now. Thanks.
Hi @FranckPachot
Sorry to bother you.
I see this comment about UserFrontier in the source code, as follows:
“DocDB implementation of RocksDB UserFrontier. Contains an op id and a hybrid time. The difference between this and user boundary values is that here hybrid time is taken from committed Raft log entries, whereas user boundary values extract hybrid time from keys in a memtable. This is important for transactions, because boundary values would have the commit time of a transaction, but e.g. “apply intent” Raft log entries will have a later hybrid time, which would be reflected here.”
However, I don’t really understand what it means. Is there any documentation on the interpretation of UserFrontier?
Hi, I don’t think we have specific documentation about frontiers at the DocDB level. They’re used in many features such as TTL, CDC, and tablet splitting. What exactly do you want to know, and why? Did you experience some issues?
Hi, thanks for your reply.
I was looking at the open-source code recently and saw references to frontiers, but didn’t quite understand what they do. There are frontiers in the memtable and the WAL. May I ask: are they the markers that control which data gets flushed to disk from the memtable and the WAL?
Yes, that’s my understanding: they take version-history retention into account when flushing or compacting.
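For anyone else reading along, here is a toy model of the idea (my own sketch in Python; the real implementation is C++ inside DocDB, and the names here are mine): a frontier pairs an op id with a hybrid time, the memtable’s largest frontier advances as committed Raft entries are applied, and flush/compaction compare frontier hybrid times against a history cutoff to decide what may be garbage-collected:

```python
from dataclasses import dataclass

@dataclass
class Frontier:
    """Toy model of a DocDB-style UserFrontier: an op id plus a hybrid
    time taken from committed Raft log entries."""
    op_id: int
    hybrid_time: int

def advance(current: Frontier, applied: Frontier) -> Frontier:
    """The memtable's 'largest' frontier moves forward to cover the
    newest committed entry that was applied to it."""
    return Frontier(max(current.op_id, applied.op_id),
                    max(current.hybrid_time, applied.hybrid_time))

def can_drop_old_version(entry_hybrid_time: int, history_cutoff: int) -> bool:
    """A compaction may drop an overwritten version only if it is older
    than the history cutoff (derived from retention settings such as
    TTL), which is computed against frontier hybrid times."""
    return entry_hybrid_time < history_cutoff

f = advance(Frontier(op_id=10, hybrid_time=100),
            Frontier(op_id=11, hybrid_time=105))
print(f)                                       # Frontier(op_id=11, hybrid_time=105)
print(can_drop_old_version(90, history_cutoff=f.hybrid_time))  # True
```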
okay, I got it now.
Thanks a lot.