Multi-DC replication

I had a question about your multi-DC replication:

  • Is YugaByte’s multi-DC replication active / active (in the sense that reads can go to replica or master, of course with strict consistency for master reads and timeline consistency for replica reads)?
  • If so, what happens to writes from replica DC? Do they get forwarded to the primary DC to maintain strict consistency of writes?
  • If they do get forwarded, how does YugaByte provide read-after-write consistency?

Hi @ramesh.chandra,

Please see the answers to your questions inline.

Is YugaByte’s multi-DC replication active / active (in the sense that reads can go to replica or master, of course with strict consistency for master reads and timeline consistency for replica reads)?

You are correct, it is active/active. By default, all reads are routed to the tablet leader (the "master" for that key), with the ability to read from the followers (the replicas for that key). Read about what a tablet is here and what the tablet leader/followers are here.
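
To make the default read path concrete, here is a minimal sketch using the Cassandra-compatible Python driver against the YCQL API. The contact point, keyspace, and table/column names are illustrative assumptions, not taken from this thread.

```python
# Minimal sketch of the default (leader) read path through the
# Cassandra-compatible Python driver. The contact point, keyspace, and
# table/column names are illustrative assumptions.
from cassandra.cluster import Cluster

cluster = Cluster(contact_points=["127.0.0.1"])   # assumed local yb-tserver
session = cluster.connect("example_keyspace")      # hypothetical keyspace

# With the default (strong) consistency, the read is served by the tablet
# leader that owns this key, so it reflects all acknowledged writes.
row = session.execute(
    "SELECT balance FROM accounts WHERE id = %s", ["user-1"]
).one()
print(row.balance if row else "not found")

cluster.shutdown()
```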

If so, what happens to writes from replica DC? Do they get forwarded to the primary DC to maintain strict consistency of writes?

If they do get forwarded, how does YugaByte provide read-after-write consistency?

Yes, they get forwarded. Here is how the tunable read consistency levels work (see the sketch after this list):

  • Strongly consistent reads (from the tablet leaders) provide read-your-own-writes consistency.

  • Timeline-consistent reads (from the tablet followers) do not provide read-your-own-writes consistency, but they do guarantee that updates become readable in the order in which they were applied on the leader. This is different from eventual consistency (please see this blog article for more details).
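
A timeline-consistent read can be sketched the same way by lowering the per-statement consistency level. Note the assumption here: that ConsistencyLevel.ONE is how a follower read is requested through the CQL driver is for illustration only, and the connection details are the same hypothetical ones as in the earlier sketch.

```python
# Minimal sketch of a timeline-consistent (follower) read, using the same
# hypothetical connection details as the previous example. Assumption: the
# per-statement consistency level is how the tunable read consistency is
# selected through the CQL driver, with ONE allowing a follower to serve it.
from cassandra import ConsistencyLevel
from cassandra.cluster import Cluster
from cassandra.query import SimpleStatement

session = Cluster(contact_points=["127.0.0.1"]).connect("example_keyspace")

stmt = SimpleStatement(
    "SELECT balance FROM accounts WHERE id = %s",
    consistency_level=ConsistencyLevel.ONE,   # may be served by a follower
)
row = session.execute(stmt, ["user-1"]).one()
# The value may lag the leader, but updates become visible in the same
# order in which they were applied on the leader (timeline consistency).
print(row.balance if row else "not found")
```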

We plan to support “bounded staleness”, where each follower lags behind the leader by at most a bounded time interval or a bounded number of updates. In that case, the application would perform reads from the leader for the staleness interval after a write, after which it can read from the followers.
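
As a rough illustration of how an application might use such a bound once it exists, here is a hypothetical, application-side sketch. The bound value and the read_from_leader / read_from_follower helpers are made-up names standing in for the default-consistency and lower-consistency reads from the earlier sketches.

```python
# Hypothetical, application-side sketch of the pattern described above:
# keep reads on the leader for the staleness bound after a write, then
# allow follower reads. Every name here (STALENESS_BOUND_SECS,
# read_from_leader, read_from_follower) is illustrative.
import time

STALENESS_BOUND_SECS = 5.0   # assumed bound on how far followers may lag
_last_write_at = float("-inf")

def record_write() -> None:
    """Call after each write so reads can stay on the leader briefly."""
    global _last_write_at
    _last_write_at = time.monotonic()

def read(key: str):
    """Leader read inside the staleness window, follower read afterwards."""
    if time.monotonic() - _last_write_at < STALENESS_BOUND_SECS:
        return read_from_leader(key)      # strong read: sees our own write
    return read_from_follower(key)        # timeline-consistent, maybe stale

def read_from_leader(key: str):
    ...  # e.g. a default-consistency SELECT, as in the earlier sketch

def read_from_follower(key: str):
    ...  # e.g. a ConsistencyLevel.ONE SELECT, as in the earlier sketch
```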

Great question, thanks for asking!
