Will the yb-master be a performance bottleneck?

Hi.
In a YugabyteDB cluster, a user's read and write requests often need to find out from the yb-master where the data is distributed before the actual read or write can happen. So, across the whole cluster, a lot of requests end up hitting this one yb-master. Will it become a hot node? Will it become a performance bottleneck for the whole cluster?

Hi @ZhenNan2016

Most metadata is cached in the yb-tservers, and the yb-master often pushes changes to the yb-tservers, so there is less polling.
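
Conceptually, it works something like the sketch below (illustrative C++ only, with made-up names such as TabletLocationCache; this is not the actual yb-tserver code):

```cpp
// Illustrative sketch only; not the actual yb-tserver code.
#include <functional>
#include <string>
#include <unordered_map>
#include <utility>

// Hypothetical per-node cache of tablet locations. The real metadata is
// richer (key ranges, replicas, leaders), but the idea is the same:
// answer lookups locally and fall back to yb-master only on a miss,
// while yb-master can push updates when locations change.
class TabletLocationCache {
 public:
  using MasterLookupFn = std::function<std::string(const std::string&)>;

  explicit TabletLocationCache(MasterLookupFn master_lookup)
      : master_lookup_(std::move(master_lookup)) {}

  // Returns the node that hosts the tablet, consulting yb-master only
  // when the location is not cached yet.
  std::string Locate(const std::string& tablet_id) {
    auto it = cache_.find(tablet_id);
    if (it != cache_.end()) {
      return it->second;  // Served locally, no yb-master round trip.
    }
    std::string location = master_lookup_(tablet_id);  // Rare path.
    cache_[tablet_id] = location;
    return location;
  }

  // Called when yb-master pushes a location change (e.g. after a tablet
  // moves), so the cache stays fresh without polling.
  void OnMasterPush(const std::string& tablet_id, const std::string& location) {
    cache_[tablet_id] = location;
  }

 private:
  MasterLookupFn master_lookup_;
  std::unordered_map<std::string, std::string> cache_;
};
```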

If it becomes a bottleneck, it will be considered a bug and will be optimized.

@ZhenNan2016: In YB 2024.1 you can even tell YB which catalog tables you'd like to cache on the TServers via several new catalog gFlags! Note that this is an early-access feature, but I found it incredibly useful on a recent POC to help reduce query planning time when a query is first executed.

Hi @dorian_yugabyte
If a cluster has three nodes A, B, and C, a request arrives at node A, and the data it needs is stored on node B, then the request doesn't need to go through the yb-master; node A fetches the data directly from node B, right?

Yes. Node A will have already cached the tablet locations.
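
Continuing the sketch from my earlier reply, the scenario you describe would look roughly like this (again, purely illustrative):

```cpp
// Reuses TabletLocationCache from the sketch in my earlier reply.
#include <iostream>
#include <string>

int main() {
  int master_calls = 0;
  // Stand-in for the RPC to yb-master; we count how often it is hit.
  TabletLocationCache cache([&](const std::string& /*tablet_id*/) {
    ++master_calls;
    return std::string("node-B");
  });

  // yb-master already pushed this tablet's location to node A earlier,
  // so the read below goes straight from node A to node B.
  cache.OnMasterPush("tablet-on-B", "node-B");

  std::cout << "read from: " << cache.Locate("tablet-on-B") << "\n";  // node-B
  std::cout << "yb-master calls: " << master_calls << "\n";           // 0
  return 0;
}
```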

Okay, thanks.

By the way, I see this comment about UserFrontier in the source code:
“DocDB implementation of RocksDB UserFrontier. Contains an op id and a hybrid time. The difference between this and user boundary values is that here hybrid time is taken from committed Raft log entries, whereas user boundary values extract hybrid time from keys in a memtable. This is important for transactions, because boundary values would have the commit time of a transaction, but e.g. “apply intent” Raft log entries will have a later hybrid time, which would be reflected here.”
However, I don’t really understand what it means. Is there any documentation on the interpretation of UserFrontier?
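
To check my understanding, here is how I currently picture it (just an illustrative sketch based on that comment, with made-up names; not the real yb/docdb classes):

```cpp
// Illustrative sketch only; simplified, made-up names, not the real yb/docdb classes.
#include <algorithm>
#include <cstdint>

// Raft operation id: the term and index of a log entry.
struct OpIdSketch {
  int64_t term = 0;
  int64_t index = 0;
};

// Hybrid logical clock timestamp (physical time plus a logical counter in yb).
using HybridTimeSketch = uint64_t;

// Rough shape of the DocDB "user frontier": for a memtable or SST file it
// records the op id and hybrid time of the committed Raft entries whose
// writes it contains. The hybrid time comes from the Raft entry itself
// (e.g. an "apply intent" record), which for a transaction can be later
// than the commit time embedded in the keys; that is the difference the
// comment draws against "user boundary values" extracted from memtable keys.
struct UserFrontierSketch {
  OpIdSketch op_id;
  HybridTimeSketch hybrid_time = 0;

  // Advance the frontier as more committed Raft entries are applied.
  void Update(const OpIdSketch& entry_op_id, HybridTimeSketch entry_ht) {
    if (entry_op_id.index > op_id.index) {
      op_id = entry_op_id;
    }
    hybrid_time = std::max(hybrid_time, entry_ht);
  }
};

int main() {
  UserFrontierSketch frontier;
  frontier.Update({1, 10}, 1000);  // a regular write applied from the log
  frontier.Update({1, 11}, 1005);  // e.g. an "apply intent" entry with a later hybrid time
  return frontier.hybrid_time == 1005 ? 0 : 1;
}
```

Is that roughly the right mental model?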

@Jim_Knicely
Excellent. Thank you very much.

Hi @dorian_yugabyte
Is there any documentation on the interpretation of UserFrontier?
Thank you very much.