Global Table like CockroachDB

We need to create a global table in our YugabyteDB cluster, similar to CockroachDB. The cluster is configured with RF=5 across 3 regions and 9 nodes.
Is it possible to set "num_replicas": 9 when defining the tablespace for the global table on a cluster where RF=5?
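
For context, the CockroachDB feature we are trying to mirror is the GLOBAL table locality (CockroachDB syntax, shown only for comparison; it assumes a multi-region database is already configured):

-- CockroachDB, not YugabyteDB: a table with low-latency reads from every region.
CREATE TABLE reference_data (id INT PRIMARY KEY, val STRING) LOCALITY GLOBAL;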

Sure!

Say I have this cluster:

yugabyte=# SELECT host, cloud, region, zone FROM yb_servers() ORDER BY host;
   host    |      cloud       | region  |  zone
-----------+------------------+---------+--------
 127.0.0.1 | data_center_east | ailse01 | rack01
 127.0.0.2 | data_center_east | ailse02 | rack01
 127.0.0.3 | data_center_east | ailse03 | rack01
 127.0.0.4 | data_center_east | ailse04 | rack01
 127.0.0.5 | data_center_east | ailse05 | rack01
 127.0.0.6 | data_center_east | ailse01 | rack02
 127.0.0.7 | data_center_east | ailse02 | rack02
 127.0.0.8 | data_center_east | ailse03 | rack02
 127.0.0.9 | data_center_east | ailse04 | rack02
(9 rows)

With an RF of 5:

yugabyte=# \! lynx --dump http://127.0.0.1:7000/cluster-config | awk '/version:/,/universe_uuid:/' | grep " num_replicas:"
    num_replicas: 5

I can create an RF 9 tablespace and a table that uses it:

yugabyte=# CREATE TABLESPACE global_ts WITH (
yugabyte(#   replica_placement='{"num_replicas": 9, "placement_blocks":
yugabyte'#     [{"cloud":"data_center_east","region":"ailse01","zone":"rack01","min_num_replicas":1,"leader_preference":1},
yugabyte'#      {"cloud":"data_center_east","region":"ailse02","zone":"rack01","min_num_replicas":1,"leader_preference":1},
yugabyte'#      {"cloud":"data_center_east","region":"ailse03","zone":"rack01","min_num_replicas":1,"leader_preference":1},
yugabyte'#      {"cloud":"data_center_east","region":"ailse04","zone":"rack01","min_num_replicas":1,"leader_preference":1},
yugabyte'#      {"cloud":"data_center_east","region":"ailse05","zone":"rack01","min_num_replicas":1,"leader_preference":1},
yugabyte'#      {"cloud":"data_center_east","region":"ailse01","zone":"rack02","min_num_replicas":1,"leader_preference":1},
yugabyte'#      {"cloud":"data_center_east","region":"ailse02","zone":"rack02","min_num_replicas":1,"leader_preference":1},
yugabyte'#      {"cloud":"data_center_east","region":"ailse03","zone":"rack02","min_num_replicas":1,"leader_preference":1},
yugabyte'#      {"cloud":"data_center_east","region":"ailse04","zone":"rack02","min_num_replicas":1,"leader_preference":1}]}'
yugabyte(#     );
CREATE TABLESPACE

yugabyte=# CREATE TABLE rf9_table(c1 INT PRIMARY KEY) TABLESPACE global_ts;
CREATE TABLE
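
As a quick sanity check, the placement policy that was stored can be read back from the catalog (output not shown; it simply echoes the replica_placement JSON):

-- Read back the replica_placement option recorded for the tablespace.
SELECT spcname, spcoptions FROM pg_catalog.pg_tablespace WHERE spcname = 'global_ts';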

yugabyte=# SELECT tablet_id FROM yb_local_tablets WHERE table_name = 'rf9_table';
            tablet_id
----------------------------------
 f8684c68443e44d3863be323a27ead70
 a30a0f3234b64f11bfedbb695426d60e
 55eba04fa97d43e4b5bb2eef27bb1d24
 6604158d4d6847a494ca568bea1444bd
 2978480dd1634abfa7f9d9ba0164875a
 eaed19ffbc6141b5ac22db86f7b6c6e8
 77fc81623879414ca67f40e143be1d59
 772d0485cb0e4d739f286b335b436b63
 ac9c81de4bb341cdae29ef1638b9cd55
(9 rows)
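
With num_replicas set to 9 on a 9-node cluster, every tablet server hosts a replica of every tablet, which is why the local view already lists all of them. As a quick cross-check (host address taken from the server listing above; output omitted), the same query from any other node should return the same nine tablet_ids:

ysqlsh -h 127.0.0.3 -c "SELECT tablet_id FROM yb_local_tablets WHERE table_name = 'rf9_table';"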

Thanks a lot Jim, so is this a supported configuration for a production system?
Tested the same on our QA box, it's working.

@siddiqui

What's the reason for a higher replication factor at the tablespace level than the cluster's default?

Is it for tables on which you plan to use “follower reads”, where you intend to benefit from the higher number of followers? If so, that makes sense.
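
To make that concrete, follower reads in YSQL are opt-in per session and apply only inside read-only transactions; a minimal sketch (the staleness bound here is just an example value):

-- Follower reads trade freshness for locality: results may lag the
-- leader by up to the configured staleness bound.
SET yb_read_from_followers = true;
SET yb_follower_read_staleness_ms = 15000;  -- example; default is 30000
BEGIN TRANSACTION READ ONLY;
SELECT * FROM rf9_table WHERE c1 = 42;
COMMIT;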

But from a resilience/fault-tolerance point of view, the cluster-level default (RF=5 in your case) is what is used for the catalog information/metadata about tables, views, procedures, etc.

So a higher RF (RF=9) for the tablespace is not really buying you more resilience: the table data is at RF=9, but the metadata about the tables is still only at RF=5.

This is something to keep in mind.

@siddiqui – I was also going to echo @Kannan’s point: using a higher RF for a table doesn’t increase resilience if the overall cluster is already RF 5. A more common pattern is actually the opposite: creating a tablespace with a lower RF than the cluster to support use cases like Data Residency / GDPR compliance or Localized Reads / Latency Optimization, as sketched below.
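
To illustrate that opposite pattern, here is a sketch of a region-pinned, lower-RF tablespace; the cloud/region/zone names below are hypothetical placeholders, not from the cluster above:

-- Sketch only: placement names are hypothetical.
CREATE TABLESPACE eu_only_ts WITH (
  replica_placement='{"num_replicas": 3, "placement_blocks":
    [{"cloud":"aws","region":"eu-west-1","zone":"eu-west-1a","min_num_replicas":1},
     {"cloud":"aws","region":"eu-west-1","zone":"eu-west-1b","min_num_replicas":1},
     {"cloud":"aws","region":"eu-west-1","zone":"eu-west-1c","min_num_replicas":1}]}'
);
CREATE TABLE eu_customers(id INT PRIMARY KEY, doc JSONB) TABLESPACE eu_only_ts;

All reads and writes for eu_customers then stay within that single region, and its data never leaves it.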

Thanks Kannan and Jim, our requirement is:
Reads must be up-to-date; this table is referenced by foreign keys, and we still want to avoid cross-region round trips. The main aim is to avoid round trips altogether.