Runtime error (yb/yql/pgwrapper/ /root/yugabyte/postgres/bin/initdb failed with exit code 1

I installed master/t-server on 3 nodes:,, and it works fine!
Now I want to install 3 more nodes with t-servers only:,,
It gives this error on t-server nodes:
Runtime error (yb/yql/pgwrapper/ /root/yugabyte/postgres/bin/initdb failed with exit code 1

I installed the master on the first 3 nodes with:

--rpc_bind_addresses \ #36,37
--fs_data_dirs "/root/disk1,/root/disk2"
>& /root/disk1/yb-master.out &

And I installed the tserver on all nodes with:

--rpc_bind_addresses \ #36,37,38,39,40
--pgsql_proxy_bind_address \ #36,37,38,39,40
--cql_proxy_bind_address \ #36,37,38,39,40
--fs_data_dirs "/root/disk1,/root/disk2"
>& /root/disk1/yb-tserver.out &

How do I configure the first 3 nodes (35,36,37) for writes and the other 3 servers (38,39,40) for reads only?

First, let’s break down the concepts.

The yb-master does not function as a traditional primary/standby or read-write/read-replica setup. Its role is to store the cluster metadata, such as the locations of the yb-tservers, the tablets they manage, and the SQL dictionary, which is similar to the PostgreSQL catalog. Think of it as a control plane, with typically one instance per failure zone in a Replication Factor 3 cluster.

On the other hand, the yb-tservers make up the data plane. They are responsible for storing portions of the database, including table rows, index entries, and transaction statuses. In a Replication Factor 3 cluster, each tablet has three peers distributed across yb-tservers, ensuring fault tolerance across failure zones. For a RF3 cluster, you need at least 3 yb-tservers, with each holding a copy of the entire database. Additional yb-tservers can be added in all failure zones, each holding a portion of the database.
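As a back-of-envelope sketch of the distribution described above (the tablet count of 4 is a made-up example, not from the original cluster): with T tablets at replication factor RF spread over N yb-tservers, each yb-tserver holds roughly T*RF/N tablet peers.

```shell
# Hypothetical arithmetic: peers per yb-tserver in an RF3 cluster.
TABLETS=4
RF=3
for NODES in 3 6; do
  echo "tservers=$NODES -> ~$(( TABLETS * RF / NODES )) peers each"
done
# prints:
# tservers=3 -> ~4 peers each
# tservers=6 -> ~2 peers each
```

Adding yb-tservers does not change the number of peers per tablet (that stays at RF); it spreads the same peers over more nodes.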

In your example, the ‘yb-master’ command is accurate, but in the ‘yb-tserver’ command you should only list the masters: --tserver_master_addrs,, This will distribute the database across the 6 nodes, and they learn the whole configuration through the yb-masters.
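For illustration, a yb-tserver start along those lines might look like the sketch below. The 10.0.0.x addresses are hypothetical placeholders for your node IPs, and 7100/9100/5433/9042 are the default master RPC, tserver RPC, YSQL, and YCQL ports:

```shell
# Hypothetical sketch: a yb-tserver that only points at the yb-masters.
# Replace 10.0.0.35-38 with your real node addresses.
./bin/yb-tserver \
  --tserver_master_addrs 10.0.0.35:7100,10.0.0.36:7100,10.0.0.37:7100 \
  --rpc_bind_addresses 10.0.0.38:9100 \
  --pgsql_proxy_bind_address 10.0.0.38:5433 \
  --cql_proxy_bind_address 10.0.0.38:9042 \
  --fs_data_dirs "/root/disk1,/root/disk2" \
  >& /root/disk1/yb-tserver.out &
```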

You can connect to any of these 6 ‘yb-tservers’, access the entire database, and execute read and write queries. Given its RF3 nature, the system remains resilient if one node goes down. If you have 3 failure zones, you can specify each ‘yb-master’ and ‘yb-tserver’ location to YugabyteDB using --placement_cloud, --placement_region, and --placement_zone. This ensures the distribution of replicas across different zones, making the system tolerant to a one-zone failure.
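A sketch of those placement flags on a yb-tserver, assuming a made-up on-premises topology (the cloud/region/zone names and IPs here are illustrative, not from the original setup):

```shell
# Hypothetical sketch: pin a yb-tserver to a failure zone.
# "onprem"/"dc1"/"rack1" are invented labels; use your own topology.
./bin/yb-tserver \
  --tserver_master_addrs 10.0.0.35:7100,10.0.0.36:7100,10.0.0.37:7100 \
  --placement_cloud onprem \
  --placement_region dc1 \
  --placement_zone rack1 \
  --fs_data_dirs "/root/disk1,/root/disk2" \
  >& /root/disk1/yb-tserver.out &
```

The same three flags go on each yb-master as well, so the masters can place one replica of every tablet in each zone.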

A cluster like this is termed a YugabyteDB universe. It’s possible to add another read-only cluster to a YugabyteDB universe—sharing the same ‘yb-master’ but with an additional data copy that does not participate in the write quorum, so it behaves like asynchronous replication. In this read-only cluster, reads may be a few seconds stale, so it cannot be used for writes or failover. However, it can help minimize latency for reporting queries in remote regions. Given that your IP addresses are in the same network, you may not need read replicas.
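If you do want a read replica later, the setup is roughly the sketch below: declare the read-replica placement to the masters with yb-admin, then start the extra yb-tservers with a matching placement UUID. The placement string `onprem.dc2.rack1`, the UUID `rr-placement-1`, and the IPs are all made-up example values:

```shell
# Hypothetical sketch: add a read-replica placement to the universe.
./bin/yb-admin \
  --master_addresses 10.0.0.35:7100,10.0.0.36:7100,10.0.0.37:7100 \
  add_read_replica_placement_info onprem.dc2.rack1:1 1 rr-placement-1

# Read-replica yb-tservers then start with the matching placement UUID:
./bin/yb-tserver \
  --tserver_master_addrs 10.0.0.35:7100,10.0.0.36:7100,10.0.0.37:7100 \
  --placement_uuid rr-placement-1 \
  --placement_cloud onprem --placement_region dc2 --placement_zone rack1 \
  --fs_data_dirs "/root/disk1,/root/disk2" \
  >& /root/disk1/yb-tserver.out &
```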

Thank you @FranckPachot!

Can I install just yb-tserver on new nodes, without installing yb-master, and integrate them into the existing 3-node cluster, thereby dividing the Tablet-Peers? For example, I now have 12 Tablet-Peers (on 3 nodes), 4 per node. Can I add 3 more yb-tserver-only nodes to divide 12/6 and reach 2 Tablet-Peers per node, having 3 yb-masters and 6 yb-tservers? Or can yb-tserver not run without yb-master?
When I run this on nodes 4, 5 and 6:

--rpc_bind_addresses \ #39,40
--pgsql_proxy_bind_address \ #39,40
--cql_proxy_bind_address \ #39,40
--fs_data_dirs "/root/disk1,/root/disk2"
>& /root/disk1/yb-tserver.out &

I got this error:

Running …
F0630 17:58:57.626783 25315] Runtime error (yb/yql/pgwrapper/ /root/yugabyte- failed with exit code 1


Please read Deployment checklist for YugabyteDB clusters | YugabyteDB Docs

The number of YB-Master servers running in a cluster should match RF.
The number of YB-TServer servers running in the cluster should not be less than the RF.
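The two checklist rules above can be sketched as a quick pre-flight check; the counts are example values for the cluster described in this thread:

```shell
# Sketch of the checklist rules: masters must equal RF,
# tservers must be at least RF.
RF=3
MASTERS=3
TSERVERS=6
[ "$MASTERS" -eq "$RF" ]  && echo "yb-master count OK"
[ "$TSERVERS" -ge "$RF" ] && echo "yb-tserver count OK"
```

So with RF 3 you keep exactly 3 yb-masters, and the 3 new nodes run yb-tserver only; a yb-tserver does not need a local yb-master, it only needs to reach the existing ones.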