Yugabyte multi-site cluster

Hi

To use the multi-site feature of Yugabyte, is the Enterprise Edition necessary, or can it work with the Community Edition?

Regards

Rajesh Sarda

Hi Rajesh,

You can set up multi-node clusters with CE (Community Edition) just fine.

Please see the OS requirements and guidelines for setting up a multi-node cluster.

Deploy | YugabyteDB Docs

If you have any questions, please don’t hesitate to ask.

regards
Kannan

Hi

Thanks for the prompt response; apologies for not providing complete details.

We want to install Yugabyte across 3 data centers and wanted to confirm whether this is supported in CE.

If yes, is there a web console available for administration?

Regards

Rajesh Sarda

Hi @rajesh.sarda:

Yes, with CE you can set up YugaByte across multiple data centers, and the replication of data will be done in a DC-aware manner. For example, if you have a 12-node cluster across three data centers (4 nodes each) and are using a replication factor of 3, then YugaByte DB makes sure that each piece of data is replicated three ways in a DC-aware manner (i.e. each of the copies is in a distinct data center).

In the CE version, each node has a simple UI for exploring basic cluster state, tables, and metrics. The metrics can be exported to a metrics system (such as Prometheus) with ease.
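For example (a minimal sketch; the ports and the /prometheus-metrics endpoint below are assumptions based on the default web UI ports, so adjust for your version), you can pull a node's metrics in Prometheus format directly:

# Tablet server metrics (default tserver web UI port 9000 assumed).
curl http://<node-ip>:9000/prometheus-metrics

# Master metrics (default master web UI port 7000 assumed).
curl http://<node-ip>:7000/prometheus-metrics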

Are you trying to get this set up in a public cloud environment (such as AWS, Azure, or GCP) or in an on-prem data center?

btw: you can also instant message us on https://gitter.im/YugaByte/Lobby if you run into any issues when trying to install CE in a multi-node/multi-cluster setup.

regards,
Kannan

Great

  • We are trying to install in an on-prem data center.

  • Can you provide a help doc where the process is described? We are not clear from the document we have.

Regards

Rajesh Sarda

Hi @rajesh.sarda,

We do not seem to have it in the docs yet, thanks for pointing that out - will add it. In the meantime, here are some steps to get you going. The following example mimics the setup on a single node - we are going to deploy a cluster across the following IP addresses:

  • 127.0.0.1 in cloud=mycompany dc=dc1 rack=rack1
  • 127.0.0.2 in cloud=mycompany dc=dc2 rack=rack2
  • 127.0.0.3 in cloud=mycompany dc=dc3 rack=rack3

Pre-requisites to try on a single node

# Delete any existing data.
rm -rf /tmp/yugabyte-local-cluster/

# Create the various directories.
mkdir -p /tmp/yugabyte-local-cluster/node-1/disk-1
mkdir -p /tmp/yugabyte-local-cluster/node-2/disk-1
mkdir -p /tmp/yugabyte-local-cluster/node-3/disk-1
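If the extra loopback addresses (127.0.0.2, 127.0.0.3) do not respond on your machine, you may need to bring them up first. This is a hedged sketch: Linux usually answers on all of 127.0.0.0/8 by default, while macOS typically needs explicit aliases.

# Linux (often not required, since 127.0.0.0/8 is routed to lo by default):
sudo ip addr add 127.0.0.2/8 dev lo
sudo ip addr add 127.0.0.3/8 dev lo

# macOS:
sudo ifconfig lo0 alias 127.0.0.2
sudo ifconfig lo0 alias 127.0.0.3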

Start masters

Start master #1 in dc1, rack1 by running the following command. The last 3 parameters in the command below inform the master of its placement.

./bin/yb-master \
    --fs_data_dirs /tmp/yugabyte-local-cluster/node-1/disk-1 \
    --webserver_interface 127.0.0.1 \
    --rpc_bind_addresses 127.0.0.1 \
    --replication_factor=3 \
    --master_addresses 127.0.0.1:7100,127.0.0.2:7100,127.0.0.3:7100 \
    --placement_cloud=mycompany \
    --placement_region=dc1 \
    --placement_zone=rack1 &

Start master #2 in dc2, rack2 by running the following.

./bin/yb-master \
    --fs_data_dirs /tmp/yugabyte-local-cluster/node-2/disk-1 \
    --webserver_interface 127.0.0.2 \
    --rpc_bind_addresses 127.0.0.2 \
    --replication_factor=3 \
    --master_addresses 127.0.0.1:7100,127.0.0.2:7100,127.0.0.3:7100 \
    --placement_cloud=mycompany \
    --placement_region=dc2 \
    --placement_zone=rack2 &

Start master #3 in dc3, rack3:

./bin/yb-master \
    --fs_data_dirs /tmp/yugabyte-local-cluster/node-3/disk-1 \
    --webserver_interface 127.0.0.3 \
    --rpc_bind_addresses 127.0.0.3 \
    --replication_factor=3 \
    --master_addresses 127.0.0.1:7100,127.0.0.2:7100,127.0.0.3:7100 \
    --placement_cloud=mycompany \
    --placement_region=dc3 \
    --placement_zone=rack3 &

You should now be able to view the master UI at http://localhost:7000/ and it should show the correct placements for the masters.
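If you prefer a command-line check, yb-admin should also be able to list the masters (a minimal sketch; the exact output format can vary by version):

./bin/yb-admin --master_addresses 127.0.0.1:7100,127.0.0.2:7100,127.0.0.3:7100 list_all_masters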

Set the placement config

Next, we tell the master to place replicas of any piece of data across the different DCs. We can do this by running the following:

./bin/yb-admin --master_addresses 127.0.0.1:7100,127.0.0.2:7100,127.0.0.3:7100 modify_placement_info mycompany.dc1.rack1,mycompany.dc2.rack2,mycompany.dc3.rack3 3

You can view the updated config on this page: http://localhost:7000/cluster-config. It should look like the following:

replication_info {
  live_replicas {
    num_replicas: 3
    placement_blocks {
      cloud_info {
        placement_cloud: "mycompany"
        placement_region: "dc1"
        placement_zone: "rack1"
      }
      min_num_replicas: 1
    }
    placement_blocks {
      cloud_info {
        placement_cloud: "mycompany"
        placement_region: "dc2"
        placement_zone: "rack2"
      }
      min_num_replicas: 1
    }
    placement_blocks {
      cloud_info {
        placement_cloud: "mycompany"
        placement_region: "dc3"
        placement_zone: "rack3"
      }
      min_num_replicas: 1
    }
  }
}

Start tablet servers

Next, start 3 tablet servers in the appropriate placements. Again, the last 3 parameters determine the location.

./bin/yb-tserver \
    --fs_data_dirs /tmp/yugabyte-local-cluster/node-1/disk-1 \
    --webserver_interface 127.0.0.1 \
    --rpc_bind_addresses 127.0.0.1 \
    --tserver_master_addrs 127.0.0.1:7100,127.0.0.2:7100,127.0.0.3:7100 \
    --memory_limit_hard_bytes 1073741824 \
    --redis_proxy_bind_address 127.0.0.1 \
    --cql_proxy_bind_address 127.0.0.1 \
    --pgsql_proxy_bind_address 127.0.0.1 \
    --local_ip_for_outbound_sockets 127.0.0.1 \
    --placement_cloud=mycompany \
    --placement_region=dc1 \
    --placement_zone=rack1 &

./bin/yb-tserver \
    --fs_data_dirs /tmp/yugabyte-local-cluster/node-2/disk-1 \
    --webserver_interface 127.0.0.2 \
    --rpc_bind_addresses 127.0.0.2 \
    --tserver_master_addrs 127.0.0.1:7100,127.0.0.2:7100,127.0.0.3:7100 \
    --memory_limit_hard_bytes 1073741824 \
    --redis_proxy_bind_address 127.0.0.2 \
    --cql_proxy_bind_address 127.0.0.2 \
    --pgsql_proxy_bind_address 127.0.0.2 \
    --local_ip_for_outbound_sockets 127.0.0.2 \
    --placement_cloud=mycompany \
    --placement_region=dc2 \
    --placement_zone=rack2 &

./bin/yb-tserver \
    --fs_data_dirs /tmp/yugabyte-local-cluster/node-3/disk-1 \
    --webserver_interface 127.0.0.3 \
    --rpc_bind_addresses 127.0.0.3 \
    --tserver_master_addrs 127.0.0.1:7100,127.0.0.2:7100,127.0.0.3:7100 \
    --memory_limit_hard_bytes 1073741824 \
    --redis_proxy_bind_address 127.0.0.3 \
    --cql_proxy_bind_address 127.0.0.3 \
    --pgsql_proxy_bind_address 127.0.0.3 \
    --local_ip_for_outbound_sockets 127.0.0.3 \
    --placement_cloud=mycompany \
    --placement_region=dc3 \
    --placement_zone=rack3 &

You should now be able to see the tablet servers in the UI on the tablet servers page at http://localhost:7000/tablet-servers.

At this point you should be good to go. The multi-node setup is similar - you just need to replace the appropriate IP addresses. Please reach out if you have any issues. I will add these to the docs soon with the various screenshots.
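As an optional sanity check, you could connect through one of the CQL proxies and write/read a row; this assumes the bundled cqlsh client and the default CQL port 9042 (both may differ in your build):

# Create a small test table and read back a row through the CQL proxy on node 1.
./bin/cqlsh 127.0.0.1 9042 -e "
  CREATE KEYSPACE IF NOT EXISTS test_ks;
  CREATE TABLE IF NOT EXISTS test_ks.kv (k TEXT PRIMARY KEY, v TEXT);
  INSERT INTO test_ks.kv (k, v) VALUES ('hello', 'world');
  SELECT * FROM test_ks.kv;"

With the placement config above, the three replicas of each row should land in dc1, dc2, and dc3.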


Hi @rajesh.sarda:

So the steps above outline the gist of things to create a zone/dc/cloud aware multi-node cluster.

Now, to create a multi-node cluster, you can use the steps documented here, but keeping in mind the following additional key points from Karthik’s post above:

  1. When you start any yb-master or yb-tserver, you must let the system know which zone/dc/cloud that process is running in using the additional command line options:

--placement_cloud= --placement_region= --placement_zone=

  2. Suppose you want to create a 12-node cluster across 3 DCs with a replication factor of 3.
  • All 12 nodes will run a yb-tserver process (4 nodes per DC).
  • Three of those nodes (1 per DC) should also run a yb-master process (see the example invocation after the yb-admin command below).
  3. When the three master processes are started for the first time, the cluster is created. After this step you must run the yb-admin command to tell the cluster what kind of placement policy you want. The following command indicates that we want one copy each in the mycompany.dc1.rack1, mycompany.dc2.rack2, and mycompany.dc3.rack3 zones.

./bin/yb-admin --master_addresses <master1>:7100,<master2>:7100,<master3>:7100 modify_placement_info mycompany.dc1.rack1,mycompany.dc2.rack2,mycompany.dc3.rack3 3
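For example (purely illustrative; every <...> value below is a placeholder you would replace with your own directories and addresses), one of the four dc1 tablet servers might be started like this:

./bin/yb-tserver \
    --fs_data_dirs <data-dir> \
    --rpc_bind_addresses <node-ip-in-dc1> \
    --tserver_master_addrs <master1>:7100,<master2>:7100,<master3>:7100 \
    --placement_cloud=mycompany \
    --placement_region=dc1 \
    --placement_zone=rack1 &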

Thank you

We are able to set up the cluster across data centers.

Regards


Glad to hear. Thanks…