YugabyteDB cluster on different physical servers with Docker Compose

Hello,

I have a setup with multiple servers that each will run a YugabyteDB node to form a multi-site cluster. Each node will run in a docker container, and will be managed by docker compose.

One node will be master and the other nodes will join that master. Let’s assume two nodes for simplicity:

  • Node 1: This container will run on a host with IP: 192.168.120.243 and will be the master. The container will get the IP: 172.18.0.2.
  • Node 2: This container will run on a host with IP: 192.168.120.244 and intends to join the master. This would create a 2 node cluster. The container will also get the IP: 172.18.0.2, but will, as mentioned, run on a different host.

Here is my docker compose file for node 1 to run on the host with IP 192.168.120.243:

name: my_project

services:

  db:
    image: yugabytedb/yugabyte:2025.1.0.1-b3
    container_name: my_container_name_1
    hostname: my_hostname_1
    restart: always
    command: [ "bin/yugabyted",
               "start",
               "--background=false",
               "--advertise_address=my_container_name_1",
               "--cloud_location=my_cloud.my_region_1.my_zone_1" ]
    ports:
      - 7000:7000
      - 7100:7100
      - 9000:9000
      - 9100:9100
      - 15433:15433
      - 5433:5433
      - 9042:9042

and here it is for node 2 to run on the host with IP 192.168.120.244:

name: my_project

services:

  db:
    image: yugabytedb/yugabyte:2025.1.0.1-b3
    container_name: my_container_name_2
    hostname: my_hostname_2
    restart: always
    command: [ "bin/yugabyted",
               "start",
               "--background=false",
               "--advertise_address=my_container_name_2",
               "--join=192.168.120.243",
               "--cloud_location=my_cloud.my_region_2.my_zone_2" ]
    ports:
      - 7000:7000
      - 7100:7100
      - 9000:9000
      - 9100:9100
      - 15433:15433
      - 5433:5433
      - 9042:9042

The networks created for each docker project get the name “my_project_default” (same for both).

If I run “docker compose up -d” on the host with IP: 192.168.120.243, the node comes up fine and becomes master of a new cluster.

If I now go to the host with IP: 192.168.120.244 and run “docker compose up -d”, the container fails to start. In the docker logs one finds a message similar to:

ERROR: Master node present at my_container_name_1:7000 is not reachable.

The above setup works fine when the docker containers are started on the same host, but as soon as the containers run on different hosts, there seems to be a communication problem.

I have deactivated the firewalls on both servers, so there is nothing blocking the communication.

The “join” request does seem to reach node 1, since the “my_container_name_1“ name is correctly identified. However, the connection to the master then fails.

My suspicion is that node 2 tries to find “my_container_name_1:7000“ on IP: 192.168.120.244 instead of IP: 192.168.120.243, but that will fail as IP: 192.168.120.244 has no such container.
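
One way to check this suspicion (assuming getent is available in the yugabyte image) would be to run a throwaway container on the same docker network on the host with IP 192.168.120.244 and try to resolve the name:

docker run --rm --network my_project_default yugabytedb/yugabyte:2025.1.0.1-b3 \
  getent hosts my_container_name_1
# I would expect this to print nothing on 192.168.120.244, since that host's
# my_project_default network has no container with that name.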

Some questions:

  • How can I resolve this issue?
  • Do I need to do some network configuration in docker compose to “expand“ the docker network?
  • Can I configure YugabyteDB somehow to direct the incoming join request to the correct container?

I have searched almost all available documentation online for days, but both the official documentation and most other use cases only consider docker containers running on one and the same host.

I am only interested in using the free version of YugabyteDB. The two servers in my example above (192.168.120.243 and 192.168.120.244) are run “on-prem”. We are not using any 3rd party cloud providers.

Thank you!

Hi @pzonet

Just be aware that 3 nodes is the minimum for deployment in production, see: Deployment checklist for YugabyteDB clusters | YugabyteDB Docs.

I think you are missing some ports? See 7100 & 9100, which are needed: Default ports reference | YugabyteDB Docs

Hi @dorian_yugabyte ,

Sorry, I did not add them in my post. They are mapped in my real compose files, but I missed adding them when I simplified my question for the forum.

I edited my original post with the missing port mappings.

Hi @dorian_yugabyte

I checked the yugabyted.log file. This is what it says on the joining node:

[yugabyted start] 2025-09-26 17:30:03,269 INFO:  | 0.0s | Running yugabyted command: 'bin/yugabyted start --background=false --advertise_address=my_container_name_2 --join=192.168.120.243 --cloud_location=my_cloud.my_region_2.my_zone_2'
[yugabyted start] 2025-09-26 17:30:03,269 INFO:  | 0.0s | cmd = start using config file: /root/var/conf/yugabyted.conf
[yugabyted start] 2025-09-26 17:30:03,269 INFO:  | 0.0s | Found directory /home/yugabyte/bin for file openssl_proxy.sh
[yugabyted start] 2025-09-26 17:30:03,269 INFO:  | 0.0s | Found directory /home/yugabyte/bin for file yb-admin
[yugabyted start] 2025-09-26 17:30:03,269 INFO:  | 0.0s | Found directory /home/yugabyte/bin for file yb-ts-cli
[yugabyted start] 2025-09-26 17:30:03,269 INFO:  | 0.0s | Found directory /home/yugabyte/postgres/bin for file pg_upgrade
[yugabyted start] 2025-09-26 17:30:03,270 INFO:  | 0.0s | Fetching configs from join IP...
[yugabyted start] 2025-09-26 17:30:03,270 INFO:  | 0.0s | Trying to get masters information from http://192.168.120.243:9000/api/v1/masters (Timeout=60)
[yugabyted start] 2025-09-26 17:30:03,273 DEBUG:  | 0.0s | Tserver 192.168.120.243 returned the followingmaster leader my_container_name_1.
[yugabyted start] 2025-09-26 17:30:03,282 INFO:  | 0.0s | HTTP Error occured while hitting the api endpoint http://my_container_name_1:7000/api/v1/tablet-servers: <urlopen error [Errno -2] Name or service not known>
[yugabyted start] 2025-09-26 17:30:03,282 ERROR:  | 0.0s | ERROR: Master node present at my_container_name_1:7000 is not reachable.

So it does return the correct “followingmaster leader”, but fails to reach it at port 7000 in the following step.

On the master node, the services are up and running fine:

+------------------------------------------------------------------------------------------------------------------+
|                                               yugabyted                                                          |
+------------------------------------------------------------------------------------------------------------------+
| Status              : Running.                                                                                   |
| YSQL Status         : Ready                                                                                      |
| Replication Factor  : 1                                                                                          |
| YugabyteDB UI       : http://my_container_name_1:15433                                                           |
| JDBC                : jdbc:postgresql://my_container_name_1:5433/yugabyte?user=yugabyte&password=yugabyte        |
| YSQL                : bin/ysqlsh -h my_container_name_1  -U yugabyte -d yugabyte                                 |
| YCQL                : bin/ycqlsh my_container_name_1 9042 -u cassandra                                           |
| Data Dir            : /root/var/data                                                                             |
| Log Dir             : /root/var/logs                                                                             |
| Universe UUID       : 669ba1eb-c5ea-4794-90e5-a51895174866                                                       |
+------------------------------------------------------------------------------------------------------------------+

Any clue what is wrong?

@pzonet

In your setup, the containers are being brought up on the bridge networks of separate VMs. There is no way for the containers to communicate with each other.

You will first need to create a docker setup where the docker daemons on the different nodes can communicate with each other. This can be achieved using either an overlay network or an ipvlan network.

Please refer to the docker network documentation for setting up the docker overlay network - Overlay network driver | Docker Docs. This is good for dev purposes. I think you will need to set up a docker ipvlan network for running any prod or perf workloads.
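
At a high level, and only as a sketch (the IPs are from your example and the network name is made up, adjust to your environment):

# On 192.168.120.243 (swarm manager):
docker swarm init --advertise-addr 192.168.120.243
docker swarm join-token worker          # prints the join command for the other host

# On 192.168.120.244, run the printed join command, along the lines of:
docker swarm join --token <worker-token> 192.168.120.243:2377

# Back on the manager, create an attachable overlay network that
# standalone/compose containers on both hosts can join:
docker network create -d overlay --attachable my_overlay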

Thanks,
Nikhil

Hi @nmalladi , @dorian_yugabyte

Thanks a lot for this tip!

I have now set up two nodes and used the overlay driver instead of the bridge driver between them. In my example, I am aiming for two stand-alone containers in swarm mode (I am not using swarm services). This means I have run docker swarm init on the master node, followed by a join from the follower. As soon as I have this working, I will add a third node.

The overlay network is on a 10.0.1.0/24 subnet, which is fine. The two nodes got IPs of 10.0.1.2 and 10.0.1.3. I managed to start up a yugabyte container on both nodes and created a cluster.

Now, only one thing remains, and it is something I have not been able to solve by myself…

It seems that the overlay network refuses any connection into it, except for the connections between the nodes. There are two kinds of access to the nodes that I need:

  • I want to be able to access the YugabyteDB UI on port 15433 and also the dashboards on ports 7000 and 9000 from each of the hosts.
  • I want to run an additional container on each of the hosts that will speak with the yugabyte containers in order to interact with the databases. This is done on port 5433.

However, I have not managed to do this. With every attempt, access into the overlay network from either the host or another container seems impossible. Only access from nodes within the overlay network has been successful.

Let us look only from the perspective of host 1 (192.168.120.243). Here is the compose file:

name: my_project

networks:
                  
  internal_network:
    name: internal_network
    driver: bridge

  external_network:
    name: external_network
    driver: overlay
    attachable: true

services:

  app:
    image: app-image:1.0
    container_name: my_app
    hostname: host_name
    networks:
      - internal_network
    restart: always
    environment:
      DATABASE_URL: postgresql://my_user:123my_password@db:5433/my_db
    ports:
      - 80:8000
    depends_on:
      - db

  db:
    image: yugabytedb/yugabyte:2025.1.0.1-b3
    container_name: my_container_name_1
    hostname: my_hostname_1
    networks:
      - internal_network
      - external_network
    restart: always
    command: [ "bin/yugabyted",
               "start",
               "--background=false",
               "--advertise_address=my_container_name_1",
               "--cloud_location=my_cloud.my_region_1.my_zone_1" ]
    environment:
      POSTGRES_DB: my_db
      POSTGRES_USER: my_user
      POSTGRES_PASSWORD: 123my_password
    ports:
      - 7000:7000
      - 7100:7100
      - 9000:9000
      - 9100:9100
      - 15433:15433
      - 5433:5433
      - 9042:9042

Here, the internal_network ends up on the 172.19.0.0/16 subnet, which the yugabyte container is also part of, in addition to the external overlay network. As you can see, a lot of ports are published for the db service.

The issue now is that the app service fails to communicate with the db service at port 5433:

ERROR    | app.database.session:<module>:36 - Failed attempt to connect to the database: (psycopg2.OperationalError) connection to server at "db" (172.19.0.2), port 5433 failed: Connection refused

In addition, the YugabyteDB UI is unreachable from the host:

curl -v 192.168.120.243:15433
*   Trying 192.168.120.243:15433...
* connect to 192.168.120.243 port 15433 failed: Connection refused
* Failed to connect to 192.168.120.243 port 15433: Connection refused
* Closing connection 0
curl: (7) Failed to connect to 192.168.120.243 port 15433: Connection refused

However, if I log into the running yugabyte container and run curl -v 10.0.1.2:15433, it can access the UI with no issues.

My questions:

  • Why can I not access the overlay network from outside the overlay network? Should the ports section in the compose file not open them up from both the host and from other containers?
  • How can I fix this? It feels like I am super close.

@pzonet

I think the issue is that the YugabyteDB container is binding only to the overlay network, so requests to the IP 10.0.1.2 work. However, the container is not listening on the IP address associated with the bridge network, 172.19.0.2.

I believe port forwarding on the container works only with the bridge network. Any request to 192.168.120.243:15433 will get forwarded to 172.19.0.2:15433; however, the container is not listening on that IP, so the request fails.

I think we will have to add support in yugabyted for binding to both the bridge and overlay network interfaces; currently, yugabytedb supports listening on only one network, which here is the overlay network.
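
If you want to confirm this (assuming ss is available inside the image), you could list the listening sockets in the running container:

docker exec my_container_name_1 ss -lntp | grep -E '5433|7000|15433'
# If the listeners are bound only to the overlay address (10.0.1.x) rather than
# 0.0.0.0 or the bridge address, requests forwarded to 172.19.0.2 will be
# refused, which would match the errors you are seeing.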

We generally don’t test the docker swarm way of deploying the YugabyteDB database, as we consider docker deployments for development only.

I did some quick research, and it sounds like we should be able to have both the app and yugabytedb listen on the overlay network using docker stack deploy. I don’t have relevant experience with this deployment method, but I think you can see if this approach works.
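
Just as an untested sketch of that idea (not something we have validated; the yugabyted flags and ports would need adjusting, and it still depends on which address yugabyted ends up binding to), a stack file could look roughly like this and be deployed with docker stack deploy -c stack.yml my_project:

version: "3.8"

services:

  db:
    image: yugabytedb/yugabyte:2025.1.0.1-b3
    command: [ "bin/yugabyted", "start", "--background=false" ]
    networks:
      - external_network
    ports:
      # With docker stack deploy, published ports go through the swarm
      # ingress routing mesh and are reachable on every node's host IP.
      - 15433:15433
      - 5433:5433

networks:

  external_network:
    driver: overlay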

I’ll try to get this in the backlog of the yugabyted work stream.

Thanks,
Nikhil


Hi @nmalladi ,

Thank you for your reply, I am still struggling with this, so all effort to get a fix is much appreciated!

I have read your analysis and I agree 100%! It actually all comes down to what we specify as advertise_address.

If I set advertise_address=my_container_name_1 or advertise_address=my_container_name_1.external_network, it will listen on the external network (the overlay network). If I do this, the cluster nodes can communicate and sync. However, the app container and the host fail to reach the yugabyte container, as described before.

If I set advertise_address=my_container_name_1.internal_network, it will listen on the internal network. If I do this, the app container and the host can reach the yugabyte container, but the cluster nodes fail to reach each other.

And, just as you say, this comes down to the fact that the yugabyte container can only listen on one network at a time.

I think we will have to add support in yugabyted for binding to both the bridge and overlay network interfaces; currently, yugabytedb supports listening on only one network, which here is the overlay network.

Are you involved in the development of yugabyte or have some pull with the developers? Since I wrote my last reply in this thread, I have created another thread (with no replies) where I have boiled down the issue to an example that is easy to reproduce. You can find it here:

In addition to this (as it almost seems like a bug, or at least a limitation), I have also created an issue at the yugabyte github page:

I believe that many users may be interested in a solution to this issue.

We generally don’t test the docker swarm way of deploying the YugabyteDB database, as we consider docker deployments for development only.

Well, docker is well established in production environments, so I do not understand what would be different with yugabyte in the mix. In my case, I come from a docker production setup with postgres and am looking to migrate over to yugabyte to get its remote synchronization features. We do not see running outside docker containers as an alternative, and kubernetes is currently not an option either. Docker swarm seemed like the perfect solution.

I’ll try to get this in the backlog of the yugabyted work stream.

That would be great! If we could just get the yugabyted container to listen on two networks at the same time, it would probably resolve the issue. For example, if we could specify two (or more) networks in the advertise_address property.
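
Until then, the only workaround I can think of (just an idea on my side, I have not verified it end to end) would be a small TCP proxy that sits on both networks, so that the app and the host talk to the proxy while the proxy forwards to the yugabyte container over the overlay network. For the YSQL port it could look roughly like the fragment below, added under services: alongside app and db, using the alpine/socat image (the service name db_proxy is hypothetical, and the 5433:5433 publish would then move from db to this service):

  db_proxy:
    image: alpine/socat
    container_name: db_proxy
    networks:
      - internal_network
      - external_network
    # socat listens on 5433 on all interfaces and forwards each connection
    # to the yugabyte container over the overlay network.
    command: [ "TCP-LISTEN:5433,fork,reuseaddr", "TCP:my_container_name_1:5433" ]
    ports:
      - 5433:5433
    restart: always

The same pattern could be repeated for 15433 and 7000 if needed, and the app’s DATABASE_URL would point at db_proxy instead of db.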

Hi again @nmalladi ,

I just wanted to check if you have any updates. Do you know the current status and/or whether we are waiting on something? Do you believe it is possible to get a fix into an upcoming version of YugabyteDB?

Thanks,

Hi @nmalladi ,

I was wondering if there is any update to Yugabyte with this feature planned? Let me know if there is any page where I can follow the work.

Currently, this is preventing us from moving forward with our Yugabyte project.

Thank you!