Deploying without static IPs / hostnames

I’m trying to deploy Yugabyte to Fly.io, which doesn’t have fixed instances, and has no way to address a specific instance (no static IPs, no fixed hostnames). I have a DNS entry that points to all of my Yugabyte master instances (yb-masters.internal), but that’s it.

What do I need to enter for --master_addresses? I’ve tried entering yb-masters.internal but the masters can’t talk with each other.

I also need to set --rpc_bind_addresses to IPv6 ::1 (Fly.io’s internal networking is IPv6), and I can’t tell if I’m getting the syntax for that right until I figure out how to get the masters to talk to each other.

Using this command:

/home/yugabyte/bin/yb-master --fs_data_dirs=/data/master \
	--master_addresses=yb-masters.internal:7100 \
	--replication_factor=3 --logtostderr

I get the error:

Error getting permanent uuid from config peer [yb-masters.internal:7100]: Network error
(yb/util/net/socket.cc:540): recvmsg error: Connection refused (system error 111)

If I instead use this command:

/home/yugabyte/bin/yb-master --fs_data_dirs=/data/master \
	--master_addresses=yb-masters.internal:7100 \
	--rpc_bind_addresses=localhost6:7100 \
	--replication_factor=3 --logtostderr

I get this error:

Unable to init master catalog manager: Illegal state (yb/master/catalog_manager.cc:1644):
Unable to initialize catalog manager: Failed to initialize sys tables async:
None of the local addresses are present in master_addresses yb-masters.internal:7100

Hi @spiffytech

Is there really not even a fixed hostname per server? How are you supposed to differentiate between them (say, in your own code)? Can you ask their support staff? Maybe you could use a script that queries the DNS on startup, collects all the entries, and uses those to start the yb-master/yb-tserver?
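
Something like this, for example (untested sketch; it assumes dig is available in the image and that yb-masters.internal returns one AAAA record per master):

#!/bin/sh
# Resolve every master's IPv6 address and build a "[addr]:7100,..." list at startup
MASTER_ADDRS=$(dig +short AAAA yb-masters.internal | awk '{printf "%s[%s]:7100", s, $1; s=","}')

exec /home/yugabyte/bin/yb-master \
	--fs_data_dirs=/data/master \
	--master_addresses="$MASTER_ADDRS" \
	--replication_factor=3 --logtostderr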

Hi @spiffytech, how did you deploy on fly.io? If there is one DNS entry per application, I guess each yb-master should be its own application. Or add some logic like they do with PostgreSQL (GitHub - fly-apps/postgres-ha: Postgres + Stolon for HA clusters as Fly apps).
I’ll try to get more info from the fly.io people. I haven’t tested YugabyteDB on fly.io, but that’s something I’ve wanted to do.
Franck.

Fly.io doesn’t have a notion of “servers” - it’s just ephemeral containers, possibly with storage attached. (They encourage the “cattle, not pets” mantra so you can’t really get a persistent identity for server01, server02, etc. besides whatever’s stored on disk)

There’s a DNS entry that will give me the IPs of running instances, but it won’t have 3 IPs until all 3 yb-masters have begun booting, introducing a chicken-and-egg problem since I need the 3 IPs to boot the masters. I could be hacky and sleep on boot until all 3 instances had launched and been issued IPs.
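
For reference, that sleep-until-ready hack could look roughly like this (untested sketch):

# Wait until DNS reports all 3 master IPs before continuing with startup
until [ "$(dig +short AAAA yb-masters.internal | wc -l)" -ge 3 ]; do
	echo "waiting for 3 master IPs in DNS..."
	sleep 2
done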

I don’t think Fly guarantees that instance IPs are static; if I redeploy then my instances could get different IPs.

I made a minimal Dockerfile FROM yugabytedb/yugabyte, overriding the ENTRYPOINT with the above commands. I created three instances of the app, each with a persistent volume.

I’ll check out your suggestions.

Hi @spiffytech,

Thank you for trying YB! Would love to get it working with fly.io, I am adding some more folks here.

@spiffytech
I don’t think you can use a DNS name that resolves to multiple addresses, like yb-masters.internal.

I got it working by resolving the addresses like this once scaled to 3 nodes:

# Build "[ipv6]:7100,..." from all the addresses the app's internal DNS entry resolves to
master_addresses=$(nslookup $FLY_APP_NAME.internal | grep -A1 "$FLY_APP_NAME.internal" | awk '/^Address:/{print "["$2"]:7100"}' | paste -sd,)
# Sanity check: print the resolved list
set | grep ^master_addresses
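
From there, something along these lines should work to start the master (rough sketch; I’m assuming fly-local-6pn resolves to the instance’s own private IPv6 address and that yb-master accepts a bracketed IPv6 bind address):

# This instance's own private IPv6 address (assumption: fly-local-6pn resolves to it)
self_addr=$(getent hosts fly-local-6pn | awk '{print $1}')

exec /home/yugabyte/bin/yb-master \
	--fs_data_dirs=/data/master \
	--master_addresses="$master_addresses" \
	--rpc_bind_addresses="[${self_addr}]:7100" \
	--replication_factor=3 --logtostderr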

Here is what I’ve done to deploy to 3 regions:

Take it as a draft; it was my first trial of fly.io.

Excellent, I’ll start working from there. Thanks!

Does Yugabyte assume the master/tserver IPs stay the same? I.e., if a master rebooted and came back on a different IP than before, would that cause problems?

Yes, it would cause problems and require some yb-admin commands. [But see Karthik’s answer below: it’s not a problem if a DNS name is used. For fly.io, AFAIK, a restart keeps the IP and name of the VM, but a scale-down/scale-up changes both.]
What I think would be the most stable deployment:

  • define the 3 yb-masters as 3 apps so that each has its own hostname, and then you can refer to them as --master_addresses=yb-master-1.internal:7100,yb-master-2.internal:7100,yb-master-3.internal:7100
  • define yb-tserver apps with one per region, so that you don’t rely on guessing that fly.io will distribute them correctly. Those can then be scaled, but at least you are sure that you have assigned 3 sets of masters and tservers to 3 regions. The only thing you then have to take care of is scaling each set of tservers to the same size. And don’t forget to define the placement when starting them (like --placement_zone="${FLY_REGION}"); see the sketch after this list.
    The masters, by contrast, you don’t want to scale: having 3 for a 3-region cluster is fine. You want elasticity for the tservers, while making sure YugabyteDB knows which region each one is in, so the cluster is resilient to a region failure.
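
As a rough illustration of that layout (untested sketch; the app names, data paths, and the exact placement flags are my assumptions):

# Each yb-master is its own fly app, so it gets a stable .internal hostname:
/home/yugabyte/bin/yb-master \
	--fs_data_dirs=/data/master \
	--master_addresses=yb-master-1.internal:7100,yb-master-2.internal:7100,yb-master-3.internal:7100 \
	--replication_factor=3 --logtostderr

# One yb-tserver app per region, scaled independently, tagged with its region:
/home/yugabyte/bin/yb-tserver \
	--fs_data_dirs=/data/tserver \
	--tserver_master_addrs=yb-master-1.internal:7100,yb-master-2.internal:7100,yb-master-3.internal:7100 \
	--placement_region="${FLY_REGION}" \
	--placement_zone="${FLY_REGION}" \
	--logtostderr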

@spiffytech - no, the IPs can change (this is the way k8s works). In this case, you would need to use the server DNS names rather than the IP addresses, and YugabyteDB will auto-refresh the IP. The identities of the various servers are stored as internal UUIDs, and the IP address becomes a dynamic attribute in this case (and the hostname a static attribute), whereas normally the IP address is treated as a static attribute.


Note that “cattle” doesn’t work for stateful apps, even when you’re Google. Even in databases that split compute & storage into separate tiers, the storage tier is a “pet” and the compute tier is only semi “pet/cattle”.

I brought this up in the Fly.io forum as well, just to be sure I wasn’t missing anything.

Fly does assign a permanent private IP that gets paired with each storage volume; I just don’t think that’s documented anywhere. And volumes are locked to the region they’re created in, so it sounds like the answer is to spin up dummy containers so I can enumerate their private IPs, then spin up Yugabyte masters configured with those IPs.
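
The flyctl side of that might look roughly like this (sketch only; volume names, sizes, and regions are placeholders):

# One volume per master instance, pinned to its region
fly volumes create master_data --region ord --size 10
fly volumes create master_data --region lhr --size 10
fly volumes create master_data --region syd --size 10

# Scale the (dummy) app to 3 so each instance attaches a volume and is issued its private IP
fly scale count 3

# From inside the private network, enumerate the private IPv6 addresses to use in --master_addresses
dig +short AAAA yb-masters.internal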

Thanks for helping me work through this!


Awesome. As the volume creates the node identity, this is a perfect solution.
Create 3 volumes, get the private IPs, then deploy the masters with this list and scale to 3. Then create volumes for data, and deploy and scale the tservers with this list of masters.
For a single region you could also use yugabyted, which handles the admin of adding each node to the list. It just needs to check whether it is the first node, or else get one of the other IPs to join; that should be possible from nslookup.
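
Maybe something like this for the single-region yugabyted variant (untested sketch; it assumes fly-local-6pn resolves to the instance’s own address and uses yugabyted’s --join flag):

# This node's own private IP (assumption: fly-local-6pn resolves to it)
SELF=$(getent hosts fly-local-6pn | awk '{print $1}')

# Pick one already-running node from DNS, excluding ourselves; empty if we are the first node
JOIN=$(nslookup $FLY_APP_NAME.internal | grep -A1 "$FLY_APP_NAME.internal" | awk '/^Address:/{print $2}' | grep -v "^${SELF}$" | head -n1)

if [ -z "$JOIN" ]; then
	# First node: bootstrap a new cluster
	/home/yugabyte/bin/yugabyted start --base_dir=/data
else
	# Later nodes: join an existing node
	/home/yugabyte/bin/yugabyted start --base_dir=/data --join="$JOIN"
fi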
Thanks a lot. I’ll also test and document this for the community. Your feedback is a great help.
