Increase disk size of nodes in a running YugaByte DB cluster


Posting this as it may be generally useful; one of the community users just sent this in. They deployed a YugaByte DB cluster across many nodes on AWS and wanted to increase the disk size after the fact. The disks were EBS volumes. Here is how this can be done, enjoy!

Step A: Change volume size on AWS

  1. Open the “Instances” page of the AWS EC2 console and select each YB node in turn.

  2. In the bottom pane, on the right, the volumes attached to this instance are listed. Take a look at them: /dev/sda is going to be the root volume, and the attached data volumes will be named something else, like /dev/xvdX.

  3. Click the volume (a gray summary of the volume opens). In the gray summary, click the volume’s EBS ID (this opens the volume page). On the volume page, click the “Actions” button and then “Modify”.

  4. This opens a popup form; type in the new size and click the “Modify” button below.
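The console clicks above can also be done with the AWS CLI, which is handy when you have many nodes. A minimal sketch, assuming a hypothetical volume ID and target size (substitute your own); the `run` helper only prints each command so you can review before executing:

```shell
#!/bin/sh
# Sketch of Step A using the AWS CLI instead of the console.
# VOLUME_ID and NEW_SIZE_GB are placeholders -- substitute your own values.
VOLUME_ID="vol-0123456789abcdef0"   # hypothetical EBS volume ID
NEW_SIZE_GB=500                     # hypothetical target size in GiB

# Dry-run helper: prints each command instead of executing it.
# Remove the "echo +" once the values above are real.
run() { echo "+ $*"; }

# Request the volume resize.
run aws ec2 modify-volume --volume-id "$VOLUME_ID" --size "$NEW_SIZE_GB"

# Check progress; repeat until the state is "optimizing" or "completed".
run aws ec2 describe-volumes-modifications --volume-ids "$VOLUME_ID" \
  --query 'VolumesModifications[0].ModificationState'
```

Note that the volume is usable while the modification is in the “optimizing” state, so you can move on to Step B without waiting for “completed”.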

Step B: Resize the file system on each node

  1. Log in to the instance. From here on we are following Amazon’s instructions: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/recognize-expanded-volume-linux.html

  2. Inside Linux, the volumes will be named something like /dev/nvme0n1. Discover which devices are your data volumes using fdisk, lsblk, or a similar tool.

  3. In some cases you need to grow the partition manually (this was not the case with our volumes yesterday); to do so, run growpart /dev/nvme1n1 1 (i.e. the first partition on that volume).

  4. Finally, the file system needs to be resized, so use the tool for the filesystem you are running. In our case the tool was xfs_growfs; for ext filesystems it would be resize2fs.

  5. Verify that all is well by running something like df -h.
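The in-node steps above can be sketched as a short script. The device name and partition number are assumptions (they will differ per instance; check with lsblk first), and the `run` helper prints the commands rather than executing them so nothing runs by accident:

```shell
#!/bin/sh
# Sketch of Step B on one node. DEVICE and PART are assumptions --
# replace them with what lsblk actually shows on your instance.
DEVICE="/dev/nvme1n1"   # hypothetical data volume
PART=1                  # hypothetical partition number

# Dry-run helper: prints each command instead of executing it.
run() { echo "+ $*"; }

# Grow the partition to fill the enlarged volume (skip this if the
# filesystem sits directly on the device, with no partition table).
run sudo growpart "$DEVICE" "$PART"

# Grow the filesystem: xfs_growfs for XFS, resize2fs for ext4.
run sudo xfs_growfs "${DEVICE}p${PART}"
# run sudo resize2fs "${DEVICE}p${PART}"   # ext4 alternative

# Confirm the new size is visible.
run df -h
```

Both growpart and the filesystem-grow tools work online, so the YB node does not need to be stopped for this step.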
