CPU utilisation above 90%

While dropping or creating an index on a column, or running other DDL operations on the DB, CPU utilisation goes above 90% and yb-master goes to 400%.
Because of this, we are facing connection-closed issues.
Kindly suggest your thoughts.

Hello @Ruban

What is the hardware on the tserver & master? (What CPU model & how many vCPUs?)

How many tablets are on the dropped index?
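
For example, something like this should show it (hypothetical index name; this assumes the yb_table_properties() YSQL function is available in your version):

-- Replace my_index with the name of the index you dropped.
SELECT num_tablets FROM yb_table_properties('my_index'::regclass);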

When creating a new index, it has to be backfilled in the background for all existing rows, which is why it can be CPU-heavy.

Dropping an index, on the other hand, shouldn’t be that heavy, since it’s mostly deleting files and updating metadata.
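
To illustrate with hypothetical names, the backfill of existing rows is what costs CPU, not the statements themselves:

-- Backfills every existing row of the table in the background (CPU-heavy on large tables).
CREATE INDEX idx_orders_customer ON orders (customer_id);
-- Mostly metadata changes plus file deletion, so comparatively cheap.
DROP INDEX idx_orders_customer;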

Hi Dorian,
I’m using a single-node YugabyteDB.
Please find the hardware details below:
Model name: Intel(R) Xeon(R) Gold 5317 CPU @ 3.00GHz
32 GB RAM
Disk space: 200 GB (35 GB used)
8 CPU cores
Please find the top command output below, captured after dropping the index:
top - 13:48:59 up 187 days, 10:00, 2 users, load average: 10.89, 4.01, 3.04
Tasks: 406 total, 1 running, 404 sleeping, 1 stopped, 0 zombie
%Cpu0 : 98.7 us, 1.3 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu1 : 99.3 us, 0.7 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu2 : 99.0 us, 1.0 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu3 : 98.7 us, 1.3 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu4 : 99.7 us, 0.3 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu5 : 99.7 us, 0.3 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu6 : 99.3 us, 0.7 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu7 : 99.3 us, 0.7 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 32761424 total, 14023560 free, 17473704 used, 1264160 buff/cache
KiB Swap: 4194300 total, 86716 free, 4107584 used. 14877256 avail Mem

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
46124 root 20 0 3552260 910260 6328 S 770.9 2.8 503:30.64 yb-master
46187 root 20 0 159.6g 3.8g 6696 S 18.9 12.2 3066:45 yb-tserver

What version of YugabyteDB are you using?

What type of disk is it?


Is there a way for us to replicate this? For example, getting the schema, filling it with test data, and dropping the index.
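
Something along these lines would be enough (a minimal, hypothetical sketch; the real schema and data volume from your side would be better), run through ysqlsh while watching top:

-- Hypothetical table; replace with your actual schema.
CREATE TABLE repro (id bigint PRIMARY KEY, val text);
-- Fill with test data.
INSERT INTO repro SELECT i, md5(i::text) FROM generate_series(1, 1000000) AS i;
-- Create the index (backfills all existing rows).
CREATE INDEX repro_val_idx ON repro (val);
-- Drop it and watch yb-master / yb-tserver CPU.
DROP INDEX repro_val_idx;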

Disk type:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
└─OSVG-datalv 253:6 0 175G 0 lvm /data

On the local server we are using yugabyte-2.16.2.0, but in prod we are using 2024.1.0. (In prod we are also facing this connection issue while creating two day-wise partition tables with some indexes using an sh script scheduled in cron.)
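
For context, the cron job runs DDL roughly like the following (hypothetical table and column names, simplified from the actual sh script):

-- events is a hypothetical parent table partitioned by range on event_time.
CREATE TABLE IF NOT EXISTS events_2024_09_01
PARTITION OF events
FOR VALUES FROM ('2024-09-01') TO ('2024-09-02');
-- Indexes are created on the new partition right after it is added.
CREATE INDEX IF NOT EXISTS events_2024_09_01_user_idx ON events_2024_09_01 (user_id);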

I don’t understand this output. What type of disk is it? HDD or SSD/NVMe?

2.16 is end of life, see: YugabyteDB releases | YugabyteDB Docs

Please use a newer release, even locally.

Please open another thread with as many details as possible if this is a separate issue/error from this one.

Sure, Dorian. But we were using PostgreSQL for the past 4 years and did not face issues like this. We have been facing this kind of issue since we moved to YugabyteDB. PostgreSQL worked fine with a much smaller configuration compared to YugabyteDB.

The storage / sharding / replication layer differs from PostgreSQL to support the extra features we add. This results in higher overhead in some cases compared to single-node PostgreSQL. But this still shouldn’t happen, especially when dropping an index.