How to cleanly delete a tserver


We’ve created a three-node YugabyteDB cluster with one master and one tserver on node 1, and one tserver each on nodes 2 and 3.

We start the master with this command:
yb-master --master_addresses= --rpc_bind_addresses= --fs_data_dirs=/var/yb_data/master --webserver_port=8080 --replication_factor=1

And each tserver like this:
yb-tserver --tserver_master_addrs= --rpc_bind_addresses= --pgsql_proxy_bind_address= --fs_data_dirs=/var/yb_data/tserver
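For illustration, a filled-in launch could look like the following (the hostnames and ports here are hypothetical placeholders, not our actual configuration; 7100, 9100, and 5433 are the usual defaults for master RPC, tserver RPC, and the YSQL proxy):

```shell
# Hypothetical example values; substitute your own addresses and ports.
# Master on node 1:
yb-master \
  --master_addresses=node1:7100 \
  --rpc_bind_addresses=node1:7100 \
  --fs_data_dirs=/var/yb_data/master \
  --webserver_port=8080 \
  --replication_factor=1

# Tserver on each node (shown here for node 2):
yb-tserver \
  --tserver_master_addrs=node1:7100 \
  --rpc_bind_addresses=node2:9100 \
  --pgsql_proxy_bind_address=node2:5433 \
  --fs_data_dirs=/var/yb_data/tserver
```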

Last week we added a tserver on another node, but afterwards we stopped it by killing the process.

Since then the database has been in a stalled state, with I/O errors in the logs.

Is there a command that can cleanly stop and delete a specific tserver?

Thanks for the help

Hi @weuw

Welcome to YugabyteDB Forum!

See the docs on how to replace a yb-tserver. In this case you only need to do the removal part.

Technical Support Engineer

Thanks for the quick answer.

When you say the removal part, do you mean this command:
~/master/bin/yb-admin -master_addresses $MASTERS change_blacklist REMOVE node1:9100

Or do you mean blacklisting it:
~/master/bin/yb-admin -master_addresses $MASTERS change_blacklist ADD $OLD_IP:9100

Hi @weuw

It’s the reverse: you add the server to the blacklist first, and the last step is removing it from the blacklist. Just like the steps on the page:

  1. Add to blacklist
  2. Wait for rebalance
  3. Kill the server
  4. Remove from blacklist
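The steps above can be sketched with yb-admin like this (a hedged sketch: the $MASTERS list and node4:9100 are assumed placeholders, and I’m using get_load_move_completion to track the rebalance; check its exact output format on your version before scripting against it):

```shell
# Assumed placeholders; adjust to your master addresses and the tserver to remove.
MASTERS="node1:7100,node2:7100,node3:7100"
OLD_TSERVER="node4:9100"

# 1. Add the tserver to the blacklist so tablets migrate off it.
yb-admin -master_addresses "$MASTERS" change_blacklist ADD "$OLD_TSERVER"

# 2. Wait for the rebalance to finish (completion reaches 100%).
until yb-admin -master_addresses "$MASTERS" get_load_move_completion | grep -q '100'; do
  sleep 10
done

# 3. Stop the yb-tserver process (run this on the node being removed).
# pkill yb-tserver

# 4. Remove the entry from the blacklist once the server is gone.
yb-admin -master_addresses "$MASTERS" change_blacklist REMOVE "$OLD_TSERVER"
```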

You also need to add --replication_factor=1 when you start the yb-tserver.
But why are you using RF1? You will have data loss and unavailability when a server goes down.

What are you trying to achieve?
Do you need help with cluster setup and/or schema?
I can help here, on Slack, or over a call.

Thanks for your help. Actually we want to compare MariaDB performance vs YugabyteDB in our application, where some batches take very long to execute.

With this in mind, we’ll vary the number of YugabyteDB nodes.

As it is only for testing, the replication factor is not really a problem for now.

We’ll keep in touch :slight_smile:

I think it’s best to compare production-like scenarios, otherwise you’ll be comparing apples to oranges. One example is the replication factor; another is the database schema when sharding.

Always here to help!

Hi Dorian

Actually, our tests showed very poor performance on our 3-node cluster compared to MariaDB (which is 10 times faster).

The only problem I see is the remote HDD drives we use. Is there special tuning of YugabyteDB parameters we can try to improve performance, or is the only solution to get local SSDs?


Hi @weuw

Sorry, I somehow missed this.

You are comparing apples to oranges. For example:

Did you configure MariaDB with synchronous replication and 3 replicas?
Can you try with 6 nodes? (Hint: MariaDB won’t scale horizontally.)

SSDs are always required.

This depends on the workload (multi-region, etc.) and the application. Often we have to tweak the table schema, indexes, queries, etc. to make it more scalable.

So can you describe your application and your requirements, so we can see how to best use the database?