Which has better write performance: a YugabyteDB cluster or a single node?

I deployed a YugabyteDB cluster with 3 nodes in Kubernetes, and I also have a single-node YugabyteDB. My test result with JMeter JDBC requests is that for SELECT statements the cluster's throughput is about 50% higher than the single node, but for UPDATE statements it is almost equal to the single node. Is this kind of result normal, or did I test the wrong way?

What is the cluster configuration (example: what’s the RF?) here?

What is the hardware configuration (vCPUs, memory, HDD/SSD)?

It depends on exactly what you did and on which node you connected to.

I would expect the cluster's write throughput to be slower because of the synchronous-replication overhead.

You probably did it wrong.

In a nutshell, a single node of PostgreSQL will be faster. The problem is that you can't horizontally scale it, while YugabyteDB has better replication semantics (multi-region, sharding, async, read replicas, or a combination of all of these).

On small nodes, PostgreSQL should also be faster.

It's better to use existing benchmark tools like Benchmark YSQL performance using TPC-C | YugabyteDB Docs.

See also the hardware recommendations in Deployment checklist for YugabyteDB clusters | YugabyteDB Docs.

The RF is 3.

vCPUs = 2; memory: one node has 8G, the other two have 4G; the disks are HDD on all nodes.

The node I connect to is always the one with 8G memory.
The selects and updates run against one table A with 250k+ rows. Selects start at id=100 with an increment of 100, so the select condition is id=100, 200, 300, …
The update workload actually consists of 3 different request types (add/insert, update, and delete) on ids starting at 260,001 with an increment of 1.
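To be concrete, the JMeter samplers issue statements roughly like the following (the value column is only a placeholder, since table A's real schema isn't shown here):

select * from A where id = ?;            -- ? = 100, 200, 300, ...
insert into A(id, value) values (?, ?);  -- "add" requests, ? = 260001, 260002, ... (column names assumed)
update A set value = ? where id = ?;     -- update requests on the same id range
delete from A where id = ?;              -- delete requests on the same id range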

Yes, it is a little bit slower than the single node.

So, no matter how many nodes I scale out to, or how much I scale up the configuration of each node, the cluster's write throughput will never be higher than the single node's, right?

Note that HDD disks are strictly not supported; see Deployment checklist for YugabyteDB clusters | YugabyteDB Docs.

Incorrect. With sharding and by spreading the writes, you'll be able to scale writes linearly with the number of nodes.
The easiest way is to use a table like:

create table test(key bigint primary key, value text);

See the DocDB sharding layer | YugabyteDB Docs.
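The key column is hash-sharded by default, so single-row writes to different keys land on different tablets, and therefore on different tablet leaders and nodes. A minimal sketch using the test table above:

insert into test(key, value) values (1001, 'a');  -- goes to the tablet owning hash(1001)
insert into test(key, value) values (1002, 'b');  -- likely a different tablet, possibly a different node
update test set value = 'c' where key = 1001;     -- also a single-tablet write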

How many nodes should I scale out to when testing on-premises if I want to see the effect of linearly scaling writes?

Assuming RF3, you’ll start with 3 nodes.

Assuming you want to scale to 40 nodes.

Create a table like:

create table test(key bigint primary key, value text) SPLIT INTO 120 TABLETS;

And assuming you spread the writes across multiple keys (and therefore multiple nodes), you'll be able to scale the number of writes linearly up to 40+ nodes.
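For a rough sense of the math: 120 tablets at RF 3 means 360 tablet peers, so on a 40-node cluster that is about 9 peers and 3 tablet leaders per node, and each node takes a similar share of the write load. One simple, hypothetical way to generate spread-out writes from a single session (a real test like your JMeter setup would use many concurrent clients):

-- insert 1,000,000 rows with distinct keys; hash sharding on "key"
-- spreads them across all 120 tablets and their leader nodes
insert into test(key, value)
select i, 'payload-' || i::text
from generate_series(1, 1000000) as i;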