Large cluster perf #1 - 25 nodes

We are currently doing performance benchmarking with large YugaByte clusters, and are excited to share the results of the first benchmark in this series.

Setup

Here is the benchmark setup:

  • 25 nodes in Google Cloud Platform (GCP)
  • Each node is an n1-standard-16
    • 16 vCPUs
    • Intel(R) Xeon(R) CPUs @ 2.20GHz
    • 60 GB RAM
    • 2 x 375 GB direct-attached SSDs
  • Replication factor = 3
  • YugaByte Cassandra key-value workload
    • 40-byte keys
    • 16-byte values

You can find the source code for the key-value application here, along with some documentation on developing apps on YugaByte.
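For concreteness, here is a minimal Python sketch of the workload shape described above, using the standard cassandra-driver package against YugaByte's Cassandra-compatible API. The keyspace and table names (kv_bench, kv) are illustrative only, not the ones the linked Java sample app uses.

```python
# Minimal sketch of the key-value workload shape: 40-byte keys, 16-byte
# values, prepared statements. Names are illustrative, not the sample app's.
import os
from cassandra.cluster import Cluster

cluster = Cluster(['127.0.0.1'], port=9042)  # Cassandra-compatible API port
session = cluster.connect()

session.execute("CREATE KEYSPACE IF NOT EXISTS kv_bench WITH replication = "
                "{'class': 'SimpleStrategy', 'replication_factor': 3}")
session.execute("CREATE TABLE IF NOT EXISTS kv_bench.kv (k text PRIMARY KEY, v blob)")

write_stmt = session.prepare("INSERT INTO kv_bench.kv (k, v) VALUES (?, ?)")
read_stmt  = session.prepare("SELECT v FROM kv_bench.kv WHERE k = ?")

for i in range(1000):
    key   = f"key:{i}".ljust(40)   # pad keys to 40 bytes
    value = os.urandom(16)         # random 16-byte value
    session.execute(write_stmt, (key, value))

row = session.execute(read_stmt, ("key:0".ljust(40),)).one()
```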

100% Reads

  • 1.3M read ops/sec
  • Around 0.3ms latency on the server side
  • 65% average CPU on the YugaByte nodes

100% Writes

  • 500K write ops/sec
  • Around 1.5ms latency on the server side
  • 67% average CPU on the YugaByte nodes

We see linear scale-out as we go from 3 nodes all the way to 25 nodes. Stay tuned for results with larger cluster sizes, as well as YCSB benchmarks!
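To put those cluster-wide numbers in per-node terms, here is the back-of-the-envelope arithmetic they imply (a simple calculation, not an additional measurement):

```python
# Per-node throughput implied by the cluster-wide results above.
nodes, rf = 25, 3
read_ops, write_ops = 1_300_000, 500_000

print(read_ops / nodes)         # => 52000.0 client reads/sec per node
print(write_ops * rf / nodes)   # => 60000.0 replica writes/sec per node (RF=3 fan-out)
```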

If you guys can benchmark with in-memory and against ScyllaDB too, that would be great.

Will do @ddorian43 - give us some time, but we’ll get to it soon.

By in-memory - did you mean in-memory tables?

Meaning a data set smaller than memory. So after warmup, you're testing the caching layer when doing read requests only.
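A minimal sketch of that kind of test, reusing the session and read_stmt from the earlier snippet: one full warmup pass so every key lands in the cache, then a timed read-only phase.

```python
# Sketch of a cache-resident read test: warm up, then measure reads only.
# Assumes the `session` and `read_stmt` from the earlier snippet, and that
# the full key set fits in the block cache.
import time

NUM_KEYS = 1000
keys = [f"key:{i}".ljust(40) for i in range(NUM_KEYS)]

for k in keys:                     # warmup pass: pull every key once
    session.execute(read_stmt, (k,))

start, ops = time.time(), 0
while time.time() - start < 30:    # 30-second measured window
    session.execute(read_stmt, (keys[ops % NUM_KEYS],))
    ops += 1
print(f"~{ops / (time.time() - start):.0f} reads/sec (single client thread)")
```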

I was curious about @ddorian43's question. Did YB do this benchmark?

Hi @harsha549,

Yes, we did try this out, and ascertained that we can serve sub-millisecond latencies on a public cloud. Here is a post detailing this (it is an older post; our performance is much better now than in that version): Achieving Sub-ms Latencies on Large Datasets in Public Clouds | Yugabyte

Summary:

We loaded about 1.4TB of data across 4 nodes, and configured the block cache on each node to be only 7.5GB. On these four 8-CPU machines, we were able to get about 77K ops/second with average read latencies of 0.88 ms.
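To make explicit why that setup exercises the disk path rather than a purely in-memory one, the cache-to-data ratio works out to about 2% (simple arithmetic from the numbers above):

```python
# Cache coverage implied by the summary: ~30 GB of block cache vs ~1.4 TB of data.
data_bytes  = 1.4e12          # ~1.4 TB loaded across the cluster
cache_bytes = 4 * 7.5e9       # 4 nodes x 7.5 GB block cache each
print(f"{100 * cache_bytes / data_bytes:.1f}% of the data fits in cache")  # ~2.1%
```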