Server is not responding when running a query in ysqlsh

Hello. The server is not responding when running a query in ysqlsh, and the ysqlsh process consumes all of the server's memory.
If I run the query in DBeaver or use EXPLAIN ANALYZE, the server works correctly. I think the problem occurs when ysqlsh tries to output 15 million rows in the CLI.
Can you suggest a reason for this behavior?

Positive case

ysqlsh (11.2-YB-2.21.0.0-b0)
yugabyte=# explain analyze select * from customer;
QUERY PLAN
---------------------------------------------------------------------------------------------------------------------
Seq Scan on customer  (cost=0.00..100.00 rows=1000 width=1014) (actual time=5.981..40056.535 rows=15000000 loops=1)
Planning Time: 3.617 ms
Execution Time: 45442.748 ms
Peak Memory Usage: 24 kB
(4 rows)

Negative case

ysqlsh (11.2-YB-2.21.0.0-b0)
yugabyte=# select * from customer;

Hi @godofevil

This might be it. The query worked with EXPLAIN ANALYZE, but a plain SELECT is probably trying to load all the rows into memory to display them in ysqlsh. Try adding a LIMIT to the query, or please specify the exact error you're getting.
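
For example, to confirm with a small result set first (100 is an arbitrary row count):

yugabyte=# select * from customer limit 100;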

While the server streams rows as it executes the query, a libpq-based client such as ysqlsh by default accumulates the entire result set in client memory before displaying it, which is what blows up your memory. Either return less data with LIMIT, or page through the results with server-side cursors: Cursors | YugabyteDB Docs
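
I haven't run this against your table, but a cursor-based session would look roughly like the sketch below (cust_cur is just an illustrative name):

yugabyte=# BEGIN;
yugabyte=# DECLARE cust_cur CURSOR FOR SELECT * FROM customer;
yugabyte=# FETCH 1000 FROM cust_cur;  -- returns the next 1000 rows; repeat until empty
yugabyte=# CLOSE cust_cur;
yugabyte=# COMMIT;

This keeps only one batch of rows on the client at a time; the server holds the cursor position for the duration of the transaction.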

OOM is disabled on the server. Is it possible to force ysqlsh not to load all rows into memory?

Please see the Stack Overflow answer quoted below.

This should generally apply to the ysqlsh CLI too, since ysqlsh is based on psql and uses essentially the same libpq client library underneath.

I haven’t personally tried it, but I believe this should address your issue.

The relevant snippet from the answer to the Stack Overflow post says:

psql accumulates complete results in client memory by default. This behavior is usual for all libpq based Postgres applications or drivers. The solutions are cursors - then you are fetching only N rows from server. Cursors can be used by psql too. You can change it by setting FETCH_COUNT variable, then it will use cursors with batch retrieval size FETCH_COUNT.

postgres=# \set FETCH_COUNT 1000
postgres=# select * from generate_series(1,100000); -- big query
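
Applied to your case, the ysqlsh session should look something like this (1000 is just a starting batch size to tune):

yugabyte=# \set FETCH_COUNT 1000
yugabyte=# select * from customer;

With FETCH_COUNT set, ysqlsh retrieves and prints the result in batches of 1000 rows instead of buffering all 15 million rows at once.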

Can you please give this a try?
