Number of connections on a single node

Hi
I have a 5-node cluster where tservers are running and data is being inserted. When I run select count(1) from pg_stat_activity on the 1st node, it shows 200 connections; however, on the other tserver nodes the output of select count(1) from pg_stat_activity comes back as 1.
I am using YugabyteDB version 2.20.1.3.
I could not understand why. Please help me understand.

pg_stat_activity shows information from the SQL layer, per node. So if you are connected to one node only, and run your inserts there, only the pg_stat_activity on that node will show something. The other nodes are running reads and writes as the storage layer, but this is not reported in pg_stat_activity.
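A quick way to confirm this is to run the same count on each node and compare (a small sketch, assuming you can open a connection to every node):

-- run while connected to each node in turn; the result is local to that node
select count(*) from pg_stat_activity;
-- list the tservers of the cluster (the YSQL function the smart drivers use for load balancing)
select * from yb_servers();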

Thank you Franck for your reply. My connection string mentions all nodes; it looks like this:
://node1:5433,node2:5433,node3:5433,node4:5433,node5:5433/yugabyte?loadBalanceHosts=true
I am seeing high CPU on a single node only, node 1. I also tried changing the position of the nodes in the connection string to check whether the CPU usage shifts to another node. It did not change; CPU remained high on node 1 only.
Inserts happen through a stored proc whose logic is either Insert or Update. Could that be the reason behind the slow inserts?

Subhankar,

When you created the table for your application:

  1. Did you create the database as COLOCATED?

  2. Did you specify ASC or DESC on the first column of your PRIMARY KEY? This is possible if the table schema came from another PG database.

  3. What is the cardinality of the first column?

Any of these 3 conditions could cause all data to be written to a single tablet rather than being distributed across all nodes, and thus lead to a single node being saturated as you are describing to us. Can you share the schema for the table that is the target for your INSERT/UPDATE?
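For reference, here is a hypothetical sketch (not your actual table) of the difference: declaring ASC or DESC on the first primary-key column gives range sharding, where monotonically increasing keys tend to land on a single tablet, while HASH (the YSQL default for a non-colocated table) spreads rows across tablets:

create table t_range (k1 text, k2 text, primary key (k1 ASC, k2 ASC));   -- range sharding on k1
create table t_hash  (k1 text, k2 text, primary key (k1 HASH, k2 ASC));  -- hash sharding on k1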

–Alan

Hi Alan,

  1. Database is not created as colocated.
  2. I did not specify ASC or DESC on the first column of the primary key. It is a composite primary key: (varchar, varchar, timestamp with time zone).
  3. We are firing around 15 million records in a single day.

Table Structure is as below:

create table test2 (
column1 character varying(200),
column2 character varying(50),
column3 timestamp with time zone,
column4 character varying(12),
column5 character varying(200),
column6 character varying(12),
column7 character varying(20),
column8 character varying(2),
column9 boolean,
column10 character varying(255),
column11 character varying(20),
column12 character varying(255),
column13 character varying(4),
column14 character varying(100),
column15 character varying(60),
column16 character varying(35),
column17 character varying(255),
column18 character varying(11),
column19 character varying(100),
column20 numeric,
column21 character varying(40),
column22 character varying(200),
column23 character varying(11),
column24 numeric,
column25 character varying(6),
column26 character varying(6),
column27 numeric,
column28 boolean,
column29 character varying(99),
column30 numeric,
column31 numeric,
column32 character varying(50),
column33 character varying(50),
column34 boolean,
column35 boolean,
column36 boolean,
column37 character varying(12),
column38 character varying(2),
column39 character varying(12),
column40 character varying(200),
column41 character varying(200),
column42 character varying(200),
column43 character varying(200),
column44 smallint,
column45 smallint,
column46 smallint,
column47 smallint,
column48 smallint,
column49 smallint,
column50 smallint,
column51 timestamp without time zone,
column52 timestamp without time zone,
column53 character varying(28),
column54 character varying(20),
column55 character varying(99),
column56 character varying(255),
column57 boolean,
column58 character varying(500),
column59 character varying(500),
column60 character varying(200),
column61 character varying(20),
column62 character varying(3),
column63 timestamp without time zone,
column64 timestamp without time zone,
column65 character varying(1),
column66 character varying(2),
column67 character varying(10),
column68 numeric,
column69 character varying(40),
column70 character varying(40),
column71 character varying(40),
column72 character varying(40),
column73 character varying(8),
column74 smallint,
column75 smallint,
column76 smallint,
column77 character varying(50),
column78 character varying(50),
column79 character varying(70),
column80 character varying(15),
column81 boolean,
column82 character varying(10),
column83 character varying(10),
column84 character varying(50),
column85 character varying(200),
column86 character varying(200),
column87 character varying(128),
column88 character varying(128),
column89 character varying(200),
column90 character varying(200),
column91 character varying(200),
column92 character varying(7),
column93 character varying(10),
column94 character varying(4),
column95 character varying(16),
column96 character varying(200),
column97 character varying(200),
column98 character varying(50),
column99 character varying(50),
column100 date,
column101 character varying(27),
column102 character varying(4),
column103 character varying(200),
column104 character varying(200),
primary key (column14, column4, column3))
Partition by RANGE (column3);

Subhankar,

So the table that you displayed does not actually hold any data since you are using PARTITION BY RANGE on column3.

Is it possible to share the schema for the sub-partitions including any tablespace definitions that you have created?

And how many different values does “column14” actually contain?

–Alan

Yes, the partition table has the data. The partition was created as:
create table test2_part partition of test2 for values from ('2024-02-01') to ('2024-03-01');
We are inserting into the parent table for both inserts and updates. If we insert directly into the partition table, will it perform better?

Column14 has 15 million different values.

While inserting data we call a procedure with five input arguments (column14 character varying, column92 character varying, column3 timestamp with time zone, json_data jsonb, action_preference character varying). Its logic is Insert or Update, based on action_preference.

Now I have managed to distribute the connections better after adding a property to the connection string (loadBalanceHosts=true&yb-servers-refresh-interval=300). Not evenly, but a bit better than before: now if node 1 has 150 connections, the other nodes have around 70-80 connections.

However, the inserts are not up to the mark. Is it because of the procedure I am using for inserts/updates?

Subhankar,

The method of writing through the top-level table is just fine; the internal query-rewrite component takes care of this seamlessly. The stored procedure, however, results in singleton INSERTs/UPDATEs, which is not the most efficient method for high performance. Generally, for best performance, it is a good idea to batch INSERTs together.

If you are using Java, you can have JDBC rewrite multiple INSERTs as a single batch, so you could move this work to your application and then process the UPDATEs separately.
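For illustration, with the PostgreSQL JDBC driver (and the YugabyteDB smart driver, which is based on it) this rewriting is typically enabled with reWriteBatchedInserts=true, and what the batch effectively sends is a multi-row INSERT along these lines (a sketch using only a few of your columns; the full column list is assumed):

INSERT INTO schema_name.test2 (column14, column4, column3, column5)
VALUES ($1, $2, $3, $4),
       ($5, $6, $7, $8),
       ($9, $10, $11, $12);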

–Alan

Hi Alan,
In the stored proc we are trying to implement upsert logic (insert, on conflict update). From the application we cannot tell whether a record is an insert or an update.
So inside the stored proc we parse the jsonb object and build the insert/update statement. The payload size is around 5 KB.
We are using SAN storage and we are seeing high disk utilization (checked with iostat -x 1), although the CPU % shown by top is low, around 11-13%.

Please help me understand where the inserts are going wrong, and how to improve insert performance.

Can you show the code & table schema so we know exactly what you're doing?

Local disks are faster. You are replicating twice this way (once at the YugabyteDB level and again at the filesystem level).

Can you fix the disks?

Hi @subh14, for UPSERT logic, the best I've seen is using a WITH clause and UPDATE/INSERT … RETURNING, because those can be batched internally. For different reasons, ON CONFLICT and procedures flush each write without batching.
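A minimal sketch of that pattern, using a few hypothetical columns from the table above (the real statement would carry the full column list and the jsonb extraction):

WITH upd AS (
  UPDATE schema_name.test2
     SET column5 = $4
   WHERE column14 = $1 AND column4 = $2 AND column3 = $3
   RETURNING column14
)
INSERT INTO schema_name.test2 (column14, column4, column3, column5)
SELECT $1, $2, $3, $4
WHERE NOT EXISTS (SELECT 1 FROM upd);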
I have some examples here:
INSERT ON CONFLICT Series’ Articles - DEV Community

Hi Dorian,
We are in a discussion about the disks. The table structure is the same one posted above; the stored function I am using for Insert/Update is below.


Function Definition:

CREATE OR REPLACE FUNCTION schema_name.test2_func(column14_in character varying, column4_in character varying, column3_in timestamp with time zone, json_data jsonb, action_preference character varying)
RETURNS character varying
LANGUAGE plpgsql
AS $function$
DECLARE
BEGIN
LOOP
IF action_preference = 'I' OR action_preference = 'i' THEN
BEGIN
INSERT INTO schema_name.test2 (column14, column4, …) VALUES (column14_in, column4_in, column3_in, (json_data->>'column6')::character varying, (json_data->>'column7')::character varying, (json_data->>'column8')::character varying, …);
RETURN 'INSERTED';
EXCEPTION WHEN unique_violation THEN
UPDATE schema_name.test2 SET column4 = (json_data->>'column4')::character varying, column5 = (json_data->>'column5')::character varying WHERE column14 = column14_in AND column4 = column4_in AND column3 = column3_in;
RETURN 'INSERT_CONFLICT_UPDATED';
END;
ELSE
UPDATE schema_name.test2 SET column8 = (json_data->>'column8')::character varying, column9 = (json_data->>'column9')::character varying, column10 = (json_data->>'column10')::character varying WHERE column14 = column14_in AND column4 = column4_in AND column3 = column3_in;
IF found THEN
RETURN 'UPDATED';
ELSE
BEGIN
INSERT INTO schema_name.test2 (column1, column2, …) VALUES (column1, column2, column3, (json_data->>'column4')::character varying, (json_data->>'column5')::character varying, (json_data->>'column6')::character varying, …);
EXCEPTION WHEN unique_violation THEN
NULL; -- duplicate key: ignore, the row already exists
END;
RETURN 'UPDATED_OR_INSERTED';
END IF;
END IF;
END LOOP;
END;
$function$
;

Thank you Franck for your suggestion; I will also test with the WITH clause and UPDATE approach.

This table has 8 indexes: 7 of them on varchar columns and 1 on a date column. And we are receiving around 7K TPS.

Any suggestion on how to improve inserts?

Check out our best practices docs: Best practices for YSQL applications | YugabyteDB Docs

Hello, :wave:

In my opinion, it seems there might be an issue with the connection distribution in your cluster. Ensure your load balancer is correctly configured to distribute connections evenly across all nodes, and check for any specific settings or constraints on the 1st node that might be causing this discrepancy.

I hope this will help you.

Regards
Eden :smiling_face_with_three_hearts: