Error while inserting into newly created table

Hello, I am running Yugabyte on a single node, and I am executing, via ysqlsh, an SQL script on a “fresh” database that creates the database schema and inserts data into some of the newly created tables. The SQL script therefore contains both DDL and DML statements, interspersed (for example, a table is created, then there may be some INSERT statements for that table, then another table is created, and so forth).

I have seen the following happen once, after this SQL was executed:
CREATE TABLE ptcraneregion (
id integer,
pc numeric,
rc numeric,
pd numeric,
rd numeric,
sd numeric,
CONSTRAINT pk_pt_crane_region PRIMARY KEY(id)
);

insert into ptcraneregion values (1,'A',0.000088,3.45,2.27,0.205,1.49);

with the following appearing in the ~/var/logs/tserver/postgresql-2022-03-30_000000.log file:
I0330 09:49:57.246927 297368 ybccmds.c:482] Creating Table my_database.public.ptcraneregion
2022-03-30 09:49:57.383 UTC [297368] ERROR: could not open relation with OID 66783 at character 13
2022-03-30 09:49:57.383 UTC [297368] STATEMENT: insert into ptcraneregion values (1,'A',0.000088,3.45,2.27,0.205,1.49);

Is there anything we can do to avoid such errors? For example, do we need to wait for some period of time to elapse before inserting data into a newly created table?

Your input is greatly appreciated.

Thank you

What is the command you used for creating the cluster?

Can you describe the table to check the definition?

What do you see in the UI for the table? Are there any tablets present?

Normally, completion of CREATE TABLE should suffice to make the table ready for insertion, unless there is some issue with storage. Since this is a single-node cluster, it's unlikely to be a network-related issue.
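If it does recur before a root cause is found, one possible workaround is simply retrying the failing statement. The sketch below is only an illustration, not official Yugabyte tooling: the `retry` helper is hypothetical, and the ysqlsh invocation in the comment assumes a database name and connection defaults.

```shell
#!/bin/sh
# retry MAX CMD...: run CMD, retrying on failure up to MAX attempts,
# sleeping 1 second between attempts. Returns non-zero if all attempts fail.
retry() {
  max=$1; shift
  attempt=1
  until "$@"; do
    if [ "$attempt" -ge "$max" ]; then
      return 1
    fi
    attempt=$((attempt + 1))
    sleep 1
  done
}

# Hypothetical usage against the failing insert:
# retry 5 ysqlsh -d mydb -c \
#   "insert into ptcraneregion values (1,'A',0.000088,3.45,2.27,0.205,1.49);"
```

This only papers over the symptom, of course; if the table is genuinely not ready after CREATE TABLE returns, that is worth a bug report.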

Can you also describe your cluster hardware? And were there any error logs on the yb-tserver/yb-master when you ran the script? You can run ./bin/yugabyted collect_logs if you used the yugabyted CLI to start the node.

Thank you for your input. Unfortunately I no longer have the database with the problem. If this issue occurs again I will perform the checks you suggested.

My setup is a single-node cluster running on my development PC, which is a Dell Optiplex 790 with 16 GB RAM and 8 CPUs (= 4 cores * 2 threads/core).

I started the single-node cluster using the command:
yugabyted start --tserver_flags="pg_yb_session_timeout_ms=900000"

I checked and there were no error logs from yb-tserver nor from yb-master when I ran the script.

I am now running yugabyte- on 3 VMs, and on each VM there is 1 yb-master and 1 yb-tserver running. Each VM has 8 CPUs and 16 GB RAM. I have just been able to reproduce the issue, but this time for a different table, as seen from the following excerpt from the logs of the PostgreSQL server brought up by the yb-tserver on one of the nodes:

2022-07-20 13:49:27.882 UTC [340693] LOG: statement: CREATE TABLE ptnodedisplaysettings (
id integer NOT NULL,
stroke_width integer,
size integer,
font_size integer,
label_y_offset integer,
label_attributes CHARACTER VARYING,
pt_project_name CHARACTER VARYING,
CONSTRAINT pk_pt_node_display_settings PRIMARY KEY (id),
CONSTRAINT pt_node_display_settings_foreign_key_pt_site_constr FOREIGN KEY (pt_site_name, pt_project_name) REFERENCES ptsite(name, pt_project_name) ON DELETE CASCADE
)
I0720 13:49:27.980901 340693 ybccmds.c:483] Creating Table unims.public.ptnodedisplaysettings
2022-07-20 13:49:32.677 UTC [340693] LOG: statement: INSERT INTO ptnodedisplaysettings (id, color, stroke_color, stroke_width, size, solid_nofill, font_size, label_y_offset, font_weight, label_attributes) VALUES (1, '#ff8000', '#ff0000', 2, 6, 'Solid', 11, -34, 'bold', 'name');
2022-07-20 13:49:32.702 UTC [340693] ERROR: could not open relation with OID 17527 at character 13

After this failure, as expected, I was able to describe the table:

mydb=# \d ptnodedisplaysettings
                 Table "public.ptnodedisplaysettings"
      Column      |       Type        | Collation | Nullable | Default
------------------+-------------------+-----------+----------+---------
 id               | integer           |           | not null |
 color            | character varying |           |          |
 stroke_color     | character varying |           |          |
 stroke_width     | integer           |           |          |
 size             | integer           |           |          |
 solid_nofill     | character varying |           |          |
 font_size        | integer           |           |          |
 label_y_offset   | integer           |           |          |
 font_weight      | character varying |           |          |
 label_attributes | character varying |           |          |
 pt_site_name     | character varying |           |          |
 pt_project_name  | character varying |           |          |
Indexes:
    "pk_pt_node_display_settings" PRIMARY KEY, lsm (id HASH)
Foreign-key constraints:
    "pt_node_display_settings_foreign_key_pt_site_constr" FOREIGN KEY (pt_site_name, pt_project_name) REFERENCES ptsite(name, pt_project_name) ON DELETE CASCADE

In the table UI I saw that the ptnodedisplaysettings table had 1 tablet.
Finally, there were no errors logged for yb-master nor yb-tserver on any of the 3 VMs at the time this happened.

The yb-master was started on each node using a command of the following form:
yb-master --master_addresses :7100,:7100,:7100 --rpc_bind_addresses :7100 --fs_data_dirs /data/vlst/yugabyte/yugabyte-

and the yb-tserver was brought up on each VM with a command of this format:
yb-tserver --tserver_master_addrs :7100,:7100,:7100 \
--rpc_bind_addresses :9100 \
--fs_data_dirs /data/vlst/yugabyte/yugabyte-installed/data \
--start_pgsql_proxy \
--pgsql_proxy_bind_address :5433 \
--ysql_log_statement all \
--ysql_timezone XXXXX \
--pg_yb_session_timeout_ms 900000 \
--cql_proxy_bind_address :9042

Your input is greatly appreciated.
Thank you