Building a recent container image for YugabyteDB with Packer

Hello,

I am using Packer to build YugabyteDB Docker images. With more recent versions, after the image is built and I attempt to run it, I get the following error in the container logs. Should I be concerned about it?

2023-05-20 18:19:15,491 INFO: running bootstrap script ... I0520 18:19:15.224205   264 mem_tracker.cc:386] Creating root MemTracker with garbage collection threshold 134217728 bytes
2023-05-20 18:19:15,491 INFO: I0520 18:19:15.225100   264 thread_pool.cc:167] Starting thread pool { name: pggate_ybclient max_workers: 1024 }
2023-05-20 18:19:15,491 INFO: E0520 18:19:15.225721   264 pggate.cc:144] pggate_tserver_shm_fd is not specified
2023-05-20 18:19:15,491 INFO: F0520 18:19:15.225966   264 pggate.cc:145] Check failed: _s.ok() Bad status: IO error (yb/util/shared_mem.cc:65): Error mapping shared memory segment: errno=9: Bad file descriptor
2023-05-20 18:19:15,491 INFO: Fatal failure details written to /.FATAL.details.2023-05-20T18_19_15.pid264.txt
2023-05-20 18:19:15,491 INFO: F20230520 18:19:15 ../../src/yb/yql/pggate/pggate.cc:145] Check failed: _s.ok() Bad status: IO error (yb/util/shared_mem.cc:65): Error mapping shared memory segment: errno=9: Bad file descriptor
2023-05-20 18:19:15,491 INFO:     @     0x7fb1ea45b5fa  google::LogMessage::SendToLog()
2023-05-20 18:19:15,491 INFO:     @     0x7fb1ea45c830  google::LogMessage::Flush()
2023-05-20 18:19:15,491 INFO:     @     0x7fb1ea45cd39  google::LogMessageFatal::~LogMessageFatal()
2023-05-20 18:19:15,492 INFO:     @     0x7fb1ef03c4b2  yb::pggate::PgApiImpl::PgApiImpl()
2023-05-20 18:19:15,492 INFO:     @     0x7fb1ef02f9bb  yb::pggate::YBCInitPgGateEx()
2023-05-20 18:19:15,492 INFO:     @     0x556ded782d2f  YBInitPostgresBackend
2023-05-20 18:19:15,492 INFO:     @     0x556ded760ac4  InitPostgresImpl
2023-05-20 18:19:15,492 INFO:     @     0x556ded760345  InitPostgres
2023-05-20 18:19:15,492 INFO:     @     0x556ded194e4c  BootstrapModeMain
2023-05-20 18:19:15,492 INFO:     @     0x556ded1949e2  AuxiliaryProcessMain
2023-05-20 18:19:15,492 INFO:     @     0x556ded3de71a  PostgresServerProcessMain
2023-05-20 18:19:15,492 INFO:     @     0x556ded0a8882  main
2023-05-20 18:19:15,492 INFO:     @     0x7fb1ee2d1825  __libc_start_main
2023-05-20 18:19:15,492 INFO:     @     0x556ded0a8799  _start
2023-05-20 18:19:15,492 INFO: 
2023-05-20 18:19:15,492 INFO: *** Check failure stack trace: ***
2023-05-20 18:19:15,492 INFO:     @     0x7fb1ea45be96  google::LogMessage::SendToLog()
2023-05-20 18:19:15,492 INFO:     @     0x7fb1ea45c830  google::LogMessage::Flush()
2023-05-20 18:19:15,492 INFO:     @     0x7fb1ea45cd39  google::LogMessageFatal::~LogMessageFatal()
2023-05-20 18:19:15,492 INFO:     @     0x7fb1ef03c4b2  yb::pggate::PgApiImpl::PgApiImpl()
2023-05-20 18:19:15,492 INFO:     @     0x7fb1ef02f9bb  yb::pggate::YBCInitPgGateEx()
2023-05-20 18:19:15,492 INFO:     @     0x556ded782d2f  YBInitPostgresBackend
2023-05-20 18:19:15,492 INFO:     @     0x556ded760ac4  InitPostgresImpl
2023-05-20 18:19:15,492 INFO:     @     0x556ded760345  InitPostgres
2023-05-20 18:19:15,492 INFO:     @     0x556ded194e4c  BootstrapModeMain
2023-05-20 18:19:15,492 INFO:     @     0x556ded1949e2  AuxiliaryProcessMain
2023-05-20 18:19:15,492 INFO:     @     0x556ded3de71a  PostgresServerProcessMain
2023-05-20 18:19:15,492 INFO:     @     0x556ded0a8882  main
2023-05-20 18:19:15,492 INFO:     @     0x7fb1ee2d1825  __libc_start_main
2023-05-20 18:19:15,493 INFO:     @     0x556ded0a8799  _start
2023-05-20 18:19:15,493 INFO: Aborted
2023-05-20 18:19:15,491 INFO: child process exited with exit code 134
initdb: removing data directory "/opt/yugabyte/data/yb-ctl_tmp_pg_data_7b4feba7515514aab3abdbc4da05ff42"
2023-05-20 18:19:15,493 INFO: Successfully ran initdb to initialize YSQL data in the YugaByte cluster

You can see this for yourself by running the following command:

  docker run -it --rm -v yugabyte:/opt/yugabyte/data/node-1/disk-1 -p 7000:7000 harbor.klmh.co/klm/yugabyte:2.17.3.0-b152 --placement-info=klm.us-west.localdev
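
One thing worth noting for reproduction: the -v yugabyte:... flag mounts a named Docker volume, so data persists across runs (including runs of older image versions). To rule out stale data, I sometimes reset the volume first; this is just generic Docker usage, nothing image-specific:

  $ docker volume rm yugabyte      # delete the named volume and any old data in it
  $ docker volume create yugabyte  # recreate it empty (docker run would also auto-create it)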

Note that the container also sometimes exits with status code 1 in this case, but I can’t tell whether that is related to the messages above.
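
If it’s useful for debugging, the exit status can be confirmed by dropping --rm from the run command (so the stopped container sticks around) and then inspecting it; <container-name> below is a placeholder for whatever name Docker assigns:

  $ docker ps -a                                             # find the stopped container's name/ID
  $ docker inspect --format '{{.State.ExitCode}}' <container-name>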

Any help is greatly appreciated. Thank you!

What are the contents of this file?

Hi @dorian_yugabyte, my apologies for the late reply. It has the same contents as the Check failed line:

root@38c15436ab71:/home/yugabyte# cat /.FATAL.details.2023-05-25T16_10_33.pid208.txt 
F20230525 16:10:33 ../../src/yb/yql/pggate/pggate.cc:145] Check failed: _s.ok() Bad status: IO error (yb/util/shared_mem.cc:65): Error mapping shared memory segment: errno=9: Bad file descriptor
    @     0x7fe92aa365fa  google::LogMessage::SendToLog()
    @     0x7fe92aa37830  google::LogMessage::Flush()
    @     0x7fe92aa37d39  google::LogMessageFatal::~LogMessageFatal()
    @     0x7fe92f6174b2  yb::pggate::PgApiImpl::PgApiImpl()
    @     0x7fe92f60a9bb  yb::pggate::YBCInitPgGateEx()
    @     0x55ee7dbadd2f  YBInitPostgresBackend
    @     0x55ee7db8bac4  InitPostgresImpl
    @     0x55ee7db8b345  InitPostgres
    @     0x55ee7d5bfe4c  BootstrapModeMain
    @     0x55ee7d5bf9e2  AuxiliaryProcessMain
    @     0x55ee7d80971a  PostgresServerProcessMain
    @     0x55ee7d4d3882  main
    @     0x7fe92e8ac825  __libc_start_main
    @     0x55ee7d4d3799  _start

Note that I was previously running 2.15.x and did not run into this problem; it only started with the 2.16 and 2.17 versions.
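
In case it helps anyone narrow this down further, here is a rough sketch of how the tags could be bisected; the 2.15/2.16 tags below are placeholders, not the exact builds I tested:

  # Placeholder tags: substitute the real builds from your registry.
  for tag in 2.15.x.y-bNNN 2.16.x.y-bNNN 2.17.3.0-b152; do
    timeout 120 docker run --rm harbor.klmh.co/klm/yugabyte:"$tag" \
      --placement-info=klm.us-west.localdev 2>&1 \
      | grep -m1 'Check failed' \
      && echo "$tag: reproduces the crash" \
      || echo "$tag: no 'Check failed' within 2 minutes"
  done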

Do you mean the db continues working afterwards? I’m assuming it crashes, correct?

Correct, it crashes, and will not run successfully.

@dorian_yugabyte I think it fails because of the last error in the log block below: basically, it can’t find the table system.peers_v2, since initialization never completed because of the earlier Check failed error. Here’s the full log:

$ docker run -it --rm -v yugabyte:/opt/yugabyte/data/node-1/disk-1 -p 7000:7000 harbor.klmh.co/klm/yugabyte:2.17.3.0-b152 --placement-info=klm.us-west.localdev
2023-05-25 16:10:28.469680330+00:00 - INFO - Starting YugabyteDB with RF=1 Drives=3 in '/opt/yugabyte/data' for 'klm.us-west.localdev'
2023-05-25 16:10:28,576 INFO: Considering YugaByte DB installation directory candidate: /home/yugabyte
2023-05-25 16:10:28,576 INFO: Found YugaByte DB installation directory: /home/yugabyte
Starting cluster with base directory /opt/yugabyte/data
2023-05-25 16:10:28,577 INFO: Number of servers for daemon type master determined as 1 based on glob results for data dirs: pattern is /opt/yugabyte/data/*/disk-1/yb-data/master, results are: ['/opt/yugabyte/data/node-1/disk-1/yb-data/master']
2023-05-25 16:10:28,577 INFO: Number of servers for daemon type master determined as 1 based on glob results for data dirs: pattern is /opt/yugabyte/data/*/disk-1/yb-data/master, results are: ['/opt/yugabyte/data/node-1/disk-1/yb-data/master']
2023-05-25 16:10:28,577 INFO: Using binaries path: /home/yugabyte
2023-05-25 16:10:28,577 INFO: Found binary path for daemon type master: /home/yugabyte/bin/yb-master
2023-05-25 16:10:28,584 INFO: Using binaries path: /home/yugabyte
2023-05-25 16:10:28,584 INFO: Found binary path for daemon type master: /home/yugabyte/bin/yb-master
2023-05-25 16:10:28,585 INFO: Starting master-1 with:
/home/yugabyte/bin/yb-master --fs_data_dirs "/opt/yugabyte/data/node-1/disk-1,/opt/yugabyte/data/node-1/disk-2,/opt/yugabyte/data/node-1/disk-3" --webserver_interface 0.0.0.0 --rpc_bind_addresses 127.0.0.1 --v 0 --version_file_json_path=/home/yugabyte --webserver_doc_root "/home/yugabyte/www" --replication_factor=1 --yb_num_shards_per_tserver 2 --ysql_num_shards_per_tserver=2 --default_memory_limit_to_ram_ratio=0.35 --master_addresses 127.0.0.1:7100 --enable_ysql=true --placement_cloud klm --placement_region us-west --placement_zone localdev >"/opt/yugabyte/data/node-1/disk-1/master.out" 2>"/opt/yugabyte/data/node-1/disk-1/master.err" &
2023-05-25 16:10:28,588 INFO: Number of servers for daemon type tserver determined as 1 based on glob results for data dirs: pattern is /opt/yugabyte/data/*/disk-1/yb-data/tserver, results are: ['/opt/yugabyte/data/node-1/disk-1/yb-data/tserver']
2023-05-25 16:10:28,588 INFO: Using binaries path: /home/yugabyte
2023-05-25 16:10:28,588 INFO: Found binary path for daemon type tserver: /home/yugabyte/bin/yb-tserver
2023-05-25 16:10:28,592 INFO: is_ysql_enabled returning True
2023-05-25 16:10:28,592 INFO: Using binaries path: /home/yugabyte
2023-05-25 16:10:28,592 INFO: Found binary path for daemon type tserver: /home/yugabyte/bin/yb-tserver
2023-05-25 16:10:28,593 INFO: Starting tserver-1 with:
/home/yugabyte/bin/yb-tserver --fs_data_dirs "/opt/yugabyte/data/node-1/disk-1,/opt/yugabyte/data/node-1/disk-2,/opt/yugabyte/data/node-1/disk-3" --webserver_interface 0.0.0.0 --rpc_bind_addresses 127.0.0.1 --v 0 --version_file_json_path=/home/yugabyte --webserver_doc_root "/home/yugabyte/www" --tserver_master_addrs=127.0.0.1:7100 --yb_num_shards_per_tserver=2 --redis_proxy_bind_address=0.0.0.0:6379 --cql_proxy_bind_address=0.0.0.0:9042 --local_ip_for_outbound_sockets=127.0.0.1 --use_cassandra_authentication=false --ysql_num_shards_per_tserver=2 --default_memory_limit_to_ram_ratio=0.65 --enable_ysql=true --pgsql_proxy_bind_address=0.0.0.0:5433 --ysql_enable_auth=true --use_cassandra_authentication=true --placement_cloud klm --placement_region us-west --placement_zone localdev >"/opt/yugabyte/data/node-1/disk-1/tserver.out" 2>"/opt/yugabyte/data/node-1/disk-1/tserver.err" &
Waiting for cluster to be ready.
2023-05-25 16:10:28,596 INFO: Waiting for master and tserver processes to come up.
2023-05-25 16:10:28,596 INFO: Number of servers for daemon type master determined as 1 based on glob results for data dirs: pattern is /opt/yugabyte/data/*/disk-1/yb-data/master, results are: ['/opt/yugabyte/data/node-1/disk-1/yb-data/master']
2023-05-25 16:10:28,601 INFO: Data dirs of the running process with daemon id master-1: /opt/yugabyte/data/node-1/disk-1,/opt/yugabyte/data/node-1/disk-2,/opt/yugabyte/data/node-1/disk-3
2023-05-25 16:10:28,601 INFO: pgrep output when looking for daemon id master-1: 20 /home/yugabyte/bin/yb-master --fs_data_dirs /opt/yugabyte/data/node-1/disk-1,/opt/yugabyte/data/node-1/disk-2,/opt/yugabyte/data/node-1/disk-3 --webserver_interface 0.0.0.0 --rpc_bind_addresses 127.0.0.1 --v 0 --version_file_json_path=/home/yugabyte --webserver_doc_root /home/yugabyte/www --replication_factor=1 --yb_num_shards_per_tserver 2 --ysql_num_shards_per_tserver=2 --default_memory_limit_to_ram_ratio=0.35 --master_addresses 127.0.0.1:7100 --enable_ysql=true --placement_cloud klm --placement_region us-west --placement_zone localdev
2023-05-25 16:10:28,601 INFO: Number of servers for daemon type tserver determined as 1 based on glob results for data dirs: pattern is /opt/yugabyte/data/*/disk-1/yb-data/tserver, results are: ['/opt/yugabyte/data/node-1/disk-1/yb-data/tserver']
2023-05-25 16:10:28,606 INFO: Data dirs of the running process with daemon id tserver-1: /opt/yugabyte/data/node-1/disk-1,/opt/yugabyte/data/node-1/disk-2,/opt/yugabyte/data/node-1/disk-3
2023-05-25 16:10:28,606 INFO: pgrep output when looking for daemon id tserver-1: 23 /home/yugabyte/bin/yb-tserver --fs_data_dirs /opt/yugabyte/data/node-1/disk-1,/opt/yugabyte/data/node-1/disk-2,/opt/yugabyte/data/node-1/disk-3 --webserver_interface 0.0.0.0 --rpc_bind_addresses 127.0.0.1 --v 0 --version_file_json_path=/home/yugabyte --webserver_doc_root /home/yugabyte/www --tserver_master_addrs=127.0.0.1:7100 --yb_num_shards_per_tserver=2 --redis_proxy_bind_address=0.0.0.0:6379 --cql_proxy_bind_address=0.0.0.0:9042 --local_ip_for_outbound_sockets=127.0.0.1 --use_cassandra_authentication=false --ysql_num_shards_per_tserver=2 --default_memory_limit_to_ram_ratio=0.65 --enable_ysql=true --pgsql_proxy_bind_address=0.0.0.0:5433 --ysql_enable_auth=true --use_cassandra_authentication=true --placement_cloud klm --placement_region us-west --placement_zone localdev
2023-05-25 16:10:28,606 INFO: Using binaries path: /home/yugabyte
2023-05-25 16:10:28,606 INFO: Waiting for master leader election and tablet server registration.
2023-05-25 16:10:28,607 INFO: Number of servers for daemon type master determined as 1 based on glob results for data dirs: pattern is /opt/yugabyte/data/*/disk-1/yb-data/master, results are: ['/opt/yugabyte/data/node-1/disk-1/yb-data/master']
2023-05-25 16:10:28,611 INFO: Data dirs of the running process with daemon id master-1: /opt/yugabyte/data/node-1/disk-1,/opt/yugabyte/data/node-1/disk-2,/opt/yugabyte/data/node-1/disk-3
2023-05-25 16:10:28,611 INFO: pgrep output when looking for daemon id master-1: 20 /home/yugabyte/bin/yb-master --fs_data_dirs /opt/yugabyte/data/node-1/disk-1,/opt/yugabyte/data/node-1/disk-2,/opt/yugabyte/data/node-1/disk-3 --webserver_interface 0.0.0.0 --rpc_bind_addresses 127.0.0.1 --v 0 --version_file_json_path=/home/yugabyte --webserver_doc_root /home/yugabyte/www --replication_factor=1 --yb_num_shards_per_tserver 2 --ysql_num_shards_per_tserver=2 --default_memory_limit_to_ram_ratio=0.35 --master_addresses 127.0.0.1:7100 --enable_ysql=true --placement_cloud klm --placement_region us-west --placement_zone localdev
2023-05-25 16:10:28,612 INFO: Number of servers for daemon type tserver determined as 1 based on glob results for data dirs: pattern is /opt/yugabyte/data/*/disk-1/yb-data/tserver, results are: ['/opt/yugabyte/data/node-1/disk-1/yb-data/tserver']
2023-05-25 16:10:28,616 INFO: Data dirs of the running process with daemon id tserver-1: /opt/yugabyte/data/node-1/disk-1,/opt/yugabyte/data/node-1/disk-2,/opt/yugabyte/data/node-1/disk-3
2023-05-25 16:10:28,616 INFO: pgrep output when looking for daemon id tserver-1: 23 /home/yugabyte/bin/yb-tserver --fs_data_dirs /opt/yugabyte/data/node-1/disk-1,/opt/yugabyte/data/node-1/disk-2,/opt/yugabyte/data/node-1/disk-3 --webserver_interface 0.0.0.0 --rpc_bind_addresses 127.0.0.1 --v 0 --version_file_json_path=/home/yugabyte --webserver_doc_root /home/yugabyte/www --tserver_master_addrs=127.0.0.1:7100 --yb_num_shards_per_tserver=2 --redis_proxy_bind_address=0.0.0.0:6379 --cql_proxy_bind_address=0.0.0.0:9042 --local_ip_for_outbound_sockets=127.0.0.1 --use_cassandra_authentication=false --ysql_num_shards_per_tserver=2 --default_memory_limit_to_ram_ratio=0.65 --enable_ysql=true --pgsql_proxy_bind_address=0.0.0.0:5433 --ysql_enable_auth=true --use_cassandra_authentication=true --placement_cloud klm --placement_region us-west --placement_zone localdev
2023-05-25 16:10:28,617 INFO: Number of servers for daemon type tserver determined as 1 based on glob results for data dirs: pattern is /opt/yugabyte/data/*/disk-1/yb-data/tserver, results are: ['/opt/yugabyte/data/node-1/disk-1/yb-data/tserver']
2023-05-25 16:10:28,622 INFO: Data dirs of the running process with daemon id tserver-1: /opt/yugabyte/data/node-1/disk-1,/opt/yugabyte/data/node-1/disk-2,/opt/yugabyte/data/node-1/disk-3
2023-05-25 16:10:28,622 INFO: pgrep output when looking for daemon id tserver-1: 23 /home/yugabyte/bin/yb-tserver --fs_data_dirs /opt/yugabyte/data/node-1/disk-1,/opt/yugabyte/data/node-1/disk-2,/opt/yugabyte/data/node-1/disk-3 --webserver_interface 0.0.0.0 --rpc_bind_addresses 127.0.0.1 --v 0 --version_file_json_path=/home/yugabyte --webserver_doc_root /home/yugabyte/www --tserver_master_addrs=127.0.0.1:7100 --yb_num_shards_per_tserver=2 --redis_proxy_bind_address=0.0.0.0:6379 --cql_proxy_bind_address=0.0.0.0:9042 --local_ip_for_outbound_sockets=127.0.0.1 --use_cassandra_authentication=false --ysql_num_shards_per_tserver=2 --default_memory_limit_to_ram_ratio=0.65 --enable_ysql=true --pgsql_proxy_bind_address=0.0.0.0:5433 --ysql_enable_auth=true --use_cassandra_authentication=true --placement_cloud klm --placement_region us-west --placement_zone localdev
2023-05-25 16:10:31,908 INFO: Waiting for all tablet servers to register: 0/1
2023-05-25 16:10:32,910 INFO: Number of servers for daemon type master determined as 1 based on glob results for data dirs: pattern is /opt/yugabyte/data/*/disk-1/yb-data/master, results are: ['/opt/yugabyte/data/node-1/disk-1/yb-data/master']
2023-05-25 16:10:32,923 INFO: Data dirs of the running process with daemon id master-1: /opt/yugabyte/data/node-1/disk-1,/opt/yugabyte/data/node-1/disk-2,/opt/yugabyte/data/node-1/disk-3
2023-05-25 16:10:32,924 INFO: pgrep output when looking for daemon id master-1: 20 /home/yugabyte/bin/yb-master --fs_data_dirs /opt/yugabyte/data/node-1/disk-1,/opt/yugabyte/data/node-1/disk-2,/opt/yugabyte/data/node-1/disk-3 --webserver_interface 0.0.0.0 --rpc_bind_addresses 127.0.0.1 --v 0 --version_file_json_path=/home/yugabyte --webserver_doc_root /home/yugabyte/www --replication_factor=1 --yb_num_shards_per_tserver 2 --ysql_num_shards_per_tserver=2 --default_memory_limit_to_ram_ratio=0.35 --master_addresses 127.0.0.1:7100 --enable_ysql=true --placement_cloud klm --placement_region us-west --placement_zone localdev
2023-05-25 16:10:32,925 INFO: Number of servers for daemon type tserver determined as 1 based on glob results for data dirs: pattern is /opt/yugabyte/data/*/disk-1/yb-data/tserver, results are: ['/opt/yugabyte/data/node-1/disk-1/yb-data/tserver']
2023-05-25 16:10:32,940 INFO: Data dirs of the running process with daemon id tserver-1: /opt/yugabyte/data/node-1/disk-1,/opt/yugabyte/data/node-1/disk-2,/opt/yugabyte/data/node-1/disk-3
2023-05-25 16:10:32,941 INFO: pgrep output when looking for daemon id tserver-1: 23 /home/yugabyte/bin/yb-tserver --fs_data_dirs /opt/yugabyte/data/node-1/disk-1,/opt/yugabyte/data/node-1/disk-2,/opt/yugabyte/data/node-1/disk-3 --webserver_interface 0.0.0.0 --rpc_bind_addresses 127.0.0.1 --v 0 --version_file_json_path=/home/yugabyte --webserver_doc_root /home/yugabyte/www --tserver_master_addrs=127.0.0.1:7100 --yb_num_shards_per_tserver=2 --redis_proxy_bind_address=0.0.0.0:6379 --cql_proxy_bind_address=0.0.0.0:9042 --local_ip_for_outbound_sockets=127.0.0.1 --use_cassandra_authentication=false --ysql_num_shards_per_tserver=2 --default_memory_limit_to_ram_ratio=0.65 --enable_ysql=true --pgsql_proxy_bind_address=0.0.0.0:5433 --ysql_enable_auth=true --use_cassandra_authentication=true --placement_cloud klm --placement_region us-west --placement_zone localdev
2023-05-25 16:10:32,942 INFO: Number of servers for daemon type tserver determined as 1 based on glob results for data dirs: pattern is /opt/yugabyte/data/*/disk-1/yb-data/tserver, results are: ['/opt/yugabyte/data/node-1/disk-1/yb-data/tserver']
2023-05-25 16:10:32,957 INFO: Data dirs of the running process with daemon id tserver-1: /opt/yugabyte/data/node-1/disk-1,/opt/yugabyte/data/node-1/disk-2,/opt/yugabyte/data/node-1/disk-3
2023-05-25 16:10:32,958 INFO: pgrep output when looking for daemon id tserver-1: 23 /home/yugabyte/bin/yb-tserver --fs_data_dirs /opt/yugabyte/data/node-1/disk-1,/opt/yugabyte/data/node-1/disk-2,/opt/yugabyte/data/node-1/disk-3 --webserver_interface 0.0.0.0 --rpc_bind_addresses 127.0.0.1 --v 0 --version_file_json_path=/home/yugabyte --webserver_doc_root /home/yugabyte/www --tserver_master_addrs=127.0.0.1:7100 --yb_num_shards_per_tserver=2 --redis_proxy_bind_address=0.0.0.0:6379 --cql_proxy_bind_address=0.0.0.0:9042 --local_ip_for_outbound_sockets=127.0.0.1 --use_cassandra_authentication=false --ysql_num_shards_per_tserver=2 --default_memory_limit_to_ram_ratio=0.65 --enable_ysql=true --pgsql_proxy_bind_address=0.0.0.0:5433 --ysql_enable_auth=true --use_cassandra_authentication=true --placement_cloud klm --placement_region us-west --placement_zone localdev
2023-05-25 16:10:33,076 INFO: Using binaries path: /home/yugabyte
2023-05-25 16:10:33,185 INFO: Successfully modified placement info.
YSQL has been enabled on an existing cluster, running initdb
2023-05-25 16:10:33,185 INFO: Using binaries path: /home/yugabyte
2023-05-25 16:10:33,186 INFO: Running initdb to initialize YSQL metadata in the YugaByte DB cluster.
2023-05-25 16:10:34,190 INFO: initdb log file (from /opt/yugabyte/data/initdb.log):
2023-05-25 16:10:34,191 INFO: The files belonging to this database system will be owned by user "root".
2023-05-25 16:10:34,192 INFO: This user must also own the server process.
2023-05-25 16:10:34,192 INFO: 
2023-05-25 16:10:34,192 INFO: In YugabyteDB, setting LC_COLLATE to C and all other locale settings to en_US.UTF-8 by default. Locale support will be enhanced as part of addressing https://github.com/yugabyte/yugabyte-db/issues/1557
2023-05-25 16:10:34,193 INFO: The database cluster will be initialized with locales
2023-05-25 16:10:34,193 INFO:   COLLATE:  C
2023-05-25 16:10:34,193 INFO:   CTYPE:    en_US.UTF-8
2023-05-25 16:10:34,193 INFO:   MESSAGES: en_US.UTF-8
2023-05-25 16:10:34,194 INFO:   MONETARY: en_US.UTF-8
2023-05-25 16:10:34,194 INFO:   NUMERIC:  en_US.UTF-8
2023-05-25 16:10:34,194 INFO:   TIME:     en_US.UTF-8
2023-05-25 16:10:34,194 INFO: The default database encoding has accordingly been set to "UTF8".
2023-05-25 16:10:34,194 INFO: The default text search configuration will be set to "english".
2023-05-25 16:10:34,195 INFO: 
2023-05-25 16:10:34,195 INFO: Data page checksums are disabled.
2023-05-25 16:10:34,195 INFO: 
2023-05-25 16:10:34,195 INFO: creating directory /opt/yugabyte/data/yb-ctl_tmp_pg_data_f3608edca4cd3330dc7fef1fe1db54b2 ... ok
2023-05-25 16:10:34,195 INFO: creating subdirectories ... ok
2023-05-25 16:10:34,196 INFO: selecting default max_connections ... 300
2023-05-25 16:10:34,196 INFO: selecting default shared_buffers ... 128MB
2023-05-25 16:10:34,196 INFO: selecting dynamic shared memory implementation ... posix
2023-05-25 16:10:34,196 INFO: creating configuration files ... ok
2023-05-25 16:10:34,197 INFO: running bootstrap script ... I0525 16:10:33.787078   208 mem_tracker.cc:386] Creating root MemTracker with garbage collection threshold 134217728 bytes
2023-05-25 16:10:34,197 INFO: I0525 16:10:33.787910   208 thread_pool.cc:167] Starting thread pool { name: pggate_ybclient max_workers: 1024 }
2023-05-25 16:10:34,197 INFO: E0525 16:10:33.788323   208 pggate.cc:144] pggate_tserver_shm_fd is not specified
2023-05-25 16:10:34,197 INFO: F0525 16:10:33.788460   208 pggate.cc:145] Check failed: _s.ok() Bad status: IO error (yb/util/shared_mem.cc:65): Error mapping shared memory segment: errno=9: Bad file descriptor
2023-05-25 16:10:34,198 INFO: Fatal failure details written to /.FATAL.details.2023-05-25T16_10_33.pid208.txt
2023-05-25 16:10:34,198 INFO: F20230525 16:10:33 ../../src/yb/yql/pggate/pggate.cc:145] Check failed: _s.ok() Bad status: IO error (yb/util/shared_mem.cc:65): Error mapping shared memory segment: errno=9: Bad file descriptor
2023-05-25 16:10:34,198 INFO:     @     0x7fe92aa365fa  google::LogMessage::SendToLog()
2023-05-25 16:10:34,199 INFO:     @     0x7fe92aa37830  google::LogMessage::Flush()
2023-05-25 16:10:34,199 INFO:     @     0x7fe92aa37d39  google::LogMessageFatal::~LogMessageFatal()
2023-05-25 16:10:34,199 INFO:     @     0x7fe92f6174b2  yb::pggate::PgApiImpl::PgApiImpl()
2023-05-25 16:10:34,199 INFO:     @     0x7fe92f60a9bb  yb::pggate::YBCInitPgGateEx()
2023-05-25 16:10:34,199 INFO:     @     0x55ee7dbadd2f  YBInitPostgresBackend
2023-05-25 16:10:34,200 INFO:     @     0x55ee7db8bac4  InitPostgresImpl
2023-05-25 16:10:34,200 INFO:     @     0x55ee7db8b345  InitPostgres
2023-05-25 16:10:34,200 INFO:     @     0x55ee7d5bfe4c  BootstrapModeMain
2023-05-25 16:10:34,200 INFO:     @     0x55ee7d5bf9e2  AuxiliaryProcessMain
2023-05-25 16:10:34,201 INFO:     @     0x55ee7d80971a  PostgresServerProcessMain
2023-05-25 16:10:34,201 INFO:     @     0x55ee7d4d3882  main
2023-05-25 16:10:34,201 INFO:     @     0x7fe92e8ac825  __libc_start_main
2023-05-25 16:10:34,201 INFO:     @     0x55ee7d4d3799  _start
2023-05-25 16:10:34,201 INFO: 
2023-05-25 16:10:34,202 INFO: *** Check failure stack trace: ***
2023-05-25 16:10:34,202 INFO:     @     0x7fe92aa36e96  google::LogMessage::SendToLog()
2023-05-25 16:10:34,202 INFO:     @     0x7fe92aa37830  google::LogMessage::Flush()
2023-05-25 16:10:34,202 INFO:     @     0x7fe92aa37d39  google::LogMessageFatal::~LogMessageFatal()
2023-05-25 16:10:34,203 INFO:     @     0x7fe92f6174b2  yb::pggate::PgApiImpl::PgApiImpl()
2023-05-25 16:10:34,203 INFO:     @     0x7fe92f60a9bb  yb::pggate::YBCInitPgGateEx()
2023-05-25 16:10:34,203 INFO:     @     0x55ee7dbadd2f  YBInitPostgresBackend
2023-05-25 16:10:34,203 INFO:     @     0x55ee7db8bac4  InitPostgresImpl
2023-05-25 16:10:34,203 INFO:     @     0x55ee7db8b345  InitPostgres
2023-05-25 16:10:34,204 INFO:     @     0x55ee7d5bfe4c  BootstrapModeMain
2023-05-25 16:10:34,204 INFO:     @     0x55ee7d5bf9e2  AuxiliaryProcessMain
2023-05-25 16:10:34,204 INFO:     @     0x55ee7d80971a  PostgresServerProcessMain
2023-05-25 16:10:34,204 INFO:     @     0x55ee7d4d3882  main
2023-05-25 16:10:34,205 INFO:     @     0x7fe92e8ac825  __libc_start_main
2023-05-25 16:10:34,205 INFO:     @     0x55ee7d4d3799  _start
2023-05-25 16:10:34,205 INFO: Aborted
2023-05-25 16:10:34,205 INFO: child process exited with exit code 134
initdb: removing data directory "/opt/yugabyte/data/yb-ctl_tmp_pg_data_f3608edca4cd3330dc7fef1fe1db54b2"
2023-05-25 16:10:34,206 INFO: Successfully ran initdb to initialize YSQL data in the YugaByte cluster
2023-05-25 16:10:34,208 INFO: Number of servers for daemon type master determined as 1 based on glob results for data dirs: pattern is /opt/yugabyte/data/*/disk-1/yb-data/master, results are: ['/opt/yugabyte/data/node-1/disk-1/yb-data/master']
2023-05-25 16:10:34,209 INFO: Number of servers for daemon type tserver determined as 1 based on glob results for data dirs: pattern is /opt/yugabyte/data/*/disk-1/yb-data/tserver, results are: ['/opt/yugabyte/data/node-1/disk-1/yb-data/tserver']
2023-05-25 16:10:34,210 INFO: is_ysql_enabled returning True
----------------------------------------------------------------------------------------------------
| Node Count: 1 | Replication Factor: 1                                                            |
----------------------------------------------------------------------------------------------------
| JDBC                : jdbc:postgresql://127.0.0.1:5433/yugabyte                                  |
| YSQL Shell          : bin/ysqlsh                                                                 |
| YCQL Shell          : bin/ycqlsh                                                                 |
| YEDIS Shell         : bin/redis-cli                                                              |
| Web UI              : http://127.0.0.1:7000/                                                     |
| Cluster Data        : /opt/yugabyte/data                                                         |
----------------------------------------------------------------------------------------------------

For more info, please use: yb-ctl --data_dir /opt/yugabyte/data status
2023-05-25 16:10:34.442599399+00:00 - INFO - Yugabyte YSQL server appears to be ready for connections
2023-05-25 16:10:35.345497754+00:00 - INFO - Yugabyte YCQL server appears to be ready for connections
2023-05-25 16:10:35.591636335+00:00 - INFO - Yugabyte YEDIS server appears to be ready for connections
2023-05-25 16:10:35.593521852+00:00 - INFO - Setting Yugabyte YEDIS server password ...
==> /opt/yugabyte/data/node-1/disk-1/master.err <==

==> /opt/yugabyte/data/node-1/disk-1/master.out <==

==> /opt/yugabyte/data/node-1/disk-1/tserver.err <==
2023-05-25 16:10:28.855 UTC [93] LOG:  YugaByte is ENABLED in PostgreSQL. Transactions are enabled.
2023-05-25 16:10:28.873 UTC [93] LOG:  pgaudit extension initialized
2023-05-25 16:10:28.874 UTC [93] LOG:  listening on IPv4 address "0.0.0.0", port 5433
2023-05-25 16:10:28.910 UTC [93] LOG:  listening on Unix socket "/tmp/.yb.0.0.0.0:5433/.s.PGSQL.5433"
2023-05-25 16:10:28.966 UTC [93] LOG:  redirecting log output to logging collector process
2023-05-25 16:10:28.966 UTC [93] HINT:  Future log output will appear in directory "/opt/yugabyte/data/node-1/disk-1/yb-data/tserver/logs".

==> /opt/yugabyte/data/node-1/disk-1/tserver.out <==

==> /opt/yugabyte/data/node-1/disk-1/yb-data/tserver/logs/yb-tserver.INFO <==
I0525 16:10:32.493129    87 heartbeater.cc:409] P 014a9d065c814809acb829b574214464: Sending a full tablet report to master...
I0525 16:10:32.541019    87 tablet_server.cc:1001] Invalidating the entire PgTableCache cache since catalog version incremented
I0525 16:10:35.011616   241 permissions.cc:81] Creating permissions cache
W0525 16:10:35.012377   241 cql_processor.cc:380] Unsupported driver option DRIVER_VERSION = 3.25.0.1
I0525 16:10:35.251691   228 client_master_rpc.cc:76] 0x000055c3b8ad6fd8 -> GetTableSchemaRpc(table_identifier: table_name: "peers_v2" namespace { name: "system" database_type: YQL_DATABASE_CQL }, num_attempts: 1): Failed, got resp error: Not found (yb/master/catalog_manager.cc:5276): Table system.peers_v2 not found: OBJECT_NOT_FOUND (master error 3)
W0525 16:10:35.251786   228 client-internal.cc:1396] GetTableSchemaRpc(table_identifier: table_name: "peers_v2" namespace { name: "system" database_type: YQL_DATABASE_CQL }, num_attempts: 1) failed: Not found (yb/master/catalog_manager.cc:5276): Table system.peers_v2 not found: OBJECT_NOT_FOUND (master error 3)
W0525 16:10:35.251893   241 process_context.cc:185] SQL Error: Object Not Found
SELECT * FROM system.peers_v2
              ^^^^^^^^^^^^^^^
I0525 16:10:35.349646   254 meta_data_cache.cc:80] Creating a metadata cache without a permissions cache
tail: cannot open '/opt/yugabyte/data/node-1/disk-1/yb-data/tserver/logs/yb-tserver.ERROR' for reading: No such file or directory

==> /opt/yugabyte/data/node-1/disk-1/yb-data/tserver/logs/yb-tserver.WARNING <==
W0525 16:10:28.840728    87 heartbeater.cc:683] P 014a9d065c814809acb829b574214464: Failed to heartbeat to 127.0.0.1:7100: Service unavailable (yb/tserver/heartbeater.cc:498): Master is no longer the leader: code: NOT_THE_LEADER status { code: SERVICE_UNAVAILABLE message: "Catalog manager is not initialized. State: 1" source_file: "../../src/yb/master/scoped_leader_shared_lock.cc" source_line: 92 errors: "\000" } tries=5, num=1, masters=0x000055c3b7f729a8 -> [[127.0.0.1:7100]], code=Service unavailable
W0525 16:10:29.841387    87 heartbeater.cc:683] P 014a9d065c814809acb829b574214464: Failed to heartbeat to 127.0.0.1:7100: Service unavailable (yb/tserver/heartbeater.cc:498): Master is no longer the leader: code: NOT_THE_LEADER status { code: ILLEGAL_STATE message: "Not the leader. Local UUID: b6dd5d3720d64fb490ece70438b0a43e, Consensus state: current_term: 2 config { opid_index: -1 peers { permanent_uuid: \"b6dd5d3720d64fb490ece70438b0a43e\" member_type: VOTER last_known_private_addr { host: \"127.0.0.1\" port: 7100 } cloud_info { placement_cloud: \"klm\" placement_region: \"us-west\" placement_zone: \"localdev\" } } }" source_file: "../../src/yb/master/scoped_leader_shared_lock.cc" source_line: 114 errors: "\000" } tries=6, num=1, masters=0x000055c3b7f729a8 -> [[127.0.0.1:7100]], code=Service unavailable
W0525 16:10:30.842041    87 heartbeater.cc:683] P 014a9d065c814809acb829b574214464: Failed to heartbeat to 127.0.0.1:7100: Service unavailable (yb/tserver/heartbeater.cc:498): Master is no longer the leader: code: NOT_THE_LEADER status { code: ILLEGAL_STATE message: "Not the leader. Local UUID: b6dd5d3720d64fb490ece70438b0a43e, Consensus state: current_term: 2 config { opid_index: -1 peers { permanent_uuid: \"b6dd5d3720d64fb490ece70438b0a43e\" member_type: VOTER last_known_private_addr { host: \"127.0.0.1\" port: 7100 } cloud_info { placement_cloud: \"klm\" placement_region: \"us-west\" placement_zone: \"localdev\" } } }" source_file: "../../src/yb/master/scoped_leader_shared_lock.cc" source_line: 114 errors: "\000" } tries=7, num=1, masters=0x000055c3b7f729a8 -> [[127.0.0.1:7100]], code=Service unavailable
W0525 16:10:31.379715    87 heartbeater.cc:683] P 014a9d065c814809acb829b574214464: Failed to heartbeat to 127.0.0.1:7100: Service unavailable (yb/tserver/heartbeater.cc:498): Master is no longer the leader: code: NOT_THE_LEADER status { code: ILLEGAL_STATE message: "Not the leader. Local UUID: b6dd5d3720d64fb490ece70438b0a43e, Consensus state: current_term: 2 config { opid_index: -1 peers { permanent_uuid: \"b6dd5d3720d64fb490ece70438b0a43e\" member_type: VOTER last_known_private_addr { host: \"127.0.0.1\" port: 7100 } cloud_info { placement_cloud: \"klm\" placement_region: \"us-west\" placement_zone: \"localdev\" } } }" source_file: "../../src/yb/master/scoped_leader_shared_lock.cc" source_line: 114 errors: "\000" } tries=8, num=1, masters=0x000055c3b7f729a8 -> [[127.0.0.1:7100]], code=Service unavailable
W0525 16:10:31.666272    87 heartbeater.cc:683] P 014a9d065c814809acb829b574214464: Failed to heartbeat to 127.0.0.1:7100: Service unavailable (yb/tserver/heartbeater.cc:498): Master is no longer the leader: code: NOT_THE_LEADER status { code: ILLEGAL_STATE message: "Not the leader. Local UUID: b6dd5d3720d64fb490ece70438b0a43e, Consensus state: current_term: 2 config { opid_index: -1 peers { permanent_uuid: \"b6dd5d3720d64fb490ece70438b0a43e\" member_type: VOTER last_known_private_addr { host: \"127.0.0.1\" port: 7100 } cloud_info { placement_cloud: \"klm\" placement_region: \"us-west\" placement_zone: \"localdev\" } } }" source_file: "../../src/yb/master/scoped_leader_shared_lock.cc" source_line: 114 errors: "\000" } tries=9, num=1, masters=0x000055c3b7f729a8 -> [[127.0.0.1:7100]], code=Service unavailable
W0525 16:10:35.012377   241 cql_processor.cc:380] Unsupported driver option DRIVER_VERSION = 3.25.0.1
W0525 16:10:35.251786   228 client-internal.cc:1396] GetTableSchemaRpc(table_identifier: table_name: "peers_v2" namespace { name: "system" database_type: YQL_DATABASE_CQL }, num_attempts: 1) failed: Not found (yb/master/catalog_manager.cc:5276): Table system.peers_v2 not found: OBJECT_NOT_FOUND (master error 3)
W0525 16:10:35.251893   241 process_context.cc:185] SQL Error: Object Not Found
SELECT * FROM system.peers_v2
              ^^^^^^^^^^^^^^^

==> /opt/yugabyte/data/node-1/disk-1/yb-data/tserver/logs/yb-tserver.INFO <==
I0525 16:10:38.754505    89 call_home.cc:72] Skipping collector MetricsCollector because it has a higher collection level than the requested one
I0525 16:10:38.754643    89 call_home.cc:72] Skipping collector RpcsCollector because it has a higher collection level than the requested one
I0525 16:10:38.761816    89 call_home.cc:215] Done with collector GFlagsCollector
I0525 16:10:38.761910    89 server_base.cc:248] Running on host: 38c15436ab71
I0525 16:10:38.762037    89 call_home.cc:94] Logged in user: root
I0525 16:10:38.762090    89 call_home.cc:215] Done with collector BasicCollector
I0525 16:10:38.762125    89 call_home.cc:215] Done with collector TabletsCollector
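
(Side note: once a node does come up healthy, the missing table can be checked for directly. Here <container-name> is a placeholder, and cassandra/cassandra are the default YCQL credentials, since --use_cassandra_authentication=true is set above; adjust if yours differ.)

  $ docker exec -it <container-name> /home/yugabyte/bin/ycqlsh \
      -u cassandra -p cassandra \
      -e 'SELECT * FROM system.peers_v2 LIMIT 1;'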

Okay, I think the above error message was a red herring, unrelated to the startup problem. Here’s what I see in tserver.err:

[ ... ]
2023-05-27 00:10:06.402 GMT [898] LOG:  skipping missing configuration file "/opt/yugabyte/data/node-1/disk-1/pg_data/postgresql.auto.conf"
postgres: could not find the database system
Expected to find it in the directory "/opt/yugabyte/data/node-1/disk-1/pg_data",
but could not open file "/opt/yugabyte/data/node-1/disk-1/pg_data/global/pg_control": No such file or directory
2023-05-27 00:10:06.488 GMT [899] LOG:  skipping missing configuration file "/opt/yugabyte/data/node-1/disk-1/pg_data/postgresql.auto.conf"
postgres: could not find the database system
Expected to find it in the directory "/opt/yugabyte/data/node-1/disk-1/pg_data",
but could not open file "/opt/yugabyte/data/node-1/disk-1/pg_data/global/pg_control": No such file or directory
2023-05-27 00:10:06.569 GMT [900] LOG:  skipping missing configuration file "/opt/yugabyte/data/node-1/disk-1/pg_data/postgresql.auto.conf"
postgres: could not find the database system
Expected to find it in the directory "/opt/yugabyte/data/node-1/disk-1/pg_data",
but could not open file "/opt/yugabyte/data/node-1/disk-1/pg_data/global/pg_control": No such file or directory
2023-05-27 00:10:06.646 GMT [904] LOG:  skipping missing configuration file "/opt/yugabyte/data/node-1/disk-1/pg_data/postgresql.auto.conf"
postgres: could not find the database system
Expected to find it in the directory "/opt/yugabyte/data/node-1/disk-1/pg_data",
but could not open file "/opt/yugabyte/data/node-1/disk-1/pg_data/global/pg_control": No such file or directory
2023-05-27 00:10:06.721 GMT [905] LOG:  skipping missing configuration file "/opt/yugabyte/data/node-1/disk-1/pg_data/postgresql.auto.conf"
postgres: could not find the database system
Expected to find it in the directory "/opt/yugabyte/data/node-1/disk-1/pg_data",
but could not open file "/opt/yugabyte/data/node-1/disk-1/pg_data/global/pg_control": No such file or directory

For some reason, it can’t find the pg_control file at the specified path, and that does seem to be the case; the file is indeed not there:

root@deef94b4d273:/home/yugabyte# tree /opt/yugabyte/data/node-1/disk-1/pg_data
/opt/yugabyte/data/node-1/disk-1/pg_data
|-- PG_VERSION
|-- base
|   `-- 1
|-- global
|-- pg_commit_ts
|-- pg_dynshmem
|-- pg_logical
|   |-- mappings
|   `-- snapshots
|-- pg_multixact
|   |-- members
|   `-- offsets
|-- pg_notify
|   `-- 0000
|-- pg_replslot
|-- pg_serial
|-- pg_snapshots
|-- pg_stat
|-- pg_stat_tmp
|-- pg_subtrans
|-- pg_tblspc
|-- pg_twophase
|-- pg_wal
|   `-- archive_status
|-- pg_xact
|-- postgresql.conf
|-- ysql_hba.conf
`-- ysql_pg.conf

23 directories, 5 files

Do I need to copy that file over myself?
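
(For reference, pg_control is normally written by initdb into the global/ subdirectory, so on a node where initdb completes it should show up there. This is how I’d check, reusing the paths from the run above and a placeholder <container-name>:

  $ docker exec -it <container-name> ls -l /opt/yugabyte/data/node-1/disk-1/pg_data/global/pg_control

)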

I don’t know why, but it doesn’t seem to have an issue staying up now. It still throws the same Check failed error as above, but the container no longer exits as a result.
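
For what it’s worth, here’s how I’m verifying that YSQL actually accepts connections despite that logged error; <container-name> is again a placeholder, and yugabyte is the default YSQL user (it will prompt for the password, since --ysql_enable_auth=true is set):

  $ docker exec -it <container-name> /home/yugabyte/bin/ysqlsh \
      -h 127.0.0.1 -p 5433 -U yugabyte -c 'SELECT 1;'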

Can you paste all the steps you took so we can reproduce this?