Yugabyte on an AWS EKS Cluster

Hello,
I deployed a Yugabyte cluster on AWS EKS. Everything is working fine, but when I open the UI I see:

{ Illegal state (yb/master/catalog_manager.cc:7704): Unable to list masters during web request handling: Node f02231b946e04c0da4d5f75ed26ea6e7 peer not initialized. }

and when I open the log I find the attached output.

Hi @John.Nabil

Can you paste your configuration?

Hi Dorian
I followed the configuration from Deploy | YugabyteDB Docs.
I did the following:

  • create the storage file
  • create the overrides files per AZ
  • create the namespaces
    then install via the helm command (sketched below)

but the cluster itself had been created beforehand using Terraform.
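For reference, the per-AZ install follows this general shape (the release names, namespace names, and overrides file names below are illustrative, not the exact values used in this thread):

# one namespace per AZ (names assumed from this thread)
kubectl create namespace yb-va-dev-env-us-east-2a
kubectl create namespace yb-va-dev-env-us-east-2b
kubectl create namespace yb-va-dev-env-us-east-2c

# one Helm release per AZ, each pointed at its own overrides file
helm install yb-us-east-2a yugabytedb/yugabyte -n yb-va-dev-env-us-east-2a -f overrides-us-east-2a.yaml
helm install yb-us-east-2b yugabytedb/yugabyte -n yb-va-dev-env-us-east-2b -f overrides-us-east-2b.yaml
helm install yb-us-east-2c yugabytedb/yugabyte -n yb-va-dev-env-us-east-2c -f overrides-us-east-2c.yaml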

Can you show your configuration for yb-tserver & yb-master? It looks like you’re having a DNS error trying to find the servers, so it’s probably in the configuration. cc @sanketh

I don’t have configurations for yb-master & yb-tserver; this wasn’t included in the installation guide.
Could you please tell me where I should configure those servers?
Note:
I have 6 nodes in 3 AZs:
3 nodes for yb-master
3 nodes for yb-tserver
So where can I put those configurations?
Should I create a yb-master file under /bin with the yb-masters IPs, and do the same for the yb-tservers?
This is the configuration I found on the website:

[screenshot: YB-DB]

Also, I can’t find the attached path “/home/yugabyte/master/bin/yb-admin” when I log in to the master servers:

[screenshot: yb-bin]
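One way to look for the yb-admin binary inside a master pod, since the path shown in the docs may not match the container layout (the pod and namespace names here are assumed from this thread):

kubectl exec -n yb-va-dev-env-us-east-2a yb-master-0 -- sh -c 'find / -name yb-admin 2>/dev/null'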

John – Can you post the output of kubectl get svc -A?

John – I see the source of your issue. The namespaces that you created were called out as yb-va-dev-env-us-east-2[a-c]. In your overrides you left off the “env” and just put yb-va-dev-us-east-2[a-c]. This leads to a DNS resolution error because the namespace forms part of the FQDN for the pod.
So you should be specifying pod-name.headless-service.namespace.svc.domain, where:
  • pod-name is yb-master-0
  • headless-service is yb-masters
  • namespace is yb-va-dev-env-us-east-2a
  • domain is cluster.local

yb-master-0.yb-masters.yb-va-dev-env-us-east-2a.svc.cluster.local
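A quick way to confirm that this name resolves from inside the cluster is a throwaway busybox pod (names as above):

kubectl run dns-check --rm -it --restart=Never --image=busybox -n yb-va-dev-env-us-east-2a -- nslookup yb-master-0.yb-masters.yb-va-dev-env-us-east-2a.svc.cluster.local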

Hello Alan,
Thanks for your support. I think it’s working now,
but I found another error that appears after fixing this one (attached).

John – When you originally tried to bring up the masters with the incorrect yb-master addresses, those addresses were written into an instance file that no longer does you any good, so the system never initialized.
The notes printed after the helm install stated that you need to manually remove any PVCs created during the deployment. So after you run helm delete <deployment_name>,
remove the PVCs with:
kubectl delete pvc -n yb-va-dev-env-us-east-2a -l app=yb-master
kubectl delete pvc -n yb-va-dev-env-us-east-2a -l app=yb-tserver

And repeat for 2b and 2c. That will remove the existing volumes, and when you rerun the helm install it should be good to go.
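If it’s easier, the same cleanup can be applied to all three namespaces in one loop (namespace names taken from above):

for ns in yb-va-dev-env-us-east-2a yb-va-dev-env-us-east-2b yb-va-dev-env-us-east-2c; do
  kubectl delete pvc -n "$ns" -l app=yb-master
  kubectl delete pvc -n "$ns" -l app=yb-tserver
done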
–Alan


Hello Alan,
Thank you so much for your support.
I did as you said, but the attached errors appear.