However, the operator does not reach the “Succeeded” phase but remains “Pending”:
$ kubectl get csv -n operators
NAME                       DISPLAY             VERSION   REPLACES   PHASE
yugabyte-operator.v0.0.1   Yugabyte Operator   0.0.1                Pending
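The reason for the “Pending” phase can be inspected in the CSV’s status conditions, for example:

$ kubectl describe csv yugabyte-operator.v0.0.1 -n operators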
I believe this error message is responsible:
api-server resource not found installing CustomResourceDefinition ybclusters.yugabyte.com:
GroupVersionKind apiextensions.k8s.io/v1beta1, Kind=CustomResourceDefinition not found on the
cluster. This API may have been deprecated and removed,
see https://kubernetes.io/docs/reference/using-api/deprecation-guide/ for more information.
I have found this unmerged pull request in the Operator’s GitHub repo:
Could it be that this addresses my issue? That would mean the Kubernetes Operator hasn’t been updated for a long time, which concerns me quite a bit!
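For context: the apiextensions.k8s.io/v1beta1 version of CustomResourceDefinition was removed in Kubernetes 1.22, so the operator’s CRD manifest would have to be migrated to apiextensions.k8s.io/v1, roughly along these lines (an illustrative sketch, not the actual manifest from the repo; the kind and version names are assumptions, and v1 additionally requires a structural schema):

apiVersion: apiextensions.k8s.io/v1   # was: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: ybclusters.yugabyte.com
spec:
  group: yugabyte.com
  scope: Namespaced
  names:
    kind: YBCluster                   # assumed kind
    plural: ybclusters
    singular: ybcluster
  versions:
    - name: v1alpha1                  # assumed version name
      served: true
      storage: true
      schema:                         # required in v1: a structural schema
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true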
I’ve now decreased the CPU and memory requirements and the cluster came up; I lowered them via chart values roughly as shown below.
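(The resource paths are the ones from the chart’s values.yaml; the release name, namespace and numbers here are just illustrative:)

$ helm upgrade --install yb-demo yugabytedb/yugabyte -n yb-demo \
    --set resource.master.requests.cpu=0.5,resource.master.requests.memory=0.5Gi \
    --set resource.tserver.requests.cpu=0.5,resource.tserver.requests.memory=0.5Gi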
Unfortunately, the cluster exposed itself to the world: the Helm chart installed a LoadBalancer and exposed it externally, so everyone can now access the database directly, and this appears to be impossible to change. Even if I comment out the entire “serviceEndpoints” section in the values.yml, the chart still creates an external LoadBalancer.
My goal would be to have the yb-tserver-service available from within the Kubernetes cluster, i.e. apps running on this cluster can access it, but nobody from outside the cluster can. For external access I have an ingress set up that only allows access to my applications.
Ok… I found an undocumented(*) option “enableLoadbalancer”, which can be set to false; the service is then started headless. That should work, I guess, but wouldn’t it be nicer to have a load-balanced ClusterIP?
(*) There are some hints in the “Kubernetes Operator” section, but not in the Helm chart section.
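If you really want a load-balanced ClusterIP, one option might be to define an additional Service in front of the tserver pods yourself. A minimal sketch (the name is made up, and the selector label and ports are assumptions that must match what the chart actually sets on the pods):

apiVersion: v1
kind: Service
metadata:
  name: yb-tserver-internal   # hypothetical name
  namespace: yb-demo
spec:
  type: ClusterIP             # gets a virtual IP; kube-proxy balances across pods
  selector:
    app: yb-tserver           # assumed pod label from the chart
  ports:
    - name: ysql
      port: 5433
      targetPort: 5433
    - name: ycql
      port: 9042
      targetPort: 9042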
That wasn’t my experience. In my case LoadBalancers were created with external IPs. Only when setting the parameter to false were the LoadBalancers omitted, but that also omitted the ClusterIP and made the service headless (it was still of type ClusterIP, but with no IP address).
Ok, so yes it creates a headless service.
Why do you want a load balancer when you are in the cluster?
You can connect with the service name and it will go round-robin to each tserver.
$ kubectl get services -n yb-demo-eu-west-1c
NAME          TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                                                                       AGE
yb-masters    ClusterIP   None         <none>        7000/TCP,7100/TCP                                                             17m
yb-tservers   ClusterIP   None         <none>        9000/TCP,12000/TCP,11000/TCP,13000/TCP,9100/TCP,6379/TCP,9042/TCP,5433/TCP    17m
I can connect to yb-tservers.yb-demo-eu-west-1a.svc.cluster.local and it will go to one of the tservers.
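For example, from a pod inside the cluster (namespace as in the listing above):

$ getent hosts yb-tservers.yb-demo-eu-west-1c.svc.cluster.local   # resolves to all tserver pod IPs
$ ysqlsh -h yb-tservers.yb-demo-eu-west-1c.svc.cluster.local -p 5433 -U yugabyte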
Yes, you are absolutely right. I was just wondering whether it would be better to rely on Kubernetes load balancing instead of DNS round robin (which is generally not guaranteed to work in any specific way, although the K8s internal DNS might be implemented that way).
Usually the headless service is OK. No need to add latency through a proxy.
Note that you can also use the YugabyteDB smart drivers, which get the list of nodes from the initial connection and do the round-robin themselves.
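With the JDBC smart driver, for instance, this is enabled with a connection-string property (host pointing at the headless service above; database name is just the default):

jdbc:yugabytedb://yb-tservers.yb-demo-eu-west-1c.svc.cluster.local:5433/yugabyte?load-balance=true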
Yes, I am using the smart drivers already. So there is client-side load balancing in place and I suppose for those few requests to occasionally update the list of nodes it really doesn’t matter whether there is DNS round robin or K8s load balancing.