Yugabyte Kubernetes Operator Installation failed

I’ve installed the Yugabyte K8s Operator as detailed in the docs:

kubectl create -f https://operatorhub.io/install/yugabyte-operator.yaml

However, the operator does not reach the “Succeeded” phase; it remains in “Pending”:

kubectl get csv -n operators

NAME                       DISPLAY             VERSION   REPLACES   PHASE
yugabyte-operator.v0.0.1   Yugabyte Operator   0.0.1                Pending
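
To see why it is stuck, I looked at the CSV’s status conditions (standard OLM objects; the CSV name is the one from the output above):

$ kubectl describe csv yugabyte-operator.v0.0.1 -n operators
# or dump the full object; status.conditions contains the failure message:
$ kubectl get csv yugabyte-operator.v0.0.1 -n operators -o yaml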

I believe this error message is responsible:

api-server resource not found installing CustomResourceDefinition ybclusters.yugabyte.com:
GroupVersionKind apiextensions.k8s.io/v1beta1, Kind=CustomResourceDefinition not found on the
cluster. This API may have been deprecated and removed,
see https://kubernetes.io/docs/reference/using-api/deprecation-guide/ for more information.
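
For context, apiextensions.k8s.io/v1beta1 CustomResourceDefinitions were removed in Kubernetes 1.22, so the CRD shipped with the operator bundle simply cannot be created on newer clusters. A fixed manifest would have to move to apiextensions.k8s.io/v1 and declare a structural schema, roughly like this minimal sketch (group and plural are taken from the error message; the kind, version name and schema are placeholders, not the operator’s actual definition):

apiVersion: apiextensions.k8s.io/v1   # v1beta1 was removed in Kubernetes 1.22
kind: CustomResourceDefinition
metadata:
  name: ybclusters.yugabyte.com
spec:
  group: yugabyte.com
  scope: Namespaced
  names:
    kind: YBCluster                   # assumed for illustration
    plural: ybclusters
    singular: ybcluster
  versions:
    - name: v1alpha1                  # placeholder version name
      served: true
      storage: true
      schema:
        openAPIV3Schema:              # apiextensions.k8s.io/v1 requires a structural schema
          type: object
          x-kubernetes-preserve-unknown-fields: true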

I have found this unmerged pull request in the operator’s GitHub repo:

Could it be that this addresses my issue? That would mean the Kubernetes Operator hasn’t been updated for a long time, which concerns me quite a bit!

Hi, yes those operators are not well maintained. I’ll ping the team.
Did you try the Helm Chart? This is how I deploy on K8s.

The chart failed as well, but for a different reason: the pods could not be scheduled due to insufficient CPU/memory. I am using the default settings:

resource:
  master:
    requests:
      cpu: 2
      memory: 2Gi
    limits:
      cpu: 2
      memory: 2Gi
  tserver:
    requests:
      cpu: 2
      memory: 4Gi
    limits:
      cpu: 2
      memory: 4Gi

My three nodes, where those pods should be scheduled, have 4 vCPU and 8 GB RAM each. Any idea why this is not sufficient?

I’ve now decreased the CPU and memory requirements and the cluster came up. Unfortunately, it exposed itself to the world.
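
I dropped the requests/limits in my override values file to something along these lines (numbers are illustrative and just small enough to fit my nodes, not a sizing recommendation):

resource:
  master:
    requests:
      cpu: 0.5        # illustrative values only
      memory: 1Gi
    limits:
      cpu: 0.5
      memory: 1Gi
  tserver:
    requests:
      cpu: 1
      memory: 2Gi
    limits:
      cpu: 1
      memory: 2Gi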

The Helm chart installed a LoadBalancer and exposed the service externally, so now anyone can access the database directly, and this appears to be impossible to change. Even if I comment out the entire “serviceEndpoints” section in my values.yml, the chart still creates an external LoadBalancer.

My goal is to have the yb-tserver-service available from within the Kubernetes cluster, i.e. apps running on this cluster can access it, but nothing outside the cluster can. For external access I have an Ingress set up that only allows access to my applications.

Ok… I found an undocumented(*) option, “enableLoadbalancer”, which can be set to false; the service is then created headless. That should work, I guess, but wouldn’t it be nicer to have a load-balanced ClusterIP?

(*) There are some hints in the “Kubernetes Operator” section, but not in the Helm chart section.
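
For anyone else landing here: the flag can simply be passed at install/upgrade time. A sketch using the standard chart repo from the Yugabyte docs (release name and namespace are examples; double-check the exact key spelling, enableLoadbalancer vs. enableLoadBalancer, against your chart version’s values.yaml):

$ helm repo add yugabytedb https://charts.yugabyte.com
$ helm repo update
$ helm upgrade --install yb-demo yugabytedb/yugabyte --namespace yb-demo --set enableLoadBalancer=false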

Right.
If you want to know more, here is the helm chart template for the service:
charts/service.yaml at master · yugabyte/charts (github.com)

There is enableLoadbalancer=true

And the following:

serviceEndpoints:
  - name: "yb-master-ui"
    type: LoadBalancer
    ## Sets the Service's externalTrafficPolicy
    # externalTrafficPolicy: ""
    app: "yb-master"
    # loadBalancerIP: ""
    ports:
      http-ui: "7000"

  - name: "yb-tserver-service"
    type: LoadBalancer
    ## Sets the Service's externalTrafficPolicy
    # externalTrafficPolicy: ""
    app: "yb-tserver"
    # loadBalancerIP: ""

By default it creates LoadBalancer services (yb-master-ui and yb-tserver-service) plus ClusterIP services (yb-masters and yb-tservers):

$ kubectl get services -n yb-demo-eu-west-1a
NAME                 TYPE           CLUSTER-IP       PORT(S)
yb-master-ui         LoadBalancer   10.100.197.53    7000:31261/TCP
yb-masters           ClusterIP      None             7000/TCP,7100/TCP
yb-tserver-service   LoadBalancer   10.100.157.121   6379:31669/TCP,9042:31874/TCP,5433:31376/TCP
yb-tservers          ClusterIP      None             9000/TCP,12000/TCP,11000/TCP,13000/TCP,9100/TCP,6379/TCP,9042/TCP,5433/TCP

With enableLoadbalancer=false it creates only ClusterIP


That wasn’t my experience. In my case LoadBalancers were created with external IPs. Only when setting the parameter to false were the LoadBalancers omitted, but that also omitted the ClusterIP and made the service headless (it was still of type ClusterIP, but with no IP address).

That was a typo, I meant false (I will edit the answer).
Let me try again.

Ok, so yes, it creates a headless service.
Why do you want a load balancer when you are inside the cluster?
You can connect with the service name and it will go round-robin across the tservers.

$ kubectl get services -n yb-demo-eu-west-1c
NAME          TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                                                                      AGE
yb-masters    ClusterIP   None         <none>        7000/TCP,7100/TCP                                                            17m
yb-tservers   ClusterIP   None         <none>        9000/TCP,12000/TCP,11000/TCP,13000/TCP,9100/TCP,6379/TCP,9042/TCP,5433/TCP   17m

I can connect to yb-tservers.yb-demo-eu-west-1a.svc.cluster.local and it will go to one of the tservers:

$ kubectl exec -it yb-tserver-0 -n yb-demo-eu-west-1c -- ysqlsh -h yb-tservers.yb-demo-eu-west-1a.svc.cluster.local -c 'select inet_server_addr()'
Defaulted container "yb-tserver" out of: yb-tserver, yb-cleanup
 inet_server_addr
------------------
 192.168.24.5
(1 row)

$ kubectl exec -it yb-tserver-0 -n yb-demo-eu-west-1c -- ysqlsh -h yb-tservers.yb-demo-eu-west-1a.svc.cluster.local -c 'select inet_server_addr()'
Defaulted container "yb-tserver" out of: yb-tserver, yb-cleanup
 inet_server_addr
------------------
 192.168.21.229
(1 row)

Yes, you are absolutely right. I was just wondering whether it would be better to rely on Kubernetes load balancing instead of DNS round robin (which is generally not guaranteed to work in any specific way, although the K8s internal DNS might be implemented that way).
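
A quick way to see what the headless service actually hands out, reusing the names from your example above (this assumes getent is available in the tserver image; nslookup or dig work too if installed):

$ kubectl exec -it yb-tserver-0 -n yb-demo-eu-west-1c -- getent hosts yb-tservers.yb-demo-eu-west-1a.svc.cluster.local
# one A record per ready tserver pod; which address a client picks is up to its resolver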

Usually the headless service is OK. No need to add latency through a proxy.
Note that you can also use the YugabyteDB smart drivers, which get the list of nodes from the initial connection and do the round-robin themselves.
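
For example, with the JDBC smart driver the balancing is requested via the connection URL; a sketch using the service name from the examples above (check the driver docs for the exact property name in your language’s driver):

# illustrative connection URL for the YugabyteDB JDBC smart driver
jdbc:yugabytedb://yb-tservers.yb-demo-eu-west-1c.svc.cluster.local:5433/yugabyte?load-balance=true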

Yes, I am already using the smart drivers. So client-side load balancing is in place, and I suppose for the few requests that occasionally refresh the list of nodes it really doesn’t matter whether it’s DNS round robin or K8s load balancing.
