Master Admin UI Cannot get Leader information to help you redirect

#1

Browsing to the master admin UI (port 7000) of a follower throws the following error (see image):

Sounds like I don’t have something configured right. Any idea where I can configure the redirect it is stating?

#2

Sounds like the yb-master leader is not reachable or is stuck somehow, rather than a configuration error. The yb-master logs would be helpful.

Can you share which release you are on, the replication factor, and any other information about your setup?

If this is urgent, feel free to join the slack channel (www.yugabyte.com/slack) and we can try to work through your issue.

#3

I’m using version 1.2.6.0, it is a 3 node cluster with default replication factor 3. Each node is running a yb-master and yb-tserver process.

I went through the process described at https://docs.yugabyte.com/latest/deploy/manual-deployment/

I can get to the Admin UI on the leader node but not the followers.

I’ll try to dig through the logs and see if I can find anything useful.

#4

I looked through the yb-master info and warning logs and I see the following. Could it have something to do with this?

Term 2 pre-election: RPC error from VoteRequest() call to peer 388efcb573034935b6585c55bd336c8a: Network error (yb/util/net/socket.cc:590): recvmsg error: No route to host (error 113)

This is from a follower node, the peer it is referring to is the leader node.

The log doesn’t show the IP it is trying to connect to. The route between the two nodes is fine; I can ping between them. I also turned off the firewalls between the two nodes, so I don’t think it’s a firewall issue at the moment.
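Since ping works but the RPC port could still be blocked, one way to rule that out is a plain TCP connect check against the master RPC port (7100; the leader in this thread was 10.0.0.2). A minimal bash sketch using `/dev/tcp` — the throwaway local listener is only there so the snippet runs self-contained; substitute your real leader IP and port:

```shell
#!/usr/bin/env bash
# Sketch: TCP reachability check via bash's /dev/tcp redirection.
# In this thread the leader was 10.0.0.2 and the master RPC port is 7100;
# 127.0.0.1:7654 below is a stand-in so the demo is self-contained.
check_port() {
  local host=$1 port=$2
  if timeout 2 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
    echo "$host:$port reachable"
  else
    echo "$host:$port unreachable"
  fi
}

# Throwaway local listener standing in for a remote yb-master.
python3 -c 'import socket, time
s = socket.socket()
s.bind(("127.0.0.1", 7654))
s.listen(1)
time.sleep(3)' &
sleep 0.5
check_port 127.0.0.1 7654   # listener is up, so this should be reachable
check_port 127.0.0.1 1      # port 1 is almost certainly closed
wait
```

Unlike ICMP ping, this exercises the same TCP path the masters use for their RPCs, so it also catches port-level filtering that ping would miss.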

#5

Doesn’t look like my previous post is the issue; that was from when the boxes were still booting up. I can see that later they are communicating and picked a leader. Anything else I can look for in the logs?

#6

Hi @Exocomp - This can happen transiently while a master raft election is in progress. But if it happens at steady state after a cluster restart, we will need to debug further…

Some other questions/options:

  • Does the UI show this issue for each of the three master IPs?
  • On each of the master nodes, could you run
    ./bin/yb-admin -master_addresses IP_1:7100,IP_2:7100,IP_3:7100 list_all_masters
    where IP_X is each master node’s IP, and provide the output here if possible? This will help check the master quorum status.
  • If you could file a new github issue and upload the master logs, we can also take a further look.

Thanks!

#7

Hi @bharat

  • It works for the leader but not the followers
  • Here is the output of the masters list:

        ./bin/yb-admin -master_addresses 10.0.0.2:7100,10.0.0.3:7100,10.0.0.4:7100 list_all_masters
        I0507 21:57:27.597533 11143 mem_tracker.cc:241] MemTracker: hard memory limit is 10.632786 GB
        I0507 21:57:27.597642 11143 mem_tracker.cc:243] MemTracker: soft memory limit is 10.387868 GB
        Master UUID                       RPC Host/Port  State  Role
        d1cb1dcf961b4e7f92f3bb038bae012b  10.0.0.2:7100  ALIVE  LEADER
        388efcb573034935b6585c55bd336c8a  10.0.0.3:7100  ALIVE  FOLLOWER
        a4516560b1a158107459d865bc92c44a  10.0.0.4:7100  ALIVE  FOLLOWER
#8

@bharat

Problem resolved. Apparently the curl call that gets the leader info uses the host name instead of the IP, and the host names were not being resolved correctly. The installation doc states that IPs are preferred, which is why the host names were never set up correctly.
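In case it helps anyone else hitting this, a quick sketch for checking whether the host name a master advertises actually resolves from a given node. The host name below is a stand-in (`.invalid` is a reserved TLD that never resolves, so this demo deliberately shows the failure path); replace it with the name from your own master configuration:

```shell
# Sketch: verify that an advertised master host name is resolvable from here.
# MASTER_HOST is a stand-in; ".invalid" never resolves, by design.
MASTER_HOST=yb-master-1.invalid
if getent hosts "$MASTER_HOST" > /dev/null; then
  echo "resolves: $MASTER_HOST"
else
  echo "does not resolve: $MASTER_HOST"
fi
# If it does not resolve (or resolves to the wrong address), either add an
# /etc/hosts entry on every node (e.g. "10.0.0.2 yb-master-1"), or configure
# the processes with IPs only, as the deployment docs prefer.
```

`getent hosts` goes through the same NSS resolution path (files, DNS, etc.) that most programs use, so it reflects what the curl call would actually see, unlike querying a DNS server directly.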
