Hello, I’m looking for some advice on a pair of AX1030s that are being deployed with VRRP-A and aVCS.
The cluster is configured to use VRRP-A and aVCS on interface Ethernet 6, and both devices appear to be working according to show vrrp-a and show vcs summary. These are the devices:
AX1-Active-vMaster[1/1]
AX2-Standby-vBlade[1/2]
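For reference, the relevant VRRP-A/aVCS config is roughly as follows. This is a sanitized sketch from memory, so the device-id, priority, and floating IP below are placeholders rather than the exact running config:

vrrp-a common
  device-id 1
  set-id 1
  enable
!
vrrp-a vrid 0
  floating-ip 203.0.113.1
  blade-parameters
    priority 200
!
vcs enable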
However, when I reboot, both devices restart, but AX2 gives the following message.
I have to manually disable and re-enable VRRP-A and aVCS on AX2, which can take multiple tries to work.
Is this expected behavior, or what settings should I check to make sure it is set up correctly? Thank you.
This sounds more like a network issue than a config issue, especially since it eventually does work as configured. Is the network on Eth 6 stand-alone, or is it being shared with other nodes? Is it going through a switch of some sort? Usually this is just a direct cable connection between the two nodes. It could also be a timing issue, as the Master is most likely taking more time to come up and bring everything in the configuration online.
Thanks for the input. Eth 6 is a stand-alone network directly connected between the two nodes; at the moment the nodes are not connected to any other network. I did try enabling the aVCS option ‘Failure Retry Count Forever’, which unfortunately does not resolve the issue; I’d still have to do a vcs reload on one or both nodes for it to work. I forgot to mention I’m running 4.1.4-p13. I’m still trying a few things, but it might be something I’m missing. Thanks again.
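For anyone following along, the recovery step that does work is a vcs reload from the CLI. The ‘Failure Retry Count Forever’ toggle was something I enabled via the GUI, and I’m only guessing at its CLI equivalent below, so don’t take that keyword as gospel:

! guessed CLI equivalent of the 'Failure Retry Count Forever' GUI toggle
vcs failure-retry-count-forever
!
! manual recovery step (exec mode) that does get the stuck node back in sync
vcs reload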
Thank you for checking. Here’s the output from those commands.
This was run from the standby/blade node, since that’s the one I had connected via console.
The floating IP has been changed, but it represents an external IP we use as the gateway to the next hop.
AX2-Standby-vBlade[1/2]#show vrrp-a
vrid 0
  Unit        State      Weight   Priority
  2 (Local)   Standby    65534    150 *
      became Standby at: Mar 12 07:05:19 2024
      for 0 Day, 8 Hour, 11 min
  1 (Peer)    Active     65534    255
vrid that is running: 0

AX2-Standby-vBlade[1/2]#show vcs summary
Members(* means local device):
ID  State       Priority  IP:Port            Location
-------------------------------------------------------------------------------
1   vMaster     200       172.16.1.1:41216   Remote
2   vBlade(*)   199       172.16.1.2:41216   Local
Total: 2
Thanks for the information. Can you also provide the output of “show run interface”?
There are a couple of things that stand out to me:
VRRP-A is configured for unicast communication; this is the “peer-group” config. In such a configuration, the peer-group should contain both devices’ IP addresses. If unicast is not required, the peer-group may be removed and multicast will be used instead. This may be causing some of your issues.
The VRRP preferred session-sync port is not defined on 2/6. This is probably not related to your issue, but it cleans the config up a bit. I’ve sketched both changes below.
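Roughly along these lines; treat this as a sketch rather than exact syntax, since keywords can vary between ACOS releases, so verify against the 4.1.4 CLI reference. The peer IPs below are the ones from your show vcs summary output:

! Option A: keep unicast and list BOTH peers in the peer-group
vrrp-a peer-group
  peer 172.16.1.1
  peer 172.16.1.2
!
! Option B: remove the peer-group entirely so VRRP-A falls back to multicast
no vrrp-a peer-group
!
! Define the preferred session-sync port on device 2 as well
vrrp-a preferred-session-sync-port ethernet 2/6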
Hello, I removed the members of the peer-group config, added the preferred session-sync port for 2/6, and tested rebooting the nodes. Both nodes now come back online successfully with VRRP-A and aVCS working. I noticed it takes about 10 minutes for the two nodes to sync, but as mentioned above, the time may vary based on how long the Master takes to come online, so that’s par for the course. Thanks very much to you both. Now on to further configuration for the actual load-balancing services.