VRRP-A and aVCS Configuration

a1m0nte Member

Hello, I'm looking for some advice on a pair of AX1030's configured with VRRP-A and aVCS to be deployed.

The cluster is configured to use VRRP-A and aVCS on interface Ethernet 6. Both devices appear to be working according to show vrrp-a and show vcs summary. The devices are:

  • AX1-Active-vMaster[1/1]
  • AX2-Standby-vBlade[1/2]

However, when I reboot the pair, both devices come back up, but AX2 logs the following message:

AX2 a10logd: [VCS]<3> Failed to establish connection with vMaster: timeout,NA.

I have to manually disable and re-enable VRRP-A and aVCS on AX2, which can take multiple tries before it works.

Is this expected behavior, or what settings should I check to make sure everything is set up correctly? Thank you.

Comments

  • john_allen Member

    This sounds more like a network issue than a config issue, especially since it eventually does work as configured. Is the network on Eth 6 stand-alone, or is it shared with other nodes? Is it going through a switch of some sort? Usually this is just a direct cable connection between the two nodes. This could also be a timing issue, as the master is most likely taking more time to come up and get everything in the configuration going.

  • a1m0nte Member

    Thanks for the input. Eth6 uses a standalone network directly connected between the two nodes; at the moment the nodes are not connected to any other network. I did try enabling the aVCS option 'Failure Retry Count Forever', which unfortunately does not resolve the issue; I still have to do a vcs reload on one or both nodes for it to work. I forgot to mention I'm running 4.1.4-p13. I'm still trying a few things, but there might be something I'm missing. Thanks again.
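
    For reference, the CLI form of that option in my running config is:

     vcs failure-retry-count forever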

  • mdunn Member ✭✭

    This is not expected; both VRRP-A and aVCS should come up automatically. Let's check the status and configuration of both:

    show vrrp-a
    show vcs summary
    show run vrrp-a
    show run vcs
    


  • a1m0nte Member

    Thank you for checking. Here's the output from those commands.

    This was run from the standby/blade node since that is the one I had connected to via console.

    The floating IP has been changed, but it represents an external IP we use as the gateway to the next hop.

    AX2-Standby-vBlade[1/2]#show vrrp-a
    vrid 0
    Unit      State      Weight     Priority
    2 (Local)    Standby     65534      150           *
            became Standby at: Mar 12 07:05:19 2024
                 for 0 Day, 8 Hour,11 min
    1 (Peer)    Active     65534      255
    vrid that is running: 0
    


    AX2-Standby-vBlade[1/2]#show vcs summary
    
    VCS Chassis:
      VCS Enabled:                Yes
      Chassis ID:                1
      Floating IP:                7.7.7.1
      Mask:                   255.255.255.0
      Multicast IP:               224.0.0.210
      Multicast Port:              41217
      Version:                  4.1.4-GR1-P13.b44
    
    Members(* means local device):
     ID State    Priority IP:Port                    Location
    -------------------------------------------------------------------------------
     1  vMaster   200   172.16.1.1:41216               Remote
     2  vBlade(*)  199   172.16.1.2:41216               Local
    Total: 2
    


    AX2-Standby-vBlade[1/2]#show run vrrp-a
    !Section configuration: 458 bytes
    !
    vrrp-a common
     device-id 2
     set-id 1
     enable
    !
    vrrp-a vrid 0
     floating-ip 7.7.7.1
     device-context 1
      blade-parameters
       priority 255
     device-context 2
      blade-parameters
       priority 150
    !
    vrrp-a interface ethernet 1/6
    !
    vrrp-a interface ethernet 2/6
    !
    vrrp-a preferred-session-sync-port ethernet 1/6
    !
    device-context 1
     vrrp-a peer-group
      peer 172.16.1.2
    !
    device-context 2
     vrrp-a peer-group
      peer 172.16.1.1
    


    AX2-Standby-vBlade[1/2]#show run vcs
    !Section configuration: 317 bytes
    !
    device-context 1
     vcs enable
    !
    device-context 2
     vcs enable
    !
    vcs floating-ip 7.7.7.1 255.255.255.0
    vcs failure-retry-count forever
    !
    vcs device 1
     priority 200
     interfaces ve 50
     interfaces ethernet 6
     enable
    !
    vcs device 2
     priority 199
     interfaces ve 50
     interfaces ethernet 6
     enable
    !
    
  • mdunn Member ✭✭

    Thanks for the information. Can you also provide the output of "show run interface"?

    There are a couple of things that stand out to me:

    1. VRRP-A is configured for unicast communication; this is the "peer-group" config. In such a configuration, the peer-group should contain both devices' IP addresses. If unicast is not required, the peer-group may be removed and multicast will be used instead. This may be causing some issues (see the sketch after this list).
    2. The VRRP-A preferred session sync port is not defined on ethernet 2/6. Probably not related to your issue, but it cleans the config up a bit.
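
    If it helps, here is a rough sketch of what those two changes could look like from config mode, based on the device-context syntax in your "show run vrrp-a" output. The "no" forms are my assumption, so please double-check them on 4.1.4 before applying:

     ! remove the unicast peer entries so multicast is used (assumed "no" syntax)
     device-context 1
      vrrp-a peer-group
       no peer 172.16.1.2
     !
     device-context 2
      vrrp-a peer-group
       no peer 172.16.1.1
     !
     ! define the preferred session sync port on the second device as well
     vrrp-a preferred-session-sync-port ethernet 2/6
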
  • a1m0nte Member

    Thanks for the input. I'll take a look at the peer-group config and the preferred session sync port.

    AX2-Standby-vBlade[1/2]#show run interface
    !Section configuration: 792 bytes
    !
    device-context 1
     interface management
      flow-control
      ip address 172.31.31.31 255.255.255.0
    !
    interface ethernet 1/1
     enable
    !
    interface ethernet 1/2
    !
    interface ethernet 1/3
    !
    interface ethernet 1/4
    !
    interface ethernet 1/5
    !
    interface ethernet 1/6
     enable
    !
    interface ethernet 1/7
    !
    interface ethernet 1/8
    !
    interface ethernet 2/1
    !
    interface ethernet 2/2
    !
    interface ethernet 2/3
    !
    interface ethernet 2/4
    !
    interface ethernet 2/5
    !
    interface ethernet 2/6
     enable
    !
    interface ethernet 2/7
    !
    interface ethernet 2/8
    !
    interface ve 1/10
     ip address 7.7.7.8 255.255.255.0
    !
    interface ve 1/50
     ip address 172.16.1.1 255.255.255.0
    !
    interface ve 2/10
     ip address 7.7.7.9 255.255.255.0
    !
    interface ve 2/50
     ip address 172.16.1.2 255.255.255.0
    !
    
  • a1m0nte Member

    Hello @mdunn, I removed the members of the peer-group config, added the preferred session sync port for 2/6, and tested rebooting the nodes. Both nodes now come back online with VRRP-A and aVCS working. I noticed it takes about 10 minutes for both nodes to sync, but @john_allen mentioned the time may vary based on how long the master takes to come online, so that's just par for the course. Thanks very much to you both. Now on to further configuration for the actual load-balancing services.
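
    For anyone finding this later, the relevant vrrp-a lines after those changes look roughly like this (reconstructed from the earlier output, so treat it as illustrative rather than a paste of the running config):

     vrrp-a interface ethernet 1/6
     !
     vrrp-a interface ethernet 2/6
     !
     vrrp-a preferred-session-sync-port ethernet 1/6
     !
     vrrp-a preferred-session-sync-port ethernet 2/6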
