
Upgrade from 2.4.x to 2.6.x how-to?

criiser Member
Looking at the release notes etc., everything looks fine and dandy - but the more advanced paths are not that well documented, IMHO.

So, does anyone have experience with transitioning from HA to VRRP-A? How did that go? What should you REALLY not forget?

And in the same fashion: taking a running system and migrating it into an aVCS private partition? Experiences? Difficulties?

Kind regards,
Christian

Comments

  • dshin Member
    edited February 2014
    Hi Criiser,

    A. The AX has a migration command that you can use to automatically transition from an HA to a VRRP-A setup. There are some limitations, so please read them before upgrading.
    1. Upgrade from 2.4 to 2.6

    Note: Prior to upgrade:

    VRRP-A does not support the following commands. If any of them are detected in the existing config, the migration process will stop. Please be aware that VRRP-A cannot be deployed in inline or L3 inline mode.

    Note: remove the following commands for the migration to work.
    - HA inline-mode
    - HA l3-inline-mode
    - HA restart-port-list (only for ha inline mode)
    - HA restart-time (only for ha inline mode)
    - HA ospf-inline (only for ha l3-inline mode)
    - RBA partition
    - Forward-L4-Packet-on-Standby
    - HA parameters for real servers/ports (ha priority cost)
    - FWLB
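
    A quick way to spot most of these blockers before running the migration is to search the running config for them. This is only a sketch, and the "| include" output filter is an assumption on my part (if your build lacks it, just page through "show running-config"):

    AX# show running-config | include ha         ! look for inline-mode, restart-*, ospf-inline, priority cost (filter assumed)
    AX# show running-config | include partition  ! look for RBA partition references (filter assumed)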

    2. After the HA to VRRP-A upgrade, the following changes will happen in the existing config:

    • “vrid default” will be disabled
    • "ha conn-mirror ip x.x.x.x" is not used in vrrp-a; the peer IP is learned automatically
    • Port tracking is now priority-based in vrrp-a; there is no server-interface/router-interface tracking as in the old HA
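
    In other words, instead of HA tracking a server or router interface directly, you attach a priority cost to the tracked object under the vrid, and failover happens when the effective priority drops below the peer's. A rough sketch only (the trunk keyword and the numbers are just an example; check the options offered under tracking-options on your build):

    vrrp-a vrid 1
    priority 200
    tracking-options
    trunk 1 priority-cost 50    ! effective priority drops to 150 if trunk 1 goes down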

    We recommend backing up the AX configs before migrating.

    3. The commands used to migrate from HA to VRRP-A are “vrrp-a ha-migration” and “vrrp-a ha-migration-restore”.
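
    As a rough sketch of the flow (back up the config first per the note above; "show vrrp-a" as the verification command is an assumption on my part):

    AX(config)# vrrp-a ha-migration            ! converts the existing HA config to VRRP-A
    AX# show vrrp-a                            ! verify the converted setup (show command assumed)
    AX(config)# vrrp-a ha-migration-restore    ! rolls the config back if the result is not what you want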

    B. For the VCS upgrade
    Upgrade the AX (Note: this assumes the HA ID and Set ID are already configured)
    1. Upgrade the AXs from the 2.4 to the 2.6 release.
    2. “Enable vcs” on all boxes.
    3. Go to config mode and configure the device ID.
    4. Choose “vcs interface” and “enable”.
    5. “vcs reload”
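
    As a rough sketch of those steps on the CLI (the exact keywords and interface numbers below are assumptions based on the step names above, so verify them against the guide first):

    AX(config)# vcs enable                  ! step 2: "enable vcs" on every box (keyword form assumed)
    AX(config)# vcs device-id 1             ! step 3: unique device ID per box, e.g. 2 on the peer (keyword assumed)
    AX(config)# vcs interface ethernet 1    ! step 4: choose the vcs interface and enable it (numbering assumed)
    AX(config)# vcs reload                  ! step 5: reload to form the virtual chassis
    AX# show vcs summary                    ! confirm which device came up as vMaster (show command assumed)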

    If you need detailed config examples on how to upgrade, refer to our System Admin Guide, page 145 onwards.
  • criiser Member
    edited February 2014
    Much appreciated!!!

    Some more TRICKS to save for future reference, gathered together with Mr. Peters:

    If you are going to use network partitioning, fix the VE / VLAN / router interfaces PRIOR to enabling VCS! :)

    IF changing VCS to use an interface other than Management, make sure "sh vcs sum" says it is using the new one(s) prior to any reboot/s or whatever you plan to do :)

    IF adding a floating IP to VCS - it does NOT get activated until you do a vcs reload...

    All this is ofc on 2.6.1-P4(build: 54)
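
    Roughly what I mean, as a sketch (the vcs floating-ip keyword and the address are just placeholders on my part - double-check the syntax on your build):

    AX# show vcs summary                      ! make sure the NEW interface is listed before any reload
    AX(config)# vcs floating-ip 192.0.2.10    ! placeholder address; keyword assumed
    AX(config)# vcs reload                    ! the floating IP only becomes active after this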

    Needless to say: FINALLY! We got network partitioning!! We got VCS!!!! A big YESS and thank you from me!!!

    /Christian
  • cwoodfield Member
    edited February 2014
    How much downtime was seen during this upgrade? Is it possible to do a 2.4 to 2.6 upgrade in a way that still leaves the VIPs available? What about enabling aVCS?
  • criiser Member
    edited February 2014
    To be honest - our small downtime (max 5 minutes on the VIPs) was due to the things noted above. If properly done, the migration to VCS will be pain-free...

    I asked A10 to assist with the migration, as this is a transition I will rarely do many of...

    Br, Christian
  • cwoodfield Member
    edited February 2014
    We successfully performed this upgrade on a production LB pair yesterday. Downtime was less than 5 seconds, although connections needed to be reestablished (this is expected). The basic procedure looked like this:

    - Isolate Standby unit (shut down inside/outside uplinks and HA interfaces).
    - Upgrade Standby unit to 2.6 code
    - Remove HA config syntax, replace with vrrp-a syntax (We only had one ha-group, which made this pretty simple)
    - Set "vrrp-a force-self-standby"
    - Turn inside/outside uplinks back up, verify VIPs passing healthchecks
    - Shut down uplinks to Active unit, then remove vrrp-a force-self-standby on the Standby quickly afterwards. If you time this properly, downtime will be minimal (I saw connection failures for about 3-4 seconds). The formerly-Standby unit will become Active for vrid 1.
    - Once the formerly-Active unit is isolated, perform the same upgrade procedure. However, set the vrrp-a priority for all vrids lower than the current Active's priority values.
    - Once vrrp-a is configured, bring the HA and uplink interfaces back up on the device. Confirm VIPs are passing healthchecks and that the unit is getting connection-sync info from the Active if you have ha-conn-mirror enabled.
    - Raise vrrp-a priorities to pre-maintenance values, restoring this device to Active status.
    - Pour yourself a beer.
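
    For the cutover itself, the toggle is roughly this (sketch only; the show command here is an assumption on my part):

    AX(config)# vrrp-a force-self-standby       ! hold the upgraded unit in Standby while you verify VIPs
    AX(config)# no vrrp-a force-self-standby    ! release it right after shutting the old Active's uplinks
    AX# show vrrp-a                             ! confirm this unit is now Active for vrid 1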

    In our case, the HA to vrrp-a conversion used the following syntax. One thing to remember is that the AX will not accept "no ha-group X" until all other references to that HA group have been removed (we only had a single ha-group configured; YMMV if you have multiple ha-groups set up):


    For each SLB virtual server:

    slb virtual-server $VIRTUAL
    no ha-group 1

    For each ip nat:
    no ip nat $NATPOOL_NAME ha-group 1

    For each floating IP:
    no floating-ip $FLOATING_IP ha-group 1

    Then:
    no ha preemption_enable
    no ha group 1
    no ha id $DEV_ID

    vrrp-a set-id 1
    vrrp-a device-id $DEV_ID
    vrrp-a disable-default-vrid
    vrrp-a interface $HA_INTERFACE_NAME
    ha conn-mirror-ip $REMOTE_HA_IP
    vrrp-a vrid 1
    floating-ip $FLOATING_IP # (repeat for each removed floating-ip statement)
    priority $PRIORITY (100 for Standby, 250 for Active)
    tracking-options
    trunk 1 priority-cost 200
    trunk 2 priority-cost 200
    !
    vrrp-a enable

    Then for each SLB virtual server:
    slb virtual-server $VIRTUAL
    vrid 1

    Then for each NAT:
    ip nat pool $NATPOOL_NAME vrid 1


    Hope this helps!