Hi -
This is probably a stupid question, but I have a master > slave setup for our GSLB configuration. We do both GSLB and SLB on each device (one per datacenter), as most people do.
We do not do any health monitoring at the GSLB level specifically. We want to rely on the up/down status of the VIP itself. Here is our example configuration:
DR SLB:
slb server test 1.2.3.4
   health-check-disable
   port 80 tcp
exit
slb service-group test-80 tcp
   member test 80
   exit
exit
slb virtual-server testdr-vip 11.11.11.11
   port 80 http
      service-group test-80
      source-nat auto
exit
Production SLB:

slb server test 1.2.3.4
   health-check-disable
   port 80 tcp
exit
slb service-group test-80 tcp
   member test 80
   exit
exit
slb virtual-server testpd-vip 10.10.10.11
   port 80 http
      service-group test-80
      source-nat auto
exit
gslb service-ip testpd-vip 10.10.10.11
   port 80 tcp
exit
gslb service-ip testdr-vip 11.11.11.11
   port 80 tcp
exit
gslb zone test.dns.com
   service 53 test   (<-- we aren’t doing any special monitoring at the GSLB level, so we just use 53 to signify DNS)
      dns-a-record testpd-vip static
      dns-a-record testdr-vip static
exit
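For reference, the status I describe below comes from roughly these commands (names from memory; they may vary a bit by ACOS version, so treat this as a sketch):

show slb virtual-server          (local up/down status of the VIP on each device)
show gslb zone test.dns.com      (which A records are currently being handed out)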
So we have two SLB/GSLB nodes, one per datacenter, each running its own VIP. The GSLB configuration lives on my master device (production), with the service-ips and zone records defined there, and I can see this data being replicated to the GSLB slave device.
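The replication I'm describing shows up in the group status; a sketch, assuming the usual show command (the name may differ on your ACOS version):

show gslb group      (master/slave roles, plus whether the slave has picked up the config)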
I also see that if I lose the production VIP for some reason, its service-ip goes down with it; this is by design. HOWEVER, if the DR VIP goes down, the master GSLB device never detects it. Strangely, this behavior only appears in our DMZ environment, so I suspect a firewall issue is in play.
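In case it matters for the answer: my understanding (worth verifying against the ACOS docs for your version) is that the controller learns remote VIP state over A10's GSLB protocol session to each site device, TCP port 4149 by default, so the inter-DMZ firewall would need to permit that session at least from the controller toward the site device. On the controller I'd expect something like:

show gslb site          (is the session to the DR site device up?)
show gslb service-ip    (does 11.11.11.11 reflect what the DR box reports?)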
In our internal environment, a down VIP is detected in both directions, as it should be.
Is there a communication requirement between the DMZs for this to work? The production GSLB device can reach the DR device, but the reverse is not true.
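For completeness, a crude illustration of the reachability I mean (hypothetical prompts; substitute whatever addresses your devices actually peer on):

A10-PROD# ping 11.11.11.11     <- succeeds
A10-DR# ping 10.10.10.11       <- times out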