GSLB Sticky when clients use multiple DNS servers

Hi -

I hope you can help me with this situation. It seems like it would be a fairly common one.

We have a particular load-balanced internal/external application with a 43-minute session timeout. We have two SLB devices serving this application in two datacenters. In front of those, we have GSLB configured with a 60-minute sticky DNS policy to ensure that users querying for the application are always routed to the same SLB.
However, we are finding that users on our internal network are commonly losing session state and getting logged out of the application. On further investigation, we realized this is caused by our internal DNS setup. We have two DNS servers, and clients can float between them. When they query "" the response is actually a CNAME record pointing to a delegation subzone, which forwards the query to our GSLB device so it can route the request to the proper SLB. We have a 5-minute TTL set on these CNAME records so that client requests are quickly re-routed to a different SLB should one fail or become unreachable.

The problem is that because clients can query either of the two DNS servers, their queries reach the GSLB with the source IP of one DNS server or the other, and each of those sources can hold a "sticky" entry pointing at opposing SLB devices. This means that, effectively at random, a client's DNS lookup can move its session to the other SLB device and log it out of the application.

Is there a way to prevent this from the A10 perspective, or a better design that would help stop the issue?
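To illustrate what I mean, here is a minimal sketch of the failure mode as I understand it. The names, IPs, and the `gslb_answer` helper are all hypothetical; the point is only that sticky persistence keyed on the *resolver's* source IP, rather than the end client, gives a client two independent sticky entries when it floats between two internal DNS servers:

```python
import random

# Hypothetical setup: two SLB devices behind a GSLB with sticky persistence.
SLB_DATACENTERS = ["slb-dc1", "slb-dc2"]
sticky_table = {}  # resolver source IP -> SLB chosen for that resolver


def gslb_answer(resolver_ip):
    """Return the SLB the GSLB answers with for a query from resolver_ip.

    The first query from a given resolver creates a sticky entry
    (the 60-minute policy); later queries from that same resolver
    consistently get the same SLB back.
    """
    if resolver_ip not in sticky_table:
        sticky_table[resolver_ip] = random.choice(SLB_DATACENTERS)
    return sticky_table[resolver_ip]


# A client that floats between the two internal DNS servers: each time the
# 5-minute CNAME TTL expires, its next lookup may go through either resolver.
resolvers = ["10.0.0.53", "10.0.1.53"]
answers_seen = {gslb_answer(random.choice(resolvers)) for _ in range(20)}

# Each resolver individually is perfectly sticky, yet the client can still
# see answers for both SLBs across lookups, bouncing its session.
print(sorted(answers_seen))
```

Per-resolver stickiness holds (the same resolver IP always maps to the same SLB), but the client's effective answer depends on which resolver handled the lookup, which matches the random logouts we are seeing.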