Smart Flow Control limits
One of the settings on a Connection Reuse template is "Smart Flow Control", which queues packets when the per-server connection limit is reached (otherwise, packets that would exceed the limit are dropped). The queue depth is configurable; once the queue itself is full, packets are again dropped.

I realize that a very large queue means each new connection waits its turn in the queue before being handled, adding delay, and that you're still vulnerable to a sufficiently large DoS. That said, suppose responding reliably but slowly is acceptable behavior for the application during an anticipated, legitimate surge. Is there any downside to setting the queue depth to some very large number? Is there an implicit limit, or a particular resource that a large queue will deplete?
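To make the trade-off concrete, here is a rough back-of-the-envelope model (my own sketch, not the device's actual implementation) of what a full queue costs in wait time and memory. The per-entry buffer size and the service rate are assumptions I made up for illustration:

```python
# Hypothetical model of a per-server connection queue. Both constants
# below are illustrative assumptions, not values from the device.
ENTRY_BYTES = 2048    # assumed buffer overhead per queued packet
SERVICE_RATE = 500    # assumed connections completed per second

def queue_cost(depth):
    """Worst-case wait (seconds) and memory (MB) for a full queue of `depth`.

    By Little's law, the last arrival in a full queue waits roughly
    depth / service_rate before it is handled.
    """
    wait_s = depth / SERVICE_RATE
    mem_mb = depth * ENTRY_BYTES / 1e6
    return wait_s, mem_mb

for depth in (1_000, 100_000, 10_000_000):
    wait, mem = queue_cost(depth)
    print(f"depth={depth:>10,}  wait~{wait:,.0f}s  memory~{mem:,.1f}MB")
```

Under these made-up numbers, memory grows linearly with depth but stays modest, while the worst-case wait grows into the hours at very large depths, which is what makes me wonder whether memory or some other resource is the real constraint.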