On 2021-06-06, Patrick Dohman <dohmanpatr...@gmail.com> wrote:
> Perhaps it has something to do with Citrix being a dinosaur.
> God forbid the powers that be choose on premise unix.
> Regards
> Patrick

Your message doesn't appear to relate in any way to the message to which you're 
replying.


>> On Jun 4, 2021, at 6:43 AM, Stuart Henderson <s...@spacehopper.org> wrote:
>> 
>> On 2021/06/03 15:04, Chris Cappuccio wrote:
>>> Stuart Henderson [s...@spacehopper.org] wrote:
>>>> 
>>>> Oh watch out with sloppy. Keep an eye on your state table size.
>>> 
>>> Really? Wouldn't sloppy keep the state table smaller if anything since it's 
>>> tracking less specifically?
>>> 
>>> Anyway, I use sloppy across four boxes that run in parallel with pfsync.
>>> There could easily be 10,000 devices behind it at any given time. I keep my
>>> state table limit at 1,000,000; it's around 300,000 during today's lighter
>>> traffic. I had to switch to sloppy after moving to several boxes in
>>> parallel, and I haven't noticed it making any significant difference.
>>> 
>>> Chris
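[A setup like the one Chris describes (sloppy state tracking across parallel boxes synced with pfsync) can be sketched in pf.conf roughly as below; the interface names and limit are illustrative, not taken from his actual configuration:

```
# Sketch of pf.conf for parallel firewalls sharing state via pfsync.
# Assumed names: em0 = external, em1 = internal.
set limit states 1000000

# Sloppy state tracking relaxes TCP sequence-number window checking,
# which is needed when the two directions of a flow may traverse
# different boxes in the cluster.
pass in  on em0 proto tcp keep state (sloppy)
pass out on em1 proto tcp keep state (sloppy)
```

pfsync itself is enabled on its own interface, e.g. `ifconfig pfsync0 syncdev em2 up`, so states created on one box are replicated to its peers.]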
>> 
>> The problem I had was in conjunction with synfloods. I didn't get
>> captures of everything to figure it out (it was in 2018 and my
>> network was in flames; with the state table full, BGP sessions were
>> being dropped and not re-established), but I think what happened
>> was this:
>> 
>> spoofed SYN to real server behind PF
>> SYN+ACK from server
>> 
>> and the state entry ended up as ESTABLISHED:ESTABLISHED where it
>> remained until the tcp.established timer expired (24h default
>> or 5h with "set optimization aggressive").
>> 
>> My "fix" was to move as much as possible to "pass XX flags any no state"
>> but that's clearly not going to help with what Denis would like to do.
>> (fwiw - I'm not doing flow monitoring regularly, but when I do it's
>> usually via sflow on switches instead, which solves some problems,
>> though it's only possible in some situations).
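[Stuart's stateless workaround can be sketched as below; the address and port are placeholders, since the thread only shows the rule shape as "pass XX flags any no state":

```
# Sketch: stateless filtering so a synflood cannot fill the state table.
# "flags any" is required because, without a state entry, the default
# "flags S/SA" matching would only pass initial SYNs and block the rest
# of each conversation.
pass in  on em0 proto tcp from any to 192.0.2.10 port 443 flags any no state
pass out on em0 proto tcp from 192.0.2.10 port 443 to any  flags any no state
```

The trade-off is that anything that depends on state entries, including the flow export Denis was after, no longer applies to that traffic.]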
>> 