Hello,
I was running an experiment with pf and encountered an unusual case.
In a NAT setup, is it expected to have multiple identical entries in the
source tracking table?
# pfctl -sS
192.168.232.1 -> 192.168.0.104 ( states 0, connections 0, rate 0.0/0s )
192.168.232.1 -> 192.168.0.104 ( states 0, connections 0, rate 0.0/0s )
192.168.232.1 -> 192.168.0.104 ( states 0, connections 0, rate 0.0/0s )
There are actually three identical bindings stuck in the source tracking
table. The vmstat output also confirms that memory has been allocated
separately for the three entries:
# vmstat -z | egrep 'ITEM|^pf'
ITEM                 SIZE     LIMIT  USED  FREE   REQ  FAIL  SLEEP
pf mtags:              48,        0,    0,    0,    0,    0,    0
pf states:            296,  8000005,    0, 1313, 2279,    0,    0
pf state keys:         88,        0,    0, 2655, 4558,    0,    0
pf source nodes:      136,  1500025,    3,  142,    7,    0,    0
pf table entries:     160,   800000,    4,  121,   47,    0,    0
pf table counters:     64,        0,    0,    0,    0,    0,    0
pf frags:             112,        0,    0,    0,    0,    0,    0
pf frag entries:       40,   100000,    0,    0,    0,    0,    0
pf state scrubs:       40,        0,    0,    0,    0,    0,    0
I can reproduce this behavior reliably: each time I reload pf.conf and run
traffic through the box, one more entry is added to the source tracking table.
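For reference, here is a sketch of the reproduction cycle as I run it. The
pfctl flags are the standard ones from pfctl(8); the ruleset path and the
flush-based workaround at the end are just what I use on my box, not a
suggested fix:

```shell
# Reload the ruleset; this is the step that appears to leave the old
# sticky-address source node behind while a new one gets created.
pfctl -f /etc/pf.conf

# Run some traffic through the box from the tracked host
# (any flow originating from 192.168.232.1 will do).

# Inspect the source tracking table; after each reload-plus-traffic
# cycle one more identical entry shows up.
pfctl -sS

# Workaround I have been using meanwhile: flush the source nodes
# explicitly so the stale bindings do not accumulate.
pfctl -F Sources
```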
Here is the nat rule:
# pfctl -vsn
nat on em0 inet from <internal-net> to any -> <external-net> round-robin
sticky-address
[ Evaluations: 368       Packets: 50        Bytes: 2084       States: 0     ]
[ Inserted: uid 0 pid 6418 State Creations: 28 ]
and timers:
# pfctl -st
tcp.first 10s
tcp.opening 10s
tcp.established 4200s
tcp.closing 10s
tcp.finwait 15s
tcp.closed 10s
tcp.tsdiff 30s
udp.first 60s
udp.single 30s
udp.multiple 60s
icmp.first 20s
icmp.error 10s
other.first 60s
other.single 30s
other.multiple 60s
frag 30s
interval 30s
adaptive.start 0 states
adaptive.end 0 states
src.track 3600s
Is this behavior expected, or have I hit a bug?
--
Babak