I don't quite follow. If by resources you mean process-related resources,
then I would agree. My very first comment didn't have anything to do
with process-related resources. And for the TCP-related resources I
maintain that the amount of overhead in Plan 9's case is definitely
comparable to Linux's.
The key phrase is: transience versus persistence. From the very beginning
NAT could have been implemented using persistent connections, but it wasn't,
because the problem at hand would not yield to that solution. In fact,
there exist methods of "NATting" with exactly that approach--namely VPNs,
PPTP, L2TP, and any "tunneling" application like GNU httptunnel--but for
obvious reasons they haven't supplanted NAT.
According to the RFC that describes it (RFC 1631), NAT was adopted primarily
to address the "temporary" scarcity of IP addresses (before IPv6 took off,
and it didn't--how many IPv6 connections do you make outside of your
organization in a day?). It was expected that large organizations without
Class A ranges would soon want one--GM had one, why not them?--and nearly
all Class A's were already allocated. NAT was invented so that late-coming
organizations could share _one_ private Class A range (10.x.x.x, alongside
the smaller 172.16.x.x and 192.168.x.x blocks set aside in RFC 1918).
Every sensible NAT solution must be implemented with that in mind--not that
existing ones have been. Even imagining persistent connections from an
entire Class A network makes one shudder. Needless to say, havoc is wreaked
_long_ before all 16-million-odd hosts need persistent connections.
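For scale, a back-of-envelope sketch (the 8 KiB of gateway state per
persistently connected host is an assumed illustrative figure, not a
measurement):

```shell
#!/bin/sh
# Hosts in one Class A (/8) network, minus the network and
# broadcast addresses:
hosts=$(( (1 << 24) - 2 ))
echo "hosts in a /8: $hosts"

# If the gateway reserved, say, 8 KiB of protocol-stack state per
# host holding a persistent connection (assumed number):
per_host_bytes=8192
total_gib=$(( hosts * per_host_bytes / 1024 / 1024 / 1024 ))
echo "state at 8 KiB/host: ~${total_gib} GiB"
```

Even at a modest per-host figure, the total lands in the hundred-gigabyte
range--absurd for a 2008-era gateway.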
The costliness of this type of persistence must be understood in light of
the fact that most hosts on most organizations' internal networks
rarely--but not never--access the outside world, and when they do they only
need some traffic routed to and fro, not an entire copy of the protocol
stack reserved for them in the gateway's memory. The /net import doesn't
scale (well, or at all?) because a side effect of generality is lack of
granularity.
--On Saturday, November 15, 2008 9:47 PM -0800 Roman Shaposhnik
<[EMAIL PROTECTED]> wrote:
On Nov 15, 2008, at 2:13 PM, Micah Stetson wrote:
I'm unclear as to what "amount of state" iptables needs to keep
After you do something like:
# iptables -t nat -A POSTROUTING -p TCP -j MASQUERADE
the Linux kernel module called nf_conntrack starts allocating
data structures to do its job. I'll leave it up to you to see how much
memory gets wasted on each connection. Here's a hint,
though: /proc/net/nf_conntrack
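(One way to follow that hint--a sketch, assuming a kernel with the
nf_conntrack module loaded; older kernels use the ip_conntrack names
instead, and per-entry sizes vary by kernel version:)

```shell
#!/bin/sh
# Estimate conntrack's memory footprint on a Linux NAT box.

conntrack_kib() {
    # entries * bytes-per-entry, reported in KiB (integer division)
    echo $(( $1 * $2 / 1024 ))
}

if [ -r /proc/sys/net/netfilter/nf_conntrack_count ]; then
    count=$(cat /proc/sys/net/netfilter/nf_conntrack_count)
    # slabinfo column 4 is the object size in bytes
    objsize=$(awk '/^nf_conntrack /{print $4; exit}' /proc/slabinfo)
    echo "$count tracked connections, ~$(conntrack_kib "$count" "$objsize") KiB"
else
    echo "nf_conntrack not loaded on this machine"
fi
```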
I don't think Plan 9 is keeping any less state, is it?
Not really, no. My point was that the amount of state in a typical
Linux-based NAT box is quite comparable, and thus can't be used to
bash Plan 9's approach as visibly less efficient as far as TCP
overhead goes.
Plan 9 does need one extra connection per client and a process (or
two?) to do the export.
Yes, it does need one extra connection for /net to be imported. Depending
on the setup, that extra connection could be reduced to one per host
importing /net. I specifically didn't address the point of extra
processes running on the gateway simply because I agree -- there's a price
there that Linux doesn't pay (although, as I've learned from Bruce,
Inferno has reduced the price of running identical processes quite
significantly by implementing silent page sharing).
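(For readers unfamiliar with the Plan 9 side of this: the one extra
connection per host comes from mounting the gateway's network stack over
the client's own. A sketch, with `gw' standing in for the gateway's name:

```rc
# On a Plan 9 terminal behind the gateway (rc shell):
import gw /net      # mount gw's /net over the local one
# From here on, dialing tcp!example.com!80 goes out through gw's
# stack, so outbound connections carry the gateway's address.
```

That single import is what exportfs on the gateway has to keep a process
around for.)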
I think Eris is saying that this makes Plan
9's resource requirements grow with the number of hosts behind the
gateway -- not just with the number of connections through it like
Linux.
I don't quite follow. If by resources you mean process-related resources,
then I would agree. My very first comment didn't have anything to do
with process-related resources. And for the TCP-related resources I
maintain that the amount of overhead in Plan 9's case is definitely
comparable to Linux's.
Thanks,
Roman.