On Wed, Nov 12, 2008 at 10:23 PM, Eris Discordia
<[EMAIL PROTECTED]> wrote:
> First off, thank you so much, sqweek. When someone on 9fans tries to put
> things in terms of basic abstract ideas instead of technical ones I really
> appreciate it--I actually learn something.

 You're welcome, but don't mistake me for someone with the background and
experience with plan 9 to comment with any sort of authority.

>>  It doesn't stop at 9p:
>> * plan 9 doesn't need to bother with NAT, since you can just
>> import /net from your gateway.
>
> I understand that if you import a gateway's /net on each computer in a
> rather large internal network you will be consuming a huge amount of mostly
> redundant resources on the gateway. My impression is that each imported
> instance of /net requires a persistent session to be established between the
> gateway and the host on the internal network. NAT in comparison is naturally
> transient.

 I'm not sure there's as much difference as you make it out to be. On the
one hand, you have a NAT gateway listening for tcp/ip packets, and on
the other hand you have an open tcp/ip connection and a file server
waiting for 9p requests. It's not as though 9p is wasting bandwidth
chatting away while there's no activity, so the only cost is the
tcp/ip connection to each client on the network, which shouldn't
qualify as a huge amount of resources.
 If it does, you have the same problem with any service you want to
provide to the whole network, so the techniques you use to solve it
there can be applied to the gateway. So *maybe* you could get away
with a weaker machine serving NAT instead of /net, but it would come
at a cost (the sort of cost that's hard to recognise as a cost
because we're so used to it. Every time I remind myself that /net
removes the need for port forwarding I get shivers).
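
 To make it concrete, the whole "NAT replacement" on an internal machine
amounts to something like the line below (a sketch; "mygateway" stands in
for whatever your gateway is actually called, and assumes it exports its
/net):

    import mygateway /net    # the gateway's network stack is now our /net

After that, anything that dials through /net goes out via the gateway's
stack; there's no address rewriting and no forwarding rules to maintain.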

> With an imported /net since there's no packet rewriting implemented
> on the network layer (e.g. IP) and because the "redirection" occurs in the
> application layer there's no chance of capturing spoofed packets except with
> hacking what makes /net tick (the kernel?).

 What makes /net tick depends on what you export as /net. The kernel
serves your basic /net, yes, but there's nothing to stop you putting a
userspace file server on top of that to do whatever filtering you
like.
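
 For instance (a sketch only; "filterfs" is a made-up name, bind is not),
you could interpose a filtering server between the kernel's /net and
whatever you export:

    filterfs /net /n/filter    # hypothetical server presenting a filtered view of /net
    bind /n/filter /net        # from here on, this namespace sees the filtered /net

Export that namespace instead of the raw one and clients never touch the
kernel's /net directly.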

> Does that mean a new design "from scratch" is
> always bound to be better suited to current constraints?

 It very often is in my experience. But it's also very easy to leave
something important out of "current constraints" when designing from
scratch, or ignore the lessons learned by the previous iteration.

> Also, if you think
> UNIX and clones are flawed today because of their history and origins what
> makes you think Plan 9 doesn't suffer from "diseases" it contracted from its
> original birthplace? I can't forget the Jukebox example.

 UNIX seems to have coped with changing constraints by bolting more
and more junk on the side...
* "Whoa, here comes a network, we're going to need some more syscalls!"
* "Non-english languages? Better rig up some new codepages!"

> As pointed out previously on this same thread,
> in jest though, the Linux community always finds a way to exhaust a
> (consistent) model's options, and then come the extensions and "hacks."
> That, however, is not the Linux community's fault--it's the nature of an
> advancing science and technology that always welcomes not only new users but
> also new types of users and entirely novel use cases.

 It's a matter of approach. Linux takes what I like to call the cookie
monster approach, which is MORE MORE MORE. More syscalls, more ioctls,
more program flags, more layers of indirection, a constant enumeration
of every use case. Rarely is there a pause to check whether several
use cases can be coalesced into a single more general way of doing
things, or to consider whether the feature could be better implemented
elsewhere in the system. This has a tendency to disrupt conceptual
integrity, which hastens the above process.

 These days the dearth of developers makes it difficult to distinguish
any daring developments on plan 9, but during the decades a different
derivation has been demonstrated.
 *ahem* Plan 9 seems to have more of a tendency to adapt. I'm sure the
adoption of utf-8 and the switch from 9p to 9p2000 aren't the only
examples of system-wide changes. The early labs history is rife with
stories of folk joining and trying out all sorts of new stuff. The
system feels like it has a more experimental nature - things that
don't work get dropped and lessons are learned from mistakes. Which is
sadly somewhat rare in software.

 More to the point, I've yet to see a richer set of abstractions come
out of another system. Private namespaces, resources as files... they
might be ancient ideas, but everyone else is still playing catch up.
They might not be the ultimate ideal, but if we push them far enough
we might learn something.
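
 The canonical party trick still makes the point (per-process namespaces
plus resources-as-files; "othermachine" is just a placeholder name):

    rfork n                      # rc builtin: give this shell a private copy of the namespace
    import othermachine /proc    # mount the other machine's /proc over ours
    ps                           # lists othermachine's processes; nobody else's view changes

Three lines, and no special remote-process-inspection protocol needed.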

>> The problem is it forces the server and client to synchronise on every
>> read/write syscall, which results in terrible bandwidth utilisation.
>
> An example of a disease contracted from a system's birthplace.

 You're pretty quick to describe it as a disease. Plan 9 learned a
lot from the mistakes of UNIX, but the base syscalls are something
that stuck. I wouldn't expect that to happen without good reason.
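
 For what it's worth, the cost is easy to put rough numbers on (made-up
but plausible figures): if each read has to complete before the next is
issued, throughput is capped at the message size divided by the round
trip time, so an 8K msize over a 100ms link tops out around
8192/0.1 = 80KB/s, no matter how much bandwidth the link has.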

 'scuse me if I'm silent for a while, I've been spending too much time
pondering and it's starting to affect my work. *9fans breathes a sigh
of relief*
-sqweek
