On Fri 25-10-2002 at 10:25, Niels Möller wrote:
> Sure, but do you really want them to run fully separately? It's common
> (although not required by any standard, afaik), that an ipv6 socket
> bound to the ipv6 wildcard interface should be able to accept ipv4
> connections.

In my opinion, we can bind a port on a single network address. We could
have a "binding interface" which allows binding to a union of
interfaces: if we want to bind to only one interface, the union is
composed of that single interface; if we want to bind to all
interfaces, the union is composed of all interfaces. All binding
combinations would be available. BSD allows choosing which interface(s)
to bind to, and this is a nice feature. It is then the responsibility
of the human to decide on which interface(s) the program will bind.
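To make the distinction concrete in plain BSD socket terms (nothing
Hurd-specific here, and the port and address below are only example
values):

  #include <string.h>
  #include <arpa/inet.h>
  #include <netinet/in.h>
  #include <sys/socket.h>

  /* Bind to a single interface by using that interface's address.  */
  int
  bind_one_interface (int fd)
  {
    struct sockaddr_in sa;

    memset (&sa, 0, sizeof sa);
    sa.sin_family = AF_INET;
    sa.sin_port = htons (8080);                      /* example port */
    inet_pton (AF_INET, "192.0.2.1", &sa.sin_addr);  /* example address */
    return bind (fd, (struct sockaddr *) &sa, sizeof sa);
  }

  /* Bind to the union of all interfaces by using the wildcard address.  */
  int
  bind_all_interfaces (int fd)
  {
    struct sockaddr_in sa;

    memset (&sa, 0, sizeof sa);
    sa.sin_family = AF_INET;
    sa.sin_port = htons (8080);
    sa.sin_addr.s_addr = htonl (INADDR_ANY);
    return bind (fd, (struct sockaddr *) &sa, sizeof sa);
  }

A "binding interface" as described above would only have to generalize
this: instead of "one address or all of them", the caller names the
exact set of interfaces it wants the port on.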
> arp is a lot different from icmp.

Sure.

> arp is a link-layer mechanism that
> only the ip-over-ethernet code should need to worry about.

I don't agree. According to the RFCs, arp is a generic protocol which
allows many layer 3 protocols to find data link addresses; it is not
limited to ethernet/ip. This is an important feature, and we should
think about the best way to use it.

You give me an idea (thanks): arp has to know about each layer 2
interface (maybe they will have a way to register themselves), and all
layer 3 translators will use it. In this way, maybe _one_
non-replaceable arp translator could run in the system; it would
communicate with the layer 2 translators and offer a new interface to
the layer 3 protocols that look for a data link address. This method
can solve the "issue" caused by the existence of three protocols in the
same layer (ip, icmp, arp). In this way, icmp could be integrated into
ip, and the combined ip+icmp would offer another interface, on top of
the ip translator, for layer 4 protocols that want to use icmp
features.

We can also imagine that the arp translator does a little more. Layer 3
protocols could also register themselves with arp, which would allow
layer 2 translators to ask the arp translator which layer 3 translator
they should send the data they receive to. In that way, all layer 3
protocols would be able to receive and send data on each physical
interface.

In a final step, we could extend this by integrating arp into the layer
2 protocols. The problem is that all the layer 3 registrations would be
duplicated in each layer 2 translator, but it can be a good way for
layer 3 protocols to choose which interfaces they will work with (they
don't have to register with each layer 2 translator). This would avoid
having too many (layer 2) <-> (arp) <-> (layer 3) RPC calls.

> But in
> order to do tcp, tcp, ip and icmp are used together. I'm all for code
> separation, but I'm afraid that putting the implementations into
> separate *processes* will just add unnecessary complexity to the
> system, for no real gain.

This is a complex problem. We can offer nice features, but where do we
stop? Which features are more important than efficiency, and when must
we privilege efficiency over features?

> I don't buy this. By adding ownership information and access control
> into hurd-net, you make things a lot more complex than they need be.
> You'll likely end up with something a lot more complex than the
> current pfinet. A guiding principle is that anything that can't be
> easily replaced by users should be as small as possible.

I know that hurd-net has issues; this idea is a way to hide the fact
that hurd-net is not replaceable. Users won't replace it, but it offers
them ways of doing nice things, the things they would have done if they
had been allowed to replace hurd-net...

> Also note that the current pfinet *can* be replaced by users, the only
> real problem is that the ethernet device can't deal with that, so
> you'll need a separate ethernet card for each pfinet. So a hurd-net
> server that can't be replaced would actually be a step backwards.

I know this; see the beginning of this mail: there is (in my opinion) a
way to allow many layer 3 protocol translators to use the same layer 2
interface, and the sketch just below shows roughly the registration
interfaces I have in mind. [...] I agree with what you said; my
proposal does not prevent users from doing this (if you understood the
opposite, that's because I didn't explain it well).
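Here is a very rough sketch, in C, of the registration interfaces such
a central arp translator could offer. None of these types or functions
exist in the Hurd today; the names are invented only to show the shape
of the idea (in reality these would be RPC interfaces between
translators, not plain function calls):

  #include <stddef.h>
  #include <stdint.h>

  /* A layer 2 translator registers each interface it drives, giving
     its hardware address and a callback used to transmit frames.  */
  struct l2_interface
  {
    const char *name;                  /* e.g. "eth0" */
    uint16_t hw_type;                  /* ARP hardware type, 1 = ethernet */
    uint8_t hw_addr[8];
    size_t hw_addr_len;
    int (*transmit) (struct l2_interface *ifp, const uint8_t *dst_hw,
                     uint16_t ethertype, const void *frame, size_t len);
  };

  int arp_register_interface (struct l2_interface *ifp);

  /* A layer 3 translator registers the protocol it speaks and a
     callback for incoming packets, so that layer 2 translators can ask
     arp where to deliver what they receive.  */
  struct l3_protocol
  {
    uint16_t ethertype;                /* e.g. 0x0800 for ipv4 */
    int (*receive) (struct l2_interface *ifp, const void *packet,
                    size_t len);
  };

  int arp_register_protocol (struct l3_protocol *proto);

  /* A layer 3 translator asks arp for the hardware address that
     corresponds to a protocol address on a given interface.  */
  int arp_resolve (struct l2_interface *ifp, const void *proto_addr,
                   size_t proto_addr_len, uint8_t *hw_addr_out);

  /* A layer 2 translator hands every incoming frame to arp, which
     looks up the registered layer 3 translator and calls its receive
     callback.  */
  int arp_dispatch (struct l2_interface *ifp, uint16_t ethertype,
                    const void *packet, size_t len);

With something like this, pfinet (or any other layer 3 translator)
could share a single ethernet card with other protocol stacks, which is
exactly the problem you mention below.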
> The more I think about it, the cooler the network-in-user-process
> model seems. A process that creates a socket would talk to one or more
> pfinets. Each pfinet offers a directory of interfaces. And if I bind
> to the wildcard interface, my process will simply open each interface,
> and ask for dir-notification on the directories so that it can pick up
> any new interfaces that show up. The hard problem in making this work
> is to define the interface-interface in such a way that different
> users can't mess up each other's connections, allocate the same port
> numbers, etc. That's probably non-trivial.

I thought about these problems, and that is why I proposed having a
"central" translator to solve them (a rough sketch of the port
arbitration part is at the end of this mail)... I will think a lot
about it this weekend; I will read code and books, and maybe I will be
able to propose an updated version of my point of view on the network
re-implementation (maybe hurd-net will change a lot, because it was an
idea to solve problems, but it also creates many problems; I have new
ideas to solve some of these problems in another way than using
hurd-net).

olivier
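PS: to make the "central" translator a little more concrete, here is a
very rough sketch of its port arbitration part, the piece that several
per-user pfinets would have to agree on so that two users cannot
allocate the same port. Everything here is invented for the example; in
the real thing it would be an RPC interface, and the arbiter would also
record which user owns each reservation:

  #include <errno.h>
  #include <stdint.h>

  /* One bitmap of the 65536 ports per transport protocol.  */
  enum { PROTO_TCP = 0, PROTO_UDP = 1, NPROTOS = 2 };

  static uint8_t port_map[NPROTOS][65536 / 8];

  /* Reserve PORT for PROTO; return 0 on success, or EADDRINUSE if
     another pfinet already holds it.  */
  int
  port_reserve (int proto, uint16_t port)
  {
    uint8_t *byte = &port_map[proto][port / 8];
    uint8_t bit = 1 << (port % 8);

    if (*byte & bit)
      return EADDRINUSE;
    *byte |= bit;
    return 0;
  }

  /* Release a port previously reserved with port_reserve.  */
  void
  port_release (int proto, uint16_t port)
  {
    port_map[proto][port / 8] &= ~(uint8_t) (1 << (port % 8));
  }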