David,

> To drive this home even more, I do not believe that the people who
> advocate pushing NDISC and ARP policy into userspace would be very
> happy if something like the RAID transformations were moved into
> userspace and they were not able to access their disks if the RAID
> transformer process in userspace died.

   How would you restart the RAID controller daemon if it is stored on the RAID itself? Also, assuming the same code quality (and ignoring the OOM killer for a moment): if the RAID controller daemon dies, the same bug, had that code been in the kernel, could just as well have crashed the whole kernel.

> At that point, network access equals disk access.  It would be amusing
> to need to restart such an NDISC/ARP daemon if it were to live on a
> remote volume. :-)

   What you are saying is that the NDISC handling is already in the host's memory (kernel text), so the connection to the remote storage facility could be re-established. So, to be fair, let's say that the NDISC daemon would likewise somehow be available locally.

> I understand full well that on special purpose network devices this
> control vs. data plane separation into userspace might make a lot of
> sense.  But for a general purpose operating system, such as Linux, the
> greater concern is resiliency to failures and each piece of core
> functionality you move to userspace is a new potential point of
> failure.

   I think 100% of Linux's users want stability. But resiliency to failure is not something that depends on the code living inside the kernel. If the code in question is in the kernel and it crashes, how will you recover?
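
   By contrast, a userspace daemon that dies can simply be restarted, by init's respawn machinery or by a trivial supervisor. Just as an illustration (there is no real /sbin/ndiscd; the daemon and its path are hypothetical), something like this is all it takes to keep such a daemon alive:

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    int main(void)
    {
        for (;;) {
            int status;
            pid_t pid = fork();

            if (pid < 0) {
                perror("fork");
                return 1;
            }
            if (pid == 0) {
                /* child: exec the (hypothetical) NDISC daemon */
                execl("/sbin/ndiscd", "ndiscd", (char *)NULL);
                perror("execl");    /* only reached if exec fails */
                _exit(127);
            }
            /* parent: wait for the daemon to die, then respawn it */
            if (waitpid(pid, &status, 0) < 0) {
                perror("waitpid");
                return 1;
            }
            fprintf(stderr, "ndiscd died (status %d), restarting\n", status);
            sleep(1);               /* don't respawn in a tight loop */
        }
    }

   If an oops takes out code living in kernel space, there is no equivalent of waitpid() to fall back on.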

   Please note that I'm not turning this into a monolithic vs. micro-kernel discussion (I wouldn't want Linus to step in and kick me to hell), but if we have the possibility of keeping _complex_ interactions out of the kernel, we make the kernel itself more resilient to failure.

   Hugo
