On 10/11/2014 12:49 PM, Andrei POPESCU wrote:
> On Sb, 11 oct 14, 12:19:29, Marty wrote:
> > > > > Could it be that a modular design for such complex tasks becomes too
> > > > > difficult to *do it right*?
> > > >
> > > > I don't know, but I think given its history, the burden of proof is on
> > > > monolithic, not modular design. A better question may be whether a
> > > > distributed volunteer project can do real system architecture? (Where is
> > > > CERN when you need them?)
> > >
> > > Whose history, Linux' (the kernel)? :p
> >
> > I was thinking of Windows, but opened Pandora's box instead. :/
>
> Couldn't it be that the fact that so many are embracing the "monolithic"
> design of systemd is a sign that the modular design was... suboptimal
> and nobody came up with a better one?


Umm..... no. In fact the leading edge is going in the other direction. Examples:

1. SmartOS (smartos.com) - the latest and greatest out of OpenSolaris land: a lean hypervisor, just enough OS to run Docker containers.

2. Unikernels like MirageOS (http://www.openmirage.org/) - a lean hypervisor layer manages the machine's resources, and each application context is essentially a container with OS-like functions compiled in as libraries. OS functions become modular libraries; you link in only the ones you need. (A minimal sketch of what this looks like follows below.)

3. Virtual machine environments that run directly on a thin hypervisor - Erlang on Xen comes to mind (http://erlangonxen.org/).

4. Attempts to run virtual machines on bare iron - http://kerlnel.org/ (Erlang on bare iron), plus multiple projects that run Java virtual machines directly on the hardware.

Arguably, the hypervisor layer is monolithic, but we're talking about a very targeted set of functions - a subset of what a general-purpose kernel provides.
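
To make #2 concrete, here is a minimal sketch of what a MirageOS unikernel looks like, based on the 2.x-era examples on openmirage.org. Treat the specific names (Unikernel.Main, V1_LWT.CONSOLE, default_console) as illustrative of that era's API rather than gospel. The "OS" pieces - console, network, storage - arrive as OCaml libraries chosen at configure time; there is no general-purpose kernel underneath the application.

  (* unikernel.ml - the application, written as a functor over a console device *)
  module Main (C : V1_LWT.CONSOLE) = struct
    (* start is the unikernel's entry point; C is whichever console
       library the configure step selected (Xen, Unix, ...) *)
    let start c =
      C.log_s c "hello from a unikernel"
  end

  (* config.ml - declares which device libraries get compiled in *)
  open Mirage

  (* this job needs exactly one device: a console *)
  let main = foreign "Unikernel.Main" (console @-> job)

  let () =
    register "hello" [ main $ default_console ]

Something like "mirage configure --xen" followed by "make" then links the application and the selected device libraries into a single standalone Xen guest image - OS functions as modular libraries, nothing more.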

Miles Fidelman




--
In theory, there is no difference between theory and practice.
In practice, there is.   .... Yogi Berra


