Re: [DNG] Technical overview of init systems

2017-08-08 Thread Jaromil

dear Joel,


On Mon, 07 Aug 2017, Joel Roth wrote:

> I just came across this seven-part series of articles on
> supervisors and init systems:
> 
> https://blog.afoolishmanifesto.com/tags/supervisors/

thanks for the link.

the author still misses important points: there is no analysis of
OpenRC, which matters especially where compatibility with legacy UNIX
systems is a concern, and there is no mention of LXC, LXC 2 and LXD.
However, I personally share his implicit praise of s6 and runit.

what I bring home after reading this is the idea of a supervisor that
manages cgroups and LXC containers in a simple way and, to inherit
some standardised work being done in systemd, supports its service
units.

if I were up for writing something like this, I'd use a LISP dialect
(Guile?) and rely heavily on LXD / LXC 2. That would be my dream
system; I wonder whether GNU Shepherd covers this case? I still have
to study it, and BTW, omitting Shepherd (formerly known as DMD) also
leaves this article rather incomplete. but a good read!

looking forward to more opinions and pointers

ciao

___
Dng mailing list
Dng@lists.dyne.org
https://mailinglists.dyne.org/cgi-bin/mailman/listinfo/dng


Re: [DNG] Technical overview of init systems

2017-08-08 Thread Steve Litt
On Mon, 07 Aug 2017 17:25:16 -0500
goli...@dyne.org wrote:

> On 2017-08-07 16:41, Joel Roth wrote:
> > Hi,
> > 
> > I just came across this seven-part series of articles on
> > supervisors and init systems:
> > 
> > https://blog.afoolishmanifesto.com/tags/supervisors/  
> 
> [snip]

> 
> Will be interested to see what Steve Litt has to say about this.
> 
> golinux

On a factual and technical basis, I haven't seen anything wrong with
it, though I know very little about most of those systems.

What the author missed was the politics and practicality surrounding
the systems. Upstart will almost certainly never be used again. Systemd
is a gargantuan, interchangeability-killing mess, which the author said,
but in the lightest possible way.

I'm skeptical of nosh because the nosh guy continually sings the
praises of cgroups and socket activation. This isn't a technical
criticism, but I don't think the init world is a meritocracy.

SteveT

Steve Litt 
July 2017 featured book: Quit Joblessness: Start Your Own Business
http://www.troubleshooters.com/startbiz


Re: [DNG] Technical overview of init systems

2017-08-08 Thread Steve Litt
On Tue, 8 Aug 2017 11:11:20 +0200
Jaromil  wrote:


> what I bring home after reading this is the idea of a supervisor that
> manages cgroups and LXC containers in a simple way and, to inherit
> some standardised work being done in systemd, supports its service
> units.

Be careful recommending cgroups.

I've never used them, and know little about them, but I know they were
one of the main excuses for systemd.

Look at Wikipedia's cgroups page. In the beginning, Google needed a way
to throttle and account for the resources of whole groups of processes,
so Paul Menage and Rohit Seth wrote cgroups and got them into the kernel
in January 2008, at which time they were a feature invisible to all but
those operating at Google scale.

In 2013[2] a Red Hat guy[1] named Tejun Heo began rewriting and
redesigning cgroups.[3] The cgroups Wikipedia page sports diagrams
just like the systemd diagrams, and notes that FreeDesktop.Org is
involved in this kernel feature. Then there's this:

===
Under former Red Hat Linux kernel developer and a principal software
engineer Tejun Heo’s stewardship, cgroups underwent a massive redesign, 
replacing multiple, per-controller cgroup hierarchies with a “single
kernel cgroup hierarchy…  [that] allow[s] controllers to be 
individually enabled for each cgroup” and is the “private property of
systemd.” These changes, especially when combined with functionality 
in systemd, increased the consistency and manageability of cgroups.
===
[4]

"Especially when combined with the functionality in systemd."

Did Heo et al. make cgroups harder to use without systemd? I don't know.
But I look at the timing, the complete rewrite throwing away the old
code, the involvement of Redhat and FreeDesktop.Org, and I can't help
thinking I've seen this movie before.

At this point we might be doing ourselves a disservice by thinking an
interaction with cgroups is an advantage of a given init.
 
SteveT

[1]
https://www.linux.com/news/linux-kernel-developer-work-spaces-video-tejun-heo-red-hat

[2] https://en.wikipedia.org/wiki/Cgroups#Versions

[3] https://en.wikipedia.org/wiki/Cgroups#Versions

[4] http://rhelblog.redhat.com/2015/08/28/the-history-of-containers/


Steve Litt 
July 2017 featured book: Quit Joblessness: Start Your Own Business
http://www.troubleshooters.com/startbiz


[DNG] Technical overview of init systems

2017-08-08 Thread Edward Bartolo
I had a look at the text and was not impressed at all. My criticism
is that it is written like private correspondence rather than
objective technical text. Someone writing a technical text must be
objective, scientific, accurate and concise.


Re: [DNG] Technical overview of init systems

2017-08-08 Thread Miles Fidelman

Me neither.

I found the "7 part series" amazingly content free, and certainly not 
very technical.


First off, it wasn't about init systems, it was about supervisors (in 
fairness, it didn't actually purport to be about init systems).


Second, nowhere did it actually talk in detail about what a supervisor
does and the underlying theory of operation (what you'd expect from a
"technical overview" to begin with). Instead, it was a rather rambling,
unorganized description of a bunch of different supervisors.


Now what might have been useful would be:

1. An actual technical overview, including a definition of terms.

2. A comparison chart listing all of the various supervisors available.

As it is, it's just a waste of time.

Miles Fidelman



On 8/8/17 9:02 AM, Edward Bartolo wrote:

I had a look at the text and was not impressed at all. My criticism
is: it is written like some private correspondence instead of
technical objective text. Someone writing technical text must be
objective, scientific, accurate and concise.


--
In theory, there is no difference between theory and practice.
In practice, there is.   Yogi Berra



Re: [DNG] Technical overview of init systems

2017-08-08 Thread Adam Borowski
On Tue, Aug 08, 2017 at 11:53:56AM -0400, Steve Litt wrote:
> Be careful recommending cgroups.
> 
> I've never used them, and know little about them, but I know they were
> one of the main excuses for systemd.

Uhm, what?  Systemd uses ELF objects too, should we go with a.out for this
reason?

cgroups are a way to say "this group of processes may not use more than
2 GB of memory".  How else would you ensure a misbehaving set of daemons
/ containers / etc. does not bring down the rest of the system with it?

Then, you can set a lower limit to a subgroup of _those_ processes, in a
hierarchical way.

There are cgroup limits for a lot of resources.  Try for example the tc
cgroup, which allows you to set HTB classes, so an lxc virtual server
belonging to one client can't bandwidth-drown the rest, yet receives all
unused bandwidth when there's no contention.

Same for CPU use, I/O, etc, etc.
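For concreteness, a minimal sketch of the idea from the shell, assuming
a cgroup-v2 (unified) hierarchy mounted at /sys/fs/cgroup and root
privileges; the group name "daemons" and $DAEMON_PID are placeholders:

```shell
# cgroup v2: enable the memory controller for child groups.
echo +memory > /sys/fs/cgroup/cgroup.subtree_control

# Create a group and cap it at 2 GiB of memory.
mkdir /sys/fs/cgroup/daemons
echo 2G > /sys/fs/cgroup/daemons/memory.max

# Move a process in; it and all its children are bound by the limit.
echo "$DAEMON_PID" > /sys/fs/cgroup/daemons/cgroup.procs
```

A child group created under daemons/ could then carry a tighter limit of
its own, which is the hierarchical sub-limit described above.  (On the
v1 hierarchies, still the common default in 2017, the paths differ.)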


Systemd claims to be the only user of this facility, but if you don't
suffer from systemd infestation, nothing keeps you from using cgroups
yourself.  In fact, it works far better without systemd: unless it was
fixed while I wasn't looking, because of the way systemd sets it up, you
can't use cgroups in a container unless that container's systemd talks
to the host's systemd -- which is fragile, requires that both are
infested, that both use a compatible version, and using a different
distribution or architecture makes it a no-go.  I for one run a bunch of
amd64, i386 and x32 containers that range from wheezy to unstable, and
all is fine.  Ok, I guess I'd have problems running systemd inside, but
I can accept _this_ restriction.


Meow!
-- 
⢀⣴⠾⠻⢶⣦⠀ 
⣾⠁⢰⠒⠀⣿⡁ James Damore is a hero.  Even mild criticism of bigots these days
⢿⡄⠘⠷⠚⠋⠀ comes at great personal risk.
⠈⠳⣄ 


Re: [DNG] Technical overview of init systems

2017-08-08 Thread Martin Steigerwald
Adam Borowski - 08.08.17, 18:57:
> On Tue, Aug 08, 2017 at 11:53:56AM -0400, Steve Litt wrote:
> > Be careful recommending cgroups.
> > 
> > I've never used them, and know little about them, but I know they were
> > one of the main excuses for systemd.
> 
> Uhm, what?  Systemd uses ELF objects too, should we go with a.out for this
> reason?
> 
> cgroups are a way to say "this group of processes may not use more than 2GB
> memory".  How else would you ensure a misbehaving set of daemons / container
> /etc does not bring down the rest of the system with it?

I agree that cgroups can be a useful feature. Yet… they are also a bit
clumsy to use, and not free of race conditions. That said, kernel
developers are working to fix part of the clumsiness and all of the race
conditions by unifying all cgroup controllers (memory, cpu and so on) in
one directory tree.
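The difference is visible in how the hierarchies are mounted; a rough
sketch (the mount points are conventional, not mandatory, and the v1
layout was still the common default in 2017):

```shell
# cgroup v1: one hierarchy per controller, each mounted separately.
mount -t cgroup -o memory cgroup /sys/fs/cgroup/memory
mount -t cgroup -o cpu cgroup /sys/fs/cgroup/cpu

# cgroup v2: a single unified hierarchy carrying all controllers.
mount -t cgroup2 cgroup2 /sys/fs/cgroup/unified
```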

> Systemd usurps to be the only user of this facility, but if you don't suffer
> from systemd infestation, nothing keeps you from doing so yourself.  In
> fact, it works far better without systemd: unless it was fixed while I
> wasn't looking, because of the way systemd sets it up, you can't use
> cgroups in a container unless that container's systemd talks to the host's
> systemd -- which is fragile, requires that both are infested, that both use
> a compatible version, and using a different distribution or architecture
> makes it no go.  I for one run a bunch of amd64, i386 and x32 containers
> that range from wheezy to unstable, and all is fine.  Ok, I guess I'd have
> problems running systemd inside, but I can accept _this_ retriction.

Also, I still don't completely get *how* it actually sets this up.

But there are further issues. Systemd is completely tied to cgroups as
the only means to control processes. That's part of the reason why
Systemd is not portable.

I know Systemd developers have argued countless times that it would not
be possible to separate out the cgroup handling into a process other
than PID 1, but even if that's the case… why not allow different
in-process modules to handle service supervision? And why not limit the
use case of PID 1 to exactly that: being an init system that starts
services and supervises them.

But on the other hand… I still don't get why it wouldn't work to have
each service started by a new child process of PID 1, which sets up the
cgroup limits and supervises the service process… so that PID 1 just has
to supervise that child process and react when it sends a signal or
exits. This sounds a bit like the approach runit is using, but AFAIK
runit's runsv doesn't support control groups.
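That per-service model could look roughly like this in shell; a
hypothetical illustration only (the services here are stand-in sleep
commands), not how systemd or runit actually behave, and a real version
would set up the cgroup limits inside supervise before launching the
service:

```shell
#!/bin/sh
# supervise: run a service and restart it whenever it exits non-zero.
# In this model, PID 1 forks one such supervisor per service and then
# only has to watch its supervisor children.
supervise() {
    while ! "$@"; do
        sleep 1    # back off so a crashing service can't spin-loop us
    done
}

# "PID 1" side: one background supervisor child per service
# (sleep stands in for a real daemon here).
supervise sleep 1 &
supervise sleep 2 &
wait    # react when a supervisor child exits or is signalled
```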

What I see is that Systemd's PID 1 gets larger and larger and larger
(Debian Sid):

% ls -l /lib/systemd/systemd
-rwxr-xr-x 1 root root 1514528 Jul 20 15:13 /lib/systemd/systemd

It was almost 200 KiB smaller in Debian 8:

% ls -l /lib/systemd/systemd
-rwxr-xr-x 1 root root 1313160 Apr  8 23:08 /lib/systemd/systemd

Sysvinit's init process was just about 40 KiB.

Thanks,
-- 
Martin


[DNG] Just out of curiosity, I wondered,

2017-08-08 Thread zap
how do you enable internet access in a virtual machine with qemu?

I wanted to see how well certain distros such as gnuinos and vuu-do
work under qemu, with upgrading actually working...



Re: [DNG] Just out of curiosity, I wondered,

2017-08-08 Thread Stefan Krusche
Am Dienstag 08 August 2017 schrieb zap:
> how do you enable internet in a virtual machine with qemu?
>
> I wanted to try to see how effectively certain distros such as gnuinos
> and vuu-do work through qemu with upgrading actually working...
>

I start VMs with qemu using the option "-net nic", which provides a
network device to the VM, and have internet access without configuring
anything further. See also the qemu man page / documentation.
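For concreteness, a full invocation might look like this (the image
name is a placeholder); "-net nic -net user" selects user-mode (SLIRP)
networking, where the guest gets an address from QEMU's built-in DHCP
and is NAT'd through the host, with no root or host-side setup:

```shell
qemu-system-x86_64 -m 1024 -hda devuan.qcow2 -net nic -net user
```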

Regards, Stefan



Re: [DNG] Just out of curiosity, I wondered,

2017-08-08 Thread m712
It should Just Work(TM), as it has on every OS I have tested myself. You
might want to take a look at the QEMU docs and try playing with the
network card emulation options.

On August 9, 2017 12:28:08 AM GMT+03:00, zap  wrote:
>how do you enable internet in a virtual machine with qemu?
>
>I wanted to try to see how effectively certain distros such as gnuinos
>and vuu-do work through qemu with upgrading actually working...
>
>___
>Dng mailing list
>Dng@lists.dyne.org
>https://mailinglists.dyne.org/cgi-bin/mailman/listinfo/dng

--- :^) --- :^) --- :^) --- :^) --- :^) --- :^) --- :^) --- :^) ---
https://blaze.nextchan.org - https://gitgud.io/m712/blazechan
https://nextchan.org - https://gitgud.io/nextchan/infinity-next
I am awake between 7AM-12AM UTC, hit me up if something's wrong



Re: [DNG] Just out of curiosity, I wondered,

2017-08-08 Thread Adam Borowski
On Wed, Aug 09, 2017 at 01:13:55AM +0200, Stefan Krusche wrote:
> Am Dienstag 08 August 2017 schrieb zap:
> > how do you enable internet in a virtual machine with qemu?
> >
> > I wanted to try to see how effectively certain distros such as gnuinos
> > and vuu-do work through qemu with upgrading actually working...
> 
> I start VMs with qemu with the option "-net nic ", which provides a network 
> device of/to the VM and have internet access without further configuring 
> anything. See also qemu man page / documentation.

Note that this, user-mode networking, has its downsides, like ping not
working and trouble with listening (it requires explicit configuration,
allows no privileged ports, etc.).  The official documentation
recommends vlans, which take some effort.

My personal favourite is bridged mode, which has only a one-time setup
cost, and makes guest VMs operate exactly the same as if they were
physically separate machines plugged into your ethernet switch next to
the host.  As a bonus, that setup cost is shared with lxc, which is also
happy in such a bridged configuration.

Not sure if all of the setup steps below are still needed; they were
~5 years ago:

* make /usr/lib/qemu/qemu-bridge-helper setuid root
* put "allow br0" into /etc/qemu/bridge.conf
* move the network configuration from eth0 (or ens12345deadbeef678) to
  br0 in /etc/network/interfaces:

auto eth0
iface eth0 inet manual

auto br0
iface br0 inet6 static
    bridge_ports eth0
    address 2001:dead:beef::42
    netmask 64
    gateway 2001:dead:beef::1
iface br0 inet static
    address 10.0.0.42
    ... yadda yadda yadda

(or just dhcp, whatever you use -- just move everything you have on eth0 to
br0, make eth0 "manual")
* pass "-net bridge -net nic" to qemu


Meow!
-- 
⢀⣴⠾⠻⢶⣦⠀ 
⣾⠁⢰⠒⠀⣿⡁ James Damore is a hero.  Even mild criticism of bigots these days
⢿⡄⠘⠷⠚⠋⠀ comes at great personal risk.
⠈⠳⣄ 