On 03/27/2013 04:00 PM, Grant Edwards wrote:
> On 2013-03-27, Kevin Chadwick <ma1l1i...@yahoo.co.uk> wrote:
> 
>> The real drive behind systemd is enterprise cloud type computing for
>> Red Hat. The rest is snake oil and much of the features already exist
>> without systemd. With more snake oil of promises of faster boot up on a
>> portion of the code which is already fast and gains you maybe two
>> seconds.
> 
> I'm not trying to fan the flames: I'm genuinely confused...
> 
> I just don't get the whole "parallel startup for faster boot thing".
> Most of my machines just don't boot up often enough for a few seconds
> or even tens of seconds to matter at all.

With cloud-based computing, you don't have a bunch of servers running,
waiting to receive requests.

Instead, you have a bunch of idle hardware, waiting to have pre-built
system images spun up on it on-demand.

The faster those pre-built images can spin up, the faster they can serve
requests. The faster they can serve requests, the fewer mostly-idle
images need to be kept running for immediate needs. Traffic on a web
service usually ramps up gradually. In the middle of the night, it's
low, but it increases during certain hours and decreases during others.
(Even with things like social media, there's a gradual buildup of
resource demands, as it takes URLs a while to catch fire and spread.)
Ultimately, if you can have just enough images running to manage
immediate demand plus a small burst margin, you can save on costs. If
demand eats into your burst, you spin up more instances until you're
below your burst margin again. If demand falls, you kill off the extra
instances.

The quicker the spin-up process, the more efficient the on-demand system
becomes, and the better the resource utilization (and value to the
person paying for the cloud services).
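The scale-up/scale-down policy described above amounts to a small control loop. A minimal sketch, with made-up names and numbers (`desired_instances`, a per-instance capacity, a 20% burst margin) rather than any real cloud API:

```python
import math

def desired_instances(demand, capacity_per_instance, burst_margin):
    """Instances needed to cover `demand` plus a burst margin.

    `demand` and `capacity_per_instance` are in requests per second
    (hypothetical units); `burst_margin` is a fraction, e.g. 0.2
    means keep 20% headroom above current demand.
    """
    needed = demand * (1 + burst_margin)
    return max(1, math.ceil(needed / capacity_per_instance))

def reconcile(running, demand, capacity_per_instance=100, burst_margin=0.2):
    """How many instances to spin up (positive) or kill off (negative)."""
    return desired_instances(demand, capacity_per_instance, burst_margin) - running
```

At 500 req/s with 100 req/s per instance and a 20% margin, this asks for 6 instances. The faster each instance boots, the smaller `burst_margin` can safely be, which is exactly where the spin-up time pays off.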

(Though, really, I'd think that the best way to handle this kind of load
would be a hibernate system with a sparse image for RAM, plus driver
tweaks so the kernel tolerates hardware being swapped out from under it
while asleep. Or handle things like MAC address rewriting in the VM
hypervisor.)

> 
> It seems to me that starting things in parallel would be inherently
> much more difficult, bug-prone, and hard to troubleshoot.

Indeed.

> 
> Even on my laptop, which does get booted more than once every month or
> two, openrc is plenty fast enough.

The case for systemd is twofold:

1) Boot-to-desktop session management by one tool. (The same thing that
launches your cron daemon is what launches your favorite apps when you
log in.)
2) Reduced CPU and RAM consumption when you're booting tens of thousands
of instances simultaneously across your entire infrastructure, or when
your server instance might be spun up and down six times over the course
of a single day.
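To illustrate point 1: a systemd *user* unit launches a login-session app with the same unit syntax used for system daemons. A minimal sketch (the app name and path are hypothetical):

```ini
# ~/.config/systemd/user/myapp.service -- hypothetical example
[Unit]
Description=Launch my favorite app at login

[Service]
ExecStart=/usr/bin/myapp

[Install]
# default.target is reached when the user session starts
WantedBy=default.target
```

Enabled with `systemctl --user enable myapp.service`, and then inspected with the same `systemctl` commands you'd use on a system service.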

> 
> Are there people who reboot their machines every few minutes and
> therefore need to shave a few seconds off their boot time?

On-demand server contexts, yes.

> 
> I can see how boot time matters for small embedded systems (routers,
> firewalls, etc.) that need to be up and running quickly after a power
> outage, but they're probably even less likely to be running systemd
> than desktops or servers.

Servers in cloud environments have one normal state: "Off". But when
they need to be "On", they need to get there hella quickly, or the
client is going to lose out on ad revenue when he starts getting a few
tens of thousands of visits per minute.

