It amazes me how much energy people spend on this to save three seconds a day, and of course only if the computer is rebooted daily; in the case of a server the "saving" is even more amazing. My four-year-old daughter no longer believes in mermen, but many technically educated IT specialists still believe in lossless parallelization. There is always some resource that must be accessed sequentially, whether it is hardware such as a CPU core or a software resource guarded by a semaphore, and switching processes on that resource always costs extra time. School teaches that when one workman digs a well in 10 days, two workers dig it in 5 days and 240 workers in one hour; that is the mistake the systemd authors make. Watching the load of a virtual machine that boots with systemd, it is clear to me that the total CPU consumption is significantly greater than with an upstart one. So much power is lost on synchronization, and the result is still uncertain. I can only say: if all it saves me is 3 free seconds a day, I will watch with pleasure as my services start sequentially.
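The well-digging fallacy is Amdahl's law in disguise: the sequentially-accessed resource puts a hard ceiling on the speedup, no matter how many workers are added. A minimal sketch of the arithmetic (the 5% serial fraction below is my illustrative assumption, not a measured property of any boot system):

```python
def amdahl_speedup(n_workers, parallel_fraction):
    """Amdahl's law: speedup = 1 / (serial + parallel/n).
    The serial fraction caps the speedup regardless of n_workers."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_workers)

# The schoolbook claim: 240 diggers finish the 10-day well in one hour.
# That assumes parallel_fraction == 1.0, i.e. no sequential resource at all.
print(amdahl_speedup(240, 1.0))   # 240.0 -- the lossless fantasy

# With even 5% of the work serialized (the shaft admits one digger at a
# time, a CPU core runs one process at a time), the ceiling collapses:
print(amdahl_speedup(240, 0.95))  # ~18.5x, nowhere near 240x
print(amdahl_speedup(2, 0.95))    # ~1.9x -- two workers are almost lossless
```

And this model does not even count the synchronization overhead itself, which is pure loss on top of the serial fraction.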


Jan F. Chadima
jchad...@redhat.com



-- 
devel mailing list
devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/devel
