On 05/08/2011 03:41, Bryan Kadzban wrote:
> Matthew Burgess wrote:
>> But that raises the question of what that bootscript was trying to do
>> in the first place? So, it turns out that the actions specified by
>> 'RUN+=' udev rules can fail for any of a variety of reasons, and this
>> script was simply there to retry such failed actions in the hope that
>> something had (almost magically) changed since the rules were first
>> triggered. I say "almost magically" because there's no smarts in the
>> script at all; it just assumes that because it's being run later in
>> the boot sequence, enough things will have been started up/mounted,
>> etc. to make the rules work again.
>
> Yes. It exists for two reasons:
>
> 1) To copy the temporary newly-created rules from wherever
> write_{cd,net}_rules put them, into /etc/udev/rules.d so they persist
> after the next reboot. The rule itself can't necessarily write them to
> /etc because the rootfs is readonly until mountfs.
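For anyone following along, that copy step amounts to something like the
sketch below. This is not the actual udev_retry script; the function name
is mine, and the temporary-rules location is an assumption (older udev
versions wrote to /dev/.udev/tmp-rules--*, so adjust for your version):

```shell
#!/bin/sh
# Sketch of the "copy generated rules" half of udev_retry.
# Source directory is an assumption; older udev wrote temporary rules
# as /dev/.udev/tmp-rules--<name>.

copy_generated_rules() {
    src_dir="$1"    # where write_{cd,net}_rules left the temporary rules
    dest_dir="$2"   # normally /etc/udev/rules.d, writable after mountfs
    for tmp_rule in "$src_dir"/tmp-rules--*; do
        [ -e "$tmp_rule" ] || continue          # glob matched nothing
        rule_name=${tmp_rule##*tmp-rules--}     # strip the prefix
        cat "$tmp_rule" >> "$dest_dir/$rule_name" && rm -f "$tmp_rule"
    done
}

# In the real bootscript this would be invoked roughly as:
# copy_generated_rules /dev/.udev /etc/udev/rules.d
```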
OK, that seems a valid thing for us to be doing.

> 2) To rerun rules like the ALSA one (which runs alsactl, which is
> installed in /usr/sbin and, as of the most recent alsa-utils release,
> requires a /var/lib/alsa/asound.state file), which will fail if either
> /usr or /var is on a separate partition.

Then the alsa bootscript just needs moving to come after the mountfs
script, surely? What use case is there for needing to restore volume,
etc. *before* you've even managed to mount /usr?

> It looks like magic, but it's not; this script is simply numbered after
> mountfs. The only thing it really supports retrying (via trigger) is
> events that failed due to a missing filesystem. At one point that was
> even in the comments in the script, though I haven't looked at it in a
> while, so I don't know if it's still there.

Well, as described above, that whole retry mechanism will be going away
in the not-too-distant future, so we'll need to think about how to avoid
such failures.

> Their solution is to require an initramfs (SIGH), which mounts all
> filesystems before transferring control to /sbin/init on the rootfs.

Yes, I've noticed their leanings toward such a configuration, and I also
think it's overkill, especially for LFS.

>> So, which of our rules may require such a retry attempt then?
>
> setclock (as you stated), alsactl from BLFS, and write_{cd,net}_rules
> are the ones I know of that require the udev_retry script.

I need to take a look at the write_{cd,net}_rules ones, but I could have
sworn that for the net ones, the book instructs folks to create 'static'
rules for their net devices before rebooting. Why would those need to be
retried?

>> We can fix that up in a couple of ways though. Firstly, just ignore
>> the FHS, and leave adjtime in its default location of /etc.
>> Secondly, as Kay Sievers recommends in the thread above, never trust
>> the hwclock at all; if you need an accurate system time, use NTP.
>
> Sigh. I hate shortsightedness.
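For reference, the 'static' net rules I mean look roughly like the
example below (the MAC address is a placeholder for the real hardware
address, and I've trimmed the match keys to the essentials):

```
# /etc/udev/rules.d/70-persistent-net.rules (example)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:11:22:33:44:55", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"
```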
> You need *something* to start with. NTP is not a magic fix-everything.
> If the clock is too far off at boot time, NTP will fail to start up
> until the clock gets less far off at boot time. NTP will also utterly
> fail to save the clock across a reboot.

Well, that last statement isn't quite true. NTP doesn't save the clock,
but the kernel most certainly can be told to enter '11-minute mode',
whereby it saves the system time to the hardware clock every 11 minutes.
Therefore, even after a machine crash, the BIOS clock is much closer to
the proper time when the machine comes back up. I understand this is the
preferred mode of operation in enterprise environments, and it would
certainly be my suggestion for folks needing accurate time.

As for the first couple of statements in that paragraph, the ntp script
really should be doing a one-shot 'ntpdate <time source>' *before*
starting the ntp daemon, and the ntp script should be run as soon as the
network is brought up so that the system time is set correctly a.s.a.p.
In my experience, that first ntpdate command completes as quickly as it
takes to reach the desired NTP server, so in a reasonably configured
environment it shouldn't affect boot times much at all.

> The ntpd package has a way to do a one-time sync of the system clock
> from the configured ntp servers, except that it takes *many* seconds to
> run; last time I did that in the ntpd bootscript, it doubled my boot
> time. What needs to happen is a simple, short "I know this isn't
> perfect but it's close enough that ntpd won't choke" synchronization of
> the system time to some other time source, and ntpd doesn't provide it.
> hwclock does, since the /dev/rtc clock was synced at shutdown.

Like I said above, ntpdate can do that simple, short initial sync. And,
again as I said above, /dev/rtc may not have been synced at shutdown if
the machine crashed, so it can't always be relied upon to provide an
accurate time.
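A sketch of what I have in mind for the ntp bootscript follows. The
server name is a placeholder, the function name is mine, and the
`command -v` guard just makes the fragment a harmless no-op on systems
without ntpdate installed:

```shell
#!/bin/sh
# Sketch: one-shot clock sync before starting the ntp daemon.
# NTP_SERVER is a placeholder; use your preferred server or pool.
NTP_SERVER="${NTP_SERVER:-pool.ntp.org}"

sync_clock_once() {
    if command -v ntpdate >/dev/null 2>&1; then
        # -b steps the clock immediately rather than slewing, so ntpd
        # starts from a time it considers sane.
        ntpdate -b "$NTP_SERVER" || \
            echo "warning: initial time sync failed" >&2
    fi
    return 0
}

# In the ntp bootscript, right after the network is up:
# sync_clock_once
# /usr/sbin/ntpd -g
```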
> This is true (the system time coming from the BIOS) with hwclock.
> That's what "hwclock --hctosys" reads from, after all. I do not believe
> it's true without it; last I knew, without hwclock, the system would
> start at time zero. (But it's been many years since I tried it.)

Hmm, I'll give it a shot this evening, although heaven knows what might
be going on in the VM world I'm running LFS in - my VM may end up having
some smarts that pick the time up from the host... we'll see.

Thanks,

Matt.

--
http://linuxfromscratch.org/mailman/listinfo/lfs-dev
FAQ: http://www.linuxfromscratch.org/faq/
Unsubscribe: See the above information page