On Fri, Aug 5, 2011 at 2:08 AM, Matthew Burgess <matt...@linuxfromscratch.org> wrote:
> On Fri, 5 Aug 2011 01:06:52 -0700, Nathan Coulson <conat...@gmail.com> wrote:
>
> > On Fri, Aug 5, 2011 at 12:12 AM, Matthew Burgess
> > <matt...@linuxfromscratch.org> wrote:
> >
> >> On 05/08/2011 03:41, Bryan Kadzban wrote:
> >> > Matthew Burgess wrote:
> >> >> But that raises the question of what that bootscript was trying to do
> >> >> in the first place? So, it turns out that the actions specified by
> >> >> 'RUN+=' udev rules can fail for any of a variety of reasons, and this
> >> >> script was simply there to retry such failed actions in the hope that
> >> >> something had (almost magically) changed since the rules were first
> >> >> triggered. I say "almost magically" because there's no smarts in the
> >> >> script at all; it just assumes that because it's being run later in
> >> >> the boot sequence, enough things will have been started up/mounted
> >> >> etc. to make the rules work again.
> >> >
> >> > Yes. It exists for two reasons:
> >> >
> >> > 1) To copy the temporary newly-created rules from wherever
> >> > write_{cd,net}_rules put them, into /etc/udev/rules.d so they persist
> >> > after the next reboot. The rule itself can't necessarily write them to
> >> > /etc because the rootfs is readonly until mountfs.
> >>
> >> OK, that seems a valid thing for us to be doing.
> >>
> >> > 2) To rerun rules like the ALSA one (which runs alsactl, which is
> >> > installed in /usr/sbin and, as of the most recent alsa-utils release,
> >> > requires a /var/lib/alsa/asound.state file), which will fail if either
> >> > /usr or /var is on a separate partition.
> >>
> >> Then the alsa bootscript just needs moving to come after the mountfs
> >> script, surely? What use case is there for needing to restore volume,
> >> etc. *before* you've even managed to mount /usr?
> >
> > Alsa volumes are restored when the device appears on the system via udev,
> > not the bootscript anymore (there is no S##alsa in rc*.d), and this does
> > make sense for hotplugging devices. On my system, nothing calls
> > /etc/rc.d/init.d/alsa start.
>
> Ah, right. Yes, it makes complete sense, but then without an initrd we've
> got seemingly conflicting requirements:
>
> 1) The detection of the alsa device will trigger the udev rule, which
>    requires /usr & /var to be mounted, but they may not be available
>    because mountfs hasn't been run yet.
> 2) If we revert back to a bootscript, we lose the ability to hotplug
>    devices.
>
> >> > Their solution is to require an initramfs (SIGH), which mounts all
> >> > filesystems before transferring control to /sbin/init on the rootfs.
> >>
> >> Yes, I've noticed their leanings toward such a configuration, and also
> >> think it's overkill, especially for LFS.
>
> But now I really don't think we have a choice if we want to support
> fully hotplugging devices with scripts being called from udev's RUN+=
> mechanism. Or, we simply state that if you want a correctly configured
> system, either ensure that you put everything on a single partition
> (which LFS would default to), or create an initrd (and point to a
> relevant hint on how to do so).

alsa could be fixed using a combination of udev and bootscripts (udev
for hotplugging, /etc/rc.d/init.d/alsa start for boot); a rough sketch
is below, after the trigger idea. hctosys, though...

As for a separate /usr: I believe we supported it in the bootscripts
because there were no headaches (pre-udev, though). I never did have a
use case for why I would want a separate /usr other than allowing
people to have the choice. /var, on the other hand, should be an
option. Is it only ntp that uses /var before running the udev
bootscript? I suppose there could be other requirements...

Whatever solution we choose, I would like a readonly / to be an option
w/o an initrd/initramfs.

One thought, though: all of our problems stem from udev running before
mountfs. I have not dug into udev's behavior too much, but I imagine it
is the trigger command that populates /dev/{sd*,sr*,hd*}. It looks like
we could do something like the following:

  - udev, but only trigger block devices
  - mountfs
  - udev again, for the remaining devices

That way, devices have a fully mounted system before their rules
attempt to run.
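Just a sketch of what I mean, assuming the udev bootscript gets split
around mountfs (the two-script arrangement is hypothetical;
--subsystem-match/--subsystem-nomatch are existing udevadm trigger
options):

  # First pass, in the current udev script slot: cold-plug only
  # block devices, so mountfs can find everything in /etc/fstab.
  /sbin/udevadm trigger --action=add --subsystem-match=block
  /sbin/udevadm settle

  # ... mountfs runs here; /usr and /var become available ...

  # Second pass, in a new script ordered after mountfs: cold-plug
  # everything else, so RUN+= helpers under /usr can actually run.
  /sbin/udevadm trigger --action=add --subsystem-nomatch=block
  /sbin/udevadm settle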
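And for the alsa combination mentioned above, something like this (the
rule file name is made up, and I haven't tested the %n expansion
against alsactl, so treat it as a sketch):

  # /etc/udev/rules.d/90-alsa-restore.rules: restore mixer levels
  # when a card is hotplugged; %n is the controlC<n> card number.
  ACTION=="add", SUBSYSTEM=="sound", KERNEL=="controlC*", \
      RUN+="/usr/sbin/alsactl restore %n"

plus a minimal /etc/rc.d/init.d/alsa ordered after mountfs, to catch
the cards udev saw while /usr and /var were still unmounted:

  #!/bin/sh
  case "$1" in
      start)
          # restore levels for every card found at boot
          /usr/sbin/alsactl restore
          ;;
      stop)
          # save levels at shutdown
          /usr/sbin/alsactl store
          ;;
  esac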
Or, a simpler solution; we could look into...

> >> > This is true (the system time coming from the BIOS) with hwclock.
> >> > That's what "hwclock --hctosys" reads from, after all. I do not
> >> > believe it's true without it; last I knew, without hwclock, the
> >> > system would start at time zero. (But it's been many years since I
> >> > tried it.)
> >>
> >> Hmm, I'll give it a shot this evening, although heaven knows what
> >> might be going on in the VM world I'm running LFS in - my VM may end
> >> up having some smarts that picks the time up from the host...we'll
> >> see.
> >
> > By default, it does not set the time.
> >
> > There is an optional kernel option (introduced in the last 10-20
> > kernel releases, I can't recall when), CONFIG_RTC_HCTOSYS, that will
> > set the system time to the hwclock time.
>
> Nice, and it looks like I must have turned that on since it became
> available. So, with that option set, one wouldn't need the setclock
> script at all then; the kernel will read the BIOS clock by itself at
> startup, and there's no point in saving the system time to the BIOS
> clock unless you have something like NTP running.

I wonder if it uses utc or not... Seems like, if it works in all cases,
it would solve some headaches.
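For reference, the .config fragment in question
(CONFIG_RTC_HCTOSYS_DEVICE names which RTC to read at boot; rtc0 is the
usual default). If I remember right, the kernel assumes the RTC holds
UTC when doing this, but that's worth verifying before we rely on it:

  CONFIG_RTC_CLASS=y
  CONFIG_RTC_HCTOSYS=y
  CONFIG_RTC_HCTOSYS_DEVICE="rtc0"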
> Regards,
>
> Matt.

--
Nathan Coulson (conathan)
------
Location: British Columbia, Canada
Timezone: PST (-8)
Webpage: http://www.nathancoulson.com