On 09/15/2011 02:38 PM, Bruce Dubbs wrote:
> References are threads starting at:
> http://www.linuxfromscratch.org/pipermail/lfs-dev/2011-August/064960.html
> http://linuxfromscratch.org/pipermail/lfs-dev/2011-September/065130.html
>
> We are in a Catch-22 situation with udev_retry.  Here's a rundown:
>
> We need to start udev (S10udev) before mounting filesystems (S40mountfs)
> so that the device entries are available in order to mount partitions.
>
> udev will create some devices and may run some programs (e.g. setclock)
> before all file systems are mounted, and those programs need directories
> that are potentially not yet mounted (/usr, /var).
>
> The same issues come up for BLFS in alsa.
>
> Currently we are addressing these types of problems with the command:
>
>     /sbin/udevadm trigger --type=failed --action=add
>
> in udev_retry.  The problem is that '--type=failed' has been deprecated
> upstream and we need to plan for that.  We also get a nasty warning
> message on every boot about the deprecation.
>
> In the infrequent case of a changed network card, we also need to be
> able to copy udev-generated files from the tmpfs /run directory to /etc
> after / is remounted read-write, but that can be moved from the
> udev_retry script to the mountfs script.
>
> There are a few options for what to do right now:
>
> 1.  Leave in the warning message and optionally write something about it
> in the book.
>
> 2.  Add 2>/dev/null to the udevadm command above.
>
> 3.  Modify the source to remove the warning (delete 1 line).
Go with 3 for the time being, or see below....

> <snip>
>
> 4.  Reinsert the deleted retry code into udev with a patch.
>
>

Or recreate the functionality outside of udev. What is the consequence 
of manually triggering an add event that has already happened? From 
here, it looks like the device nodes are simply recreated (and rules 
processed appropriately). Look:

root@name64 [ /etc/udev/rules.d ]# echo "echo 'YES'> /etc/test.txt" >> /etc/init.d/setclock
root@name64 [ /etc/udev/rules.d ]# rm /etc/test.txt
rm: cannot remove `/etc/test.txt': No such file or directory
root@name64 [ /etc/udev/rules.d ]# ls -l /dev/rtc*
lrwxrwxrwx 1 root root      4 Sep 15 19:22 /dev/rtc -> rtc0
crw-r--r-- 1 root root 254, 0 Sep 15 19:22 /dev/rtc0
root@name64 [ /etc/udev/rules.d ]# udevadm trigger --subsystem-match=rtc --action=add
root@name64 [ /etc/udev/rules.d ]# cat /etc/test.txt
YES
root@name64 [ /etc/udev/rules.d ]# ls -l /dev/rtc*
lrwxrwxrwx 1 root root      4 Sep 15 19:24 /dev/rtc -> rtc0
crw-r--r-- 1 root root 254, 0 Sep 15 19:24 /dev/rtc0
root@name64 [ /etc/udev/rules.d ]#

As you might have guessed, I had done that a few times before, hence the 
only two-minute difference in the timestamps. Can we not simply 
re-trigger all known affected subsystems with a subsystem match? I don't 
really see the possibility of failure here, but I'm certainly not the 
udev aficionado, so I could easily be missing something. Now, if that 
would work well enough, then just add a configuration file for the 
udev_retry bootscript so that it can be extended in BLFS for, say, ALSA, 
and then parse the list (a sample config follows the loop below). In the 
udev_retry script, a for loop like so:

(
     failed=0
     # Skip comment lines; each remaining word names a subsystem to re-trigger
     for SUBSYSTEM in `grep -v '^#' /etc/udev_retry.conf`
     do
         udevadm trigger --subsystem-match="$SUBSYSTEM" --action=add || failed=1
     done
     exit $failed
)
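
For illustration, the configuration file could just be a list of 
subsystem names, one per line, with comments allowed. The file name 
/etc/udev_retry.conf and the entries below are only assumptions for the 
sake of the example, not anything that exists today:

# /etc/udev_retry.conf: subsystems whose "add" events are re-triggered
# after the file systems are mounted (hypothetical contents)
rtc
# BLFS instructions could append their own entries, e.g. for ALSA:
sound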

Or use a function and return, or test on the variable, or whatever works 
well in the context of that particular bootscript... you could even 
print a message for each subsystem if you wanted more verbose output in 
the event of a failure, or a stepping like we do in mountvirtfs. A rough 
sketch of that variant follows.
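
For instance, a minimal sketch of the function-and-message variant, in 
plain sh and not tied to any particular bootscript helpers (the function 
name retry_failed and the messages are made up for the example):

retry_failed()
{
    failed=0
    for SUBSYSTEM in `grep -v '^#' /etc/udev_retry.conf`
    do
        echo -n "Retrying failed events for ${SUBSYSTEM}... "
        if udevadm trigger --subsystem-match="$SUBSYSTEM" --action=add
        then
            echo "OK"
        else
            echo "FAILED"
            failed=1
        fi
    done
    return $failed
}

retry_failed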

> I'd like to see some more discussion about this.
>
>     -- Bruce
What do y'all think?

-- 
DJ Lucas

