In comparing the Herd3 casper.log to the Beta casper.log on my system, I
noted the following difference.  In reviewing the log excerpt and script
below, note that I am booting the LiveCD off of a USB device that contains
the LiveCD files in the manner described on one of the Ubuntu wiki pages;
hence the references to sdb1 below.  I follow exactly the same approach for
Herd3 and the Beta, but persistence works for Herd3 and not for the Beta
distribution.
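
For reference, the persistence setup I used follows the standard casper
recipe: a casper-rw file (or partition) on the USB stick plus the
"persistent" boot option.  A rough sketch of that recipe is below; the
mount point and file size are just examples, and the exact steps on the
wiki page may differ.

# Sketch of the usual casper persistence setup (not the exact wiki steps).
# /media/usbdisk is an example mount point for the USB stick.

# 1. Create an empty file to hold the persistent data (size is an example).
dd if=/dev/zero of=/media/usbdisk/casper-rw bs=1M count=512

# 2. Format it with ext3 so casper can use it as a writable overlay.
mkfs.ext3 -F /media/usbdisk/casper-rw

# 3. At boot, add "persistent" to the kernel command line, e.g. in
#    syslinux.cfg:
#    append boot=casper persistent initrd=casper/initrd.gz ...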

The casper.log from the Beta distribution reads as follows.

Begin: Running /scripts/casper-premount ...
Done.
Done.
mount: Mounting /dev/sda1 on /cdrom failed: No such device
mount: Mounting /dev/sda1 on /cdrom failed: No such device
mount: Mounting /dev/sda1 on /cdrom failed: No such device
mount: Mounting /dev/sda1 on /cdrom failed: No such device
mount: Mounting /dev/sda1 on /cdrom failed: No such device
Done.

All of the "No such device" failures are missing in the Herd3
casper.log.  Again, I don't know if this is relevant and I am a newbie,
but it appears that this failure occurs while the scripts in the
casper-premount directory are running.  Looking in that directory in the
Beta distribution, there is only the 10driver_updates script.  I have
repeated part of that script below.

#!/bin/sh

PREREQ=""
. /scripts/casper-functions
. /scripts/casper-helpers

[I HAVE OMITTED A PORTION OF THE SCRIPT HERE]

mountpoint=/cdrom

[I HAVE OMITTED ANOTHER PORTION OF THE SCRIPT HERE.]

check_dev_updates ()
{
    sysdev="${1}"
    devname="${2}"
    if [ -z "${devname}" ]; then
        devname=$(sys2dev "${sysdev}")
    fi

    fstype=$(get_fstype "${devname}")
    if is_supported_fs ${fstype}; then
        mount -t ${fstype} -o ro "${devname}" $mountpoint || continue
        if is_updates_path $mountpoint; then
            return 0
        else
            umount $mountpoint
        fi
    fi

    return 1
}

[I HAVE OMITTED THE REST OF THE SCRIPT.]
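
I have not reproduced the omitted portions, but my rough understanding is
that a helper like check_dev_updates is called from a loop over the block
devices the kernel has detected.  The following is a purely hypothetical
sketch of that pattern (it is NOT the omitted code, and the loop structure
is my guess); if it is roughly right, it would explain why the mount shown
above is attempted against sda1 even though my LiveCD files are on sdb1.

# Purely hypothetical sketch of how check_dev_updates might be driven;
# this is NOT the omitted portion of 10driver_updates.
for sysblock in /sys/block/sd*; do
    for sysdev in "${sysblock}"/"${sysblock##*/}"[0-9]*; do
        [ -d "${sysdev}" ] || continue   # skip if the glob did not match
        # Each partition is tried in turn; every attempt runs the mount
        # shown above, so a partition that cannot be mounted logs a failure.
        if check_dev_updates "${sysdev}"; then
            break 2
        fi
    done
done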

I did not see this script in Herd3 at all.  As best as I can tell, the
"mount" command shown above is where the mount failure is occurring,
although I am still learning Bash and Linux, so I don't know enough to
evaluate this further at this point.  While this script appears to be
looking for driver updates, it mounts a device onto the /cdrom directory.
I wasn't sure whether the failure to mount /cdrom might affect persistence
in another script.  Even if it doesn't, it does appear that one of the
variables used in this mount loop is wrong, and that makes me wonder
whether the error has a bearing on the persistence failure.  In
particular, on my system the LiveCD files are located on sdb1, not on
sda1, so I assume sdb1 should be mounted at /cdrom, not sda1 as this
script appears to be trying to do.

On my system, my internal hard drive shows up as sda1.  I am not sure why
that is the case; it is my work laptop and I think it is an internal IDE
drive.  But it shows up that way under both Herd3 and the Beta
distribution, and this error does not appear when booting Herd3.  The
drive is NTFS formatted, which is probably why it is not available during
boot; I haven't updated my fstab to mount it, and even if I did, it seems
it should not be mounted at /cdrom.
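
In case it helps anyone reproduce this, here is how one can double-check
which partition actually holds the LiveCD files from a booted live
session.  The device name and mount point below are from my setup and are
just examples.

# List the detected disks and partitions.
sudo fdisk -l

# Mount the candidate partition read-only and look for the casper files.
sudo mkdir -p /mnt/check
sudo mount -o ro /dev/sdb1 /mnt/check
ls /mnt/check/casper        # the squashfs image and initrd should be here
sudo umount /mnt/check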

-- 
feisty 20070210/herd5 persistent mode doesn't work
https://bugs.launchpad.net/bugs/84591