I have two systems that both use LVM. I recently added disks to both, and now the systems have trouble finding physical volumes at boot time. Only my SATA drives are affected. On one of the systems I have no useful log messages or output, since /home and /var are located on the volume group that can't be activated; however, I know the problem is caused by adding the new (PATA) disk, because when I disable that disk the system boots normally. The other system's SATA disk doesn't contain any essential mount points, so I can boot without it. Here's the relevant output from /var/log/boot:
Wed Jan 4 17:18:58 2006: Setting up LVM Volume Groups...
Wed Jan 4 17:18:58 2006: Reading all physical volumes. This may take a while...
Wed Jan 4 17:18:58 2006: Found volume group "LVM" using metadata type lvm2
Wed Jan 4 17:18:58 2006: Couldn't find device with uuid 'OeGY4s-sFL1-gTkY-LHmi-qW7H-jgm4-11dm3e'.
Wed Jan 4 17:18:58 2006: Couldn't find all physical volumes for volume group data.
Wed Jan 4 17:18:58 2006: Volume group "data" not found
Wed Jan 4 17:18:58 2006: /dev/LVM: opendir failed: No such file or directory
Wed Jan 4 17:18:58 2006: /dev/LVM: opendir failed: No such file or directory
Wed Jan 4 17:18:58 2006: /dev/LVM: opendir failed: No such file or directory
Wed Jan 4 17:18:58 2006: /dev/LVM: opendir failed: No such file or directory
Wed Jan 4 17:18:58 2006: Couldn't find device with uuid 'OeGY4s-sFL1-gTkY-LHmi-qW7H-jgm4-11dm3e'.
Wed Jan 4 17:18:58 2006: Couldn't find all physical volumes for volume group data.
Wed Jan 4 17:18:58 2006: Volume group "data" not found
Wed Jan 4 17:18:58 2006: 4 logical volume(s) in volume group "LVM" now active
Wed Jan 4 17:18:58 2006: Couldn't find device with uuid 'OeGY4s-sFL1-gTkY-LHmi-qW7H-jgm4-11dm3e'.
Wed Jan 4 17:18:58 2006: Couldn't find all physical volumes for volume group data.
Wed Jan 4 17:18:58 2006: Unable to find volume group "data"
Wed Jan 4 17:18:58 2006: Checking all file systems...
Wed Jan 4 17:18:58 2006: fsck 1.37 (21-Mar-2005)
Wed Jan 4 17:18:58 2006: /home: clean, 672/5046272 files, 190265/10084352 blocks
Wed Jan 4 17:18:58 2006: /opt: clean, 11/256032 files, 40397/512000 blocks
Wed Jan 4 17:18:58 2006: /usr: clean, 77528/2621440 files, 365358/5242880 blocks
Wed Jan 4 17:18:58 2006: /var: clean, 24799/1310720 files, 243112/2621440 blocks
Wed Jan 4 17:18:58 2006: fsck.ext3: No such file or directory while trying to open /dev/mapper/data-files
Wed Jan 4 17:18:58 2006: /dev/mapper/data-files:
Wed Jan 4 17:18:58 2006: The superblock could not be read or does not describe a correct ext2
Wed Jan 4 17:18:58 2006: filesystem. If the device is valid and it really contains an ext2
Wed Jan 4 17:18:58 2006: filesystem (and not swap or ufs or something else), then the superblock
Wed Jan 4 17:18:58 2006: is corrupt, and you might try running e2fsck with an alternate superblock:
Wed Jan 4 17:18:58 2006: e2fsck -b 8193 <device>
Wed Jan 4 17:18:58 2006:
Wed Jan 4 17:18:58 2006: fsck.ext3: No such file or directory while trying to open /dev/mapper/data-ftp
Wed Jan 4 17:18:58 2006: /dev/mapper/data-ftp:
Wed Jan 4 17:18:58 2006: The superblock could not be read or does not describe a correct ext2
Wed Jan 4 17:18:58 2006: filesystem. If the device is valid and it really contains an ext2
Wed Jan 4 17:18:58 2006: filesystem (and not swap or ufs or something else), then the superblock
Wed Jan 4 17:18:58 2006: is corrupt, and you might try running e2fsck with an alternate superblock:
Wed Jan 4 17:18:58 2006: e2fsck -b 8193 <device>
Wed Jan 4 17:18:58 2006:
Wed Jan 4 17:18:58 2006: fsck.ext2: No such file or directory while trying to open /dev/mapper/data-backup
Wed Jan 4 17:18:58 2006: /dev/mapper/data-backup:
Wed Jan 4 17:18:58 2006: The superblock could not be read or does not describe a correct ext2
Wed Jan 4 17:18:58 2006: filesystem.
If the device is valid and it really contains an ext2
Wed Jan 4 17:18:58 2006: filesystem (and not swap or ufs or something else), then the superblock
Wed Jan 4 17:18:58 2006: is corrupt, and you might try running e2fsck with an alternate superblock:
Wed Jan 4 17:18:58 2006: e2fsck -b 8193 <device>
Wed Jan 4 17:18:58 2006:
Wed Jan 4 17:18:58 2006: fsck.ext3: No such file or directory while trying to open /dev/mapper/data-cvs
Wed Jan 4 17:18:58 2006: /dev/mapper/data-cvs:
Wed Jan 4 17:18:58 2006: The superblock could not be read or does not describe a correct ext2
Wed Jan 4 17:18:58 2006: filesystem. If the device is valid and it really contains an ext2
Wed Jan 4 17:18:58 2006: filesystem (and not swap or ufs or something else), then the superblock
Wed Jan 4 17:18:58 2006: is corrupt, and you might try running e2fsck with an alternate superblock:
Wed Jan 4 17:18:58 2006: e2fsck -b 8193 <device>
Wed Jan 4 17:18:58 2006:
Wed Jan 4 17:18:58 2006:
Wed Jan 4 17:18:58 2006: fsck failed. Please repair manually.
Wed Jan 4 17:18:58 2006:
Wed Jan 4 17:18:58 2006: CONTROL-D will exit from this shell and continue system startup.
Wed Jan 4 17:18:58 2006:
Wed Jan 4 17:18:58 2006: Give root password for maintenance
Wed Jan 4 17:18:58 2006: (or type Control-D to continue):

If I then type Ctrl-D to continue, the system boots without further errors (except that mount -a fails to mount the affected filesystems). Once the system is up, vgscan or vgchange -a y gives the same output as seen in the boot log:

Wed Jan 4 17:18:58 2006: Reading all physical volumes. This may take a while...
Wed Jan 4 17:18:58 2006: Found volume group "LVM" using metadata type lvm2
Wed Jan 4 17:18:58 2006: Couldn't find device with uuid 'OeGY4s-sFL1-gTkY-LHmi-qW7H-jgm4-11dm3e'.
Wed Jan 4 17:18:58 2006: Couldn't find all physical volumes for volume group data.
Wed Jan 4 17:18:58 2006: Volume group "data" not found

But if I run pvscan -u, it gives:

PV /dev/hda4 with UUID FG2ZsQ-yDFm-yITh-KYUv-DBWg-t9DS-R9C1kq VG LVM  lvm2 [68.96 GB / 0 free]
PV /dev/sda1 with UUID OeGY4s-sFL1-gTkY-LHmi-qW7H-jgm4-11dm3e VG data lvm2 [279.47 GB / 0 free]
PV /dev/hdc1 with UUID cYLItd-IbpQ-27aY-ycuX-0hLQ-9wE5-72wxk5 VG data lvm2 [186.30 GB / 153.80 GB free]
Total: 3 [534.73 GB] / in use: 3 [534.73 GB] / in no VG: 0 [0 ]

which is the output that would normally occur (you can see the afflicted PV is /dev/sda1). After running pvscan I can then do vgchange -a y and activate all volume groups normally.

I'm not sure what causes this, but as I said, I recently added new disks to the system, and my other system has the same problem (also with its SATA partitions) after adding a new disk. On that system, disabling the new disk fixes the problem; I cannot disable the new disk on this system, since it is part of the afflicted volume group and contains data.

Has anyone had a similar problem? Does anyone know a solution? Thanks for your help.

Here is my lvm.conf:

# This is an example configuration file for the LVM2 system.
# It contains the default settings that would be used if there was no
# /etc/lvm/lvm.conf file.
#
# Refer to 'man lvm.conf' for further information including the file layout.
#
# To put this file in a different directory and override /etc/lvm set
# the environment variable LVM_SYSTEM_DIR before running the tools.

# This section allows you to configure which block devices should
# be used by the LVM system.
devices {

    # Where do you want your volume groups to appear ?
    dir = "/dev"

    # An array of directories that contain the device nodes you wish
    # to use with LVM2.
    scan = [ "/dev" ]

    # A filter that tells LVM2 to only use a restricted set of devices.
    # The filter consists of an array of regular expressions.
    # These expressions can be delimited by a character of your choice, and
    # prefixed with either an 'a' (for accept) or 'r' (for reject).
    # The first expression found to match a device name determines if
    # the device will be accepted or rejected (ignored). Devices that
    # don't match any patterns are accepted.

    # Be careful if there are symbolic links or multiple filesystem
    # entries for the same device as each name is checked separately against
    # the list of patterns. The effect is that if any name matches any 'a'
    # pattern, the device is accepted; otherwise if any name matches any 'r'
    # pattern it is rejected; otherwise it is accepted.

    # Remember to run vgscan after you change this parameter to ensure
    # that the cache file gets regenerated (see below).

    # By default we accept every block device
    # filter = [ "a/.*/" ]

    # Exclude the cdrom drive
    filter = [ "r|/dev/cdrom|" ]

    # When testing I like to work with just loopback devices:
    # filter = [ "a/loop/", "r/.*/" ]

    # Or maybe all loops and ide drives except hdc:
    # filter = [ "a|loop|", "r|/dev/hdc|", "a|/dev/ide|", "r|.*|" ]

    # Use anchors if you want to be really specific
    # filter = [ "a|^/dev/hda8$|", "r/.*/" ]

    # The results of the filtering are cached on disk to avoid
    # rescanning dud devices (which can take a very long time). By
    # default this cache file is hidden in the /etc/lvm directory.
    # It is safe to delete this file: the tools regenerate it.
    cache = "/etc/lvm/.cache"

    # You can turn off writing this cache file by setting this to 0.
    write_cache_state = 1

    # Advanced settings.

    # List of pairs of additional acceptable block device types found
    # in /proc/devices with maximum (non-zero) number of partitions.
    # types = [ "fd", 16 ]

    # If sysfs is mounted (2.6 kernels) restrict device scanning to
    # the block devices it believes are valid.
    # 1 enables; 0 disables.
    sysfs_scan = 1

    # By default, LVM2 will ignore devices used as components of
    # software RAID (md) devices by looking for md superblocks.
    # 1 enables; 0 disables.
    md_component_detection = 1
}

# This section allows you to configure the nature of the
# information that LVM2 reports.
log {

    # Controls the messages sent to stdout or stderr.
    # There are three levels of verbosity, 3 being the most verbose.
    verbose = 0

    # Should we send log messages through syslog?
    # 1 is yes; 0 is no.
    syslog = 1

    # Should we log error and debug messages to a file?
    # By default there is no log file.
    #file = "/var/log/lvm2.log"

    # Should we overwrite the log file each time the program is run?
    # By default we append.
    overwrite = 0

    # What level of log messages should we send to the log file and/or syslog?
    # There are 6 syslog-like log levels currently in use - 2 to 7 inclusive.
    # 7 is the most verbose (LOG_DEBUG).
    level = 0

    # Format of output messages
    # Whether or not (1 or 0) to indent messages according to their severity
    indent = 1

    # Whether or not (1 or 0) to display the command name on each line output
    command_names = 0

    # A prefix to use before the message text (but after the command name,
    # if selected). Default is two spaces, so you can see/grep the severity
    # of each message.
    prefix = "  "

    # To make the messages look similar to the original LVM tools use:
    #   indent = 0
    #   command_names = 1
    #   prefix = "  -- "

    # Set this if you want log messages during activation.
    # Don't use this in low memory situations (can deadlock).
    # activation = 0
}

# Configuration of metadata backups and archiving. In LVM2 when we
# talk about a 'backup' we mean making a copy of the metadata for the
# *current* system. The 'archive' contains old metadata configurations.
# Backups are stored in a human readable text format.
backup {

    # Should we maintain a backup of the current metadata configuration ?
    # Use 1 for Yes; 0 for No.
    # Think very hard before turning this off!
    backup = 1

    # Where shall we keep it ?
    # Remember to back up this directory regularly!
    backup_dir = "/etc/lvm/backup"

    # Should we maintain an archive of old metadata configurations.
    # Use 1 for Yes; 0 for No.
    # On by default. Think very hard before turning this off.
    archive = 1

    # Where should archived files go ?
    # Remember to back up this directory regularly!
    archive_dir = "/etc/lvm/archive"

    # What is the minimum number of archive files you wish to keep ?
    retain_min = 10

    # What is the minimum time you wish to keep an archive file for ?
    retain_days = 30
}

# Settings for running LVM2 in shell (readline) mode.
shell {

    # Number of lines of history to store in ~/.lvm_history
    history_size = 100
}

# Miscellaneous global LVM2 settings
global {

    # The file creation mask for any files and directories created.
    # Interpreted as octal if the first digit is zero.
    umask = 077

    # Allow other users to read the files
    #umask = 022

    # Enabling test mode means that no changes to the on disk metadata
    # will be made. Equivalent to having the -t option on every
    # command. Defaults to off.
    test = 0

    # Whether or not to communicate with the kernel device-mapper.
    # Set to 0 if you want to use the tools to manipulate LVM metadata
    # without activating any logical volumes.
    # If the device-mapper kernel driver is not present in your kernel
    # setting this to 0 should suppress the error messages.
    activation = 1

    # If we can't communicate with device-mapper, should we try running
    # the LVM1 tools?
    # This option only applies to 2.4 kernels and is provided to help you
    # switch between device-mapper kernels and LVM1 kernels.
    # The LVM1 tools need to be installed with .lvm1 suffixes
    # e.g. vgscan.lvm1 and they will stop working after you start using
    # the new lvm2 on-disk metadata format.
    # The default value is set when the tools are built.
    # fallback_to_lvm1 = 0

    # The default metadata format that commands should use - "lvm1" or "lvm2".
    # The command line override is -M1 or -M2.
    # Defaults to "lvm1" if compiled in, else "lvm2".
    # format = "lvm1"

    # Location of proc filesystem
    proc = "/proc"

    # Type of locking to use. Defaults to file-based locking (1).
    # Turn locking off by setting to 0 (dangerous: risks metadata corruption
    # if LVM2 commands get run concurrently).
    locking_type = 1

    # Local non-LV directory that holds file-based locks while commands are
    # in progress. A directory like /tmp that may get wiped on reboot is OK.
    locking_dir = "/var/lock/lvm"

    # Other entries can go here to allow you to load shared libraries
    # e.g. if support for LVM1 metadata was compiled as a shared library use
    #   format_libraries = "liblvm2format1.so"
    # Full pathnames can be given.

    # Search this directory first for shared libraries.
    #   library_dir = "/lib/lvm2"

    # Enable these three for cluster LVM when clvmd is running.
    # Remember to remove the "locking_type = 1" above.
    #
    # locking_library = "liblvm2clusterlock.so"
    # locking_type = 2
    # library_dir = "/lib/lvm2"
}

activation {

    # Device used in place of missing stripes if activating incomplete volume.
    # For now, you need to set this up yourself first (e.g. with 'dmsetup')
    # For example, you could make it return I/O errors using the 'error'
    # target or make it return zeros.
    missing_stripe_filler = "/dev/ioerror"

    # Size (in KB) of each copy operation when mirroring
    mirror_region_size = 512

    # How much stack (in KB) to reserve for use while devices suspended
    reserved_stack = 256

    # How much memory (in KB) to reserve for use while devices suspended
    reserved_memory = 8192

    # Nice value used while devices suspended
    process_priority = -18

    # If volume_list is defined, each LV is only activated if there is a
    # match against the list.
    #   "vgname" and "vgname/lvname" are matched exactly.
    #   "@tag" matches any tag set in the LV or VG.
    #   "@*" matches if any tag defined on the host is also set in the LV or VG
    #
    # volume_list = [ "vg1", "vg2/lvol1", "@tag1", "@*" ]
}

####################
# Advanced section #
####################

# Metadata settings
#
# metadata {
    # Default number of copies of metadata to hold on each PV. 0, 1 or 2.
    # It's best to leave this at 2.
    # You might want to override it from the command line with 0 or 1
    # when running pvcreate on new PVs which are to be added to large VGs.
    # pvmetadatacopies = 2

    # Approximate default size of on-disk metadata areas in sectors.
    # You should increase this if you have large volume groups or
    # you want to retain a large on-disk history of your metadata changes.
    # pvmetadatasize = 255

    # List of directories holding live copies of text format metadata.
    # These directories must not be on logical volumes!
    # It's possible to use LVM2 with a couple of directories here,
    # preferably on different (non-LV) filesystems, and with no other
    # on-disk metadata (pvmetadatacopies = 0). Or this can be in
    # addition to on-disk metadata areas.
    # The feature was originally added to simplify testing and is not
    # supported under low memory situations - the machine could lock up.
    #
    # Never edit any files in these directories by hand unless you
    # are absolutely sure you know what you are doing! Use
    # the supplied toolset to make changes (e.g. vgcfgrestore).
    # dirs = [ "/etc/lvm/metadata", "/mnt/disk2/lvm/metadata2" ]
#}
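PS: One workaround I'm considering, based on the cache comments in the config above (untested on my side, so treat it as a guess): the filtering results are cached in /etc/lvm/.cache, and that cache may have been written before the new disk's device nodes existed, which would explain why vgscan fails until a fresh pvscan is run. Since the comments say the cache file is safe to delete and that write_cache_state controls whether it is written at all, I could disable it in the devices section:

```
# devices section of /etc/lvm/lvm.conf -- untested guess:
# do a full device scan at boot instead of trusting /etc/lvm/.cache
write_cache_state = 0
```

Deleting /etc/lvm/.cache once and re-running vgscan should test the same theory without making a permanent change.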