Thanks. I did some more debugging with LVM and realised that lvm always uses the last device it has scanned. Scanning of devices is triggered by udev rules via the "lvm pvscan --cache <device>" command. So the reason /dev/sdb2 is used instead of /dev/md126p2 is that udev runs lvm in the following order:

1. lvm pvscan --cache /dev/md126p2
2. lvm pvscan --cache /dev/sda2
3. lvm pvscan --cache /dev/sdb2
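To double-check this "last device wins" behaviour, the scans can be repeated by hand and the pvs output compared after each step (device names are from my setup; this assumes lvmetad is in use, as it appears to be on the live image):

[root@localhost ~]# pvs -o pv_name,vg_name
[root@localhost ~]# lvm pvscan --cache /dev/md126p2
[root@localhost ~]# pvs -o pv_name,vg_name
[root@localhost ~]# lvm pvscan --cache /dev/sdb2
[root@localhost ~]# pvs -o pv_name,vg_name

If pvs reports the PV on whatever device was scanned last, that would confirm the ordering above is what makes /dev/sdb2 win.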
But there were no /dev/sda2 and /dev/sdb2 at all before running anaconda:

[root@localhost ~]# ls -ld /dev/md* /dev/sd*
drwxr-xr-x. 2 root root      120 May 29 03:43 /dev/md
brw-rw----. 1 root disk   9, 126 May 29 03:43 /dev/md126
brw-rw----. 1 root disk 259,   0 May 29 03:43 /dev/md126p1
brw-rw----. 1 root disk 259,   1 May 29 03:43 /dev/md126p2
brw-rw----. 1 root disk   9, 127 May 29 03:43 /dev/md127
brw-rw----. 1 root disk   8,   0 May 29 03:43 /dev/sda
brw-rw----. 1 root disk   8,  16 May 29 03:43 /dev/sdb
brw-rw----. 1 root disk   8,  32 May 29 03:43 /dev/sdc
brw-rw----. 1 root disk   8,  33 May 29 03:43 /dev/sdc1
brw-rw----. 1 root disk   8,  34 May 29 03:43 /dev/sdc2
brw-rw----. 1 root disk   8,  48 May 29 03:43 /dev/sdd
brw-rw----. 1 root disk   8,  49 May 29 03:43 /dev/sdd1
brw-rw----. 1 root disk   8,  50 May 29 03:43 /dev/sdd2
brw-rw----. 1 root disk   8,  64 May 29 03:43 /dev/sde

They appear only after launching anaconda:

[root@localhost ~]# ls -ld /dev/md* /dev/sd*
drwxr-xr-x. 2 root root      120 May 29 03:47 /dev/md
brw-rw----. 1 root disk   9, 126 May 29 03:47 /dev/md126
brw-rw----. 1 root disk 259,   2 May 29 03:47 /dev/md126p1
brw-rw----. 1 root disk 259,   3 May 29 03:47 /dev/md126p2
brw-rw----. 1 root disk   9, 127 May 29 03:46 /dev/md127
brw-rw----. 1 root disk   8,   0 May 29 03:47 /dev/sda
brw-rw----. 1 root disk   8,   1 May 29 03:47 /dev/sda1
brw-rw----. 1 root disk   8,   2 May 29 03:47 /dev/sda2
brw-rw----. 1 root disk   8,  16 May 29 03:47 /dev/sdb
brw-rw----. 1 root disk   8,  17 May 29 03:47 /dev/sdb1
brw-rw----. 1 root disk   8,  18 May 29 03:47 /dev/sdb2
brw-rw----. 1 root disk   8,  32 May 29 03:46 /dev/sdc
brw-rw----. 1 root disk   8,  33 May 29 03:46 /dev/sdc1
brw-rw----. 1 root disk   8,  34 May 29 03:46 /dev/sdc2
brw-rw----. 1 root disk   8,  48 May 29 03:47 /dev/sdd
brw-rw----. 1 root disk   8,  49 May 29 03:47 /dev/sdd1
brw-rw----. 1 root disk   8,  50 May 29 03:47 /dev/sdd2
brw-rw----. 1 root disk   8,  64 May 29 03:47 /dev/sde

So the root problem is not in lvm. The root problem is: why do the /dev/sda1, /dev/sda2, /dev/sdb1 and /dev/sdb2 devices appear at all? They should not exist, because /dev/sda and /dev/sdb are members of the /dev/md126 raid.

I'm not insisting that 'udevadm settle' is the reason. But where should I do further research? (The checks I plan to try next are listed below the quoted message.)

2015-05-28 13:06 GMT+03:00 Lennart Poettering <lenn...@poettering.net>:

> On Thu, 28.05.15 11:10, Oleg Samarin (osamari...@gmail.com) wrote:
>
> > Hi!
> >
> > I have an imsm raid-1 device /dev/md126 assembled of /dev/sda and
> > /dev/sdb.
> > I have a lvm group on top of /dev/md126p2 with some logical volumes. All
> > this work fine with Fedora 21.
> >
> > I'm trying to fresh install Fedora 22 in some of lvm logical volume. I
> > boot with Fedora USB live media and run "Install to hard disk". But
> > anaconda does not see any existing lvm volumes so I can not choose them
> > as a destination.
>
> Please ask LVM people for help on this, the systemd mailing list is
> really not the right forum for this.
>
> Thanks,
>
> Lennart
>
> --
> Lennart Poettering, Red Hat
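The checks I plan to try next (just a plan so far, no results yet):

[root@localhost ~]# cat /proc/mdstat
[root@localhost ~]# mdadm --detail /dev/md126
[root@localhost ~]# udevadm info /dev/sda1
[root@localhost ~]# udevadm monitor --kernel --udev

The first two should confirm that /dev/sda and /dev/sdb really are members of /dev/md126. "udevadm info" should show where the sda1/sda2/sdb1/sdb2 nodes come from, and running "udevadm monitor" in a second terminal while starting anaconda should show what triggers the "add" events for them, i.e. what re-reads the partition tables of the raid member disks.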