This is repeatable on a fresh Karmic installation that used guided
partitioning with LVM, allocating the entire first disk to LVM. Adding
a second disk to the root volume group then causes grub-probe to
segfault.

In my case, I ran this on a guest in VMware ESX. 
r...@portwise:~# grub-probe / --target=abstraction
lvm
r...@portwise:~# pvcreate /dev/sdb1
  Physical volume "/dev/sdb1" successfully created
r...@portwise:~# vgextend portwise /dev/sdb1
  Volume group "portwise" successfully extended
r...@portwise:~# grub-probe / --target=abstraction
Segmentation fault
r...@portwise:~# vgreduce portwise /dev/sdb1
  Removed "/dev/sdb1" from volume group "portwise"
r...@portwise:~# grub-probe / --target=abstraction
lvm


r...@portwise:~# vgdisplay 
  --- Volume group ---
  VG Name               portwise
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  5
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               7.76 GB
  PE Size               4.00 MB
  Total PE              1986
  Alloc PE / Size       1978 / 7.73 GB
  Free  PE / Size       8 / 32.00 MB
  VG UUID               kg1HJo-SLhd-b38k-CSlP-1Xjg-MJfm-NQeVEO
   
r...@portwise:~# pvdisplay 
  --- Physical volume ---
  PV Name               /dev/sda1
  VG Name               portwise
  PV Size               7.76 GB / not usable 2.18 MB
  Allocatable           yes 
  PE Size (KByte)       4096
  Total PE              1986
  Free PE               8
  Allocated PE          1978
  PV UUID               xf0qok-1aTX-Fjct-AANy-UNr0-erfR-R3g1Qf
   
  "/dev/sdb1" is a new physical volume of "8.00 GB"
  --- NEW Physical volume ---
  PV Name               /dev/sdb1
  VG Name               
  PV Size               8.00 GB
  Allocatable           NO
  PE Size (KByte)       0
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               zwacOP-OyHn-4pwQ-icjA-wv3B-X1wY-oaUgPD
   
r...@portwise:~# lvdisplay 
  --- Logical volume ---
  LV Name                /dev/portwise/root
  VG Name                portwise
  LV UUID                IqBJtL-NXIt-igT3-tAOG-PGWi-B6d4-V8d5k3
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                7.35 GB
  Current LE             1881
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:0
   
  --- Logical volume ---
  LV Name                /dev/portwise/swap_1
  VG Name                portwise
  LV UUID                Nsrnjg-0VL3-48u1-rZ4i-MV4t-mj2z-j6uouw
  LV Write Access        read/write
  LV Status              available
  # open                 2
  LV Size                388.00 MB
  Current LE             97
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:1

As the reporter already noticed, in grub_lvm_memberlist pv->disk is
NULL for one of the member disks. In the equivalent code in raid.c
(grub_raid_memberlist), this condition is tested for. The attached
patch adds the same test to lvm.c, which fixes the problem. I have not
determined why pv->disk is NULL, so this might be an incorrect
solution, but it is a workaround that enabled me to upgrade my kernel.
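
For reference, here is a minimal sketch of the guarded loop, modelled
on the grub_raid_memberlist pattern and on the memberlist code in
disk/lvm.c from that era. The struct and field names follow the GRUB 2
source as I recall it; the attached patch is authoritative, and it may
report an error rather than silently skip the PV as this sketch does.

/* disk/lvm.c (sketch): build the list of member disks for an LV,
   skipping any physical volume whose disk was never opened, so that
   pv->disk is not dereferenced while NULL.  */
static grub_disk_memberlist_t
grub_lvm_memberlist (grub_disk_t disk)
{
  struct grub_lvm_lv *lv = disk->data;
  grub_disk_memberlist_t list = NULL, tmp;
  struct grub_lvm_pv *pv;

  if (lv->vg->pvs)
    for (pv = lv->vg->pvs; pv; pv = pv->next)
      {
        /* The added test, mirroring grub_raid_memberlist: ignore a PV
           with no backing disk instead of crashing on it.  */
        if (! pv->disk)
          continue;

        tmp = grub_malloc (sizeof (*tmp));
        tmp->disk = pv->disk;
        tmp->next = list;
        list = tmp;
      }

  return list;
}

With a guard like this, grub-probe simply omits the freshly added
/dev/sdb1 from the member list instead of segfaulting, which appears
to be enough for the kernel upgrade and its grub hooks to complete.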

** Attachment added: "971_lvm_ignore_pv_wo_disk.diff"
   http://launchpadlibrarian.net/37401639/971_lvm_ignore_pv_wo_disk.diff

-- 
grub-probe crashed with SIGSEGV in __libc_start_main()
https://bugs.launchpad.net/bugs/444829