Same issues on a Karmic server after adding a new disk to the LVM VG that was
created at install. When GRUB updates during apt-get, it segfaults with
error 139 (exit code 128 + 11, i.e. SIGSEGV).
Adding the extra disk to the device.map as per stiV's comment has fixed
this issue. apt-get now completes successfully and grub.cfg is generated OK.
--
I just solved my problem by manually adding the following line to
/boot/grub/device.map:
(hd1) /dev/sdb
so now the file has both HDDs:
(hd0) /dev/sda
(hd1) /dev/sdb
and everything works as expected now.
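For anyone applying the same workaround, a minimal sketch of the steps, assuming the new disk is /dev/sdb and should map to (hd1); adjust both names to your own setup:

# Back up the current map, then append the missing disk
sudo cp /boot/grub/device.map /boot/grub/device.map.bak
echo "(hd1) /dev/sdb" | sudo tee -a /boot/grub/device.map
# Verify both disks are now listed
cat /boot/grub/device.map
# Re-run the step that was segfaulting
sudo update-grub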
--
grub-probe crashed with SIGSEGV in __libc_start_main()
https://bugs.launchpa
I am still having this problem, exactly as described here. Installed
Ubuntu using LVM and added another physical hard drive to the
volume group. Everything worked except for the latest kernel update:
Setting up linux-image-2.6.31-20-generic (2.6.31-20.57) ...
Running depmod.
update-initramfs: Gene
Thanks for your report, and thanks to Mattias for your patch. I believe
that this is the same as something which has been fixed a bit
differently upstream and hence in Lucid. Here are the references:
http://bazaar.launchpad.net/~ubuntu-core-dev/ubuntu/lucid/grub2/lucid/revision/1855.8.174
h
This is repeatable on a fresh installation of Karmic, installed using
guided partitioning with LVM and allocating the entire first disk to
LVM. Adding a second disk to the root volume group then causes grub
to segfault.
In my case, I ran this on a guest in VMware ESX.
r...@portwise:~# grub-prob
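For reference, a hedged sketch of the reproduction steps, assuming the second disk is /dev/sdb and the installer named the root volume group "Ubuntu" (both names are assumptions; substitute your own):

pvcreate /dev/sdb          # prepare the new disk as a physical volume
vgextend Ubuntu /dev/sdb   # add it to the root volume group
grub-probe -t device /     # grub-probe then segfaults probing the LVM root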
Some additional information:
Backtrace:
#0  grub_lvm_memberlist (disk=0x807f008) at /home/claudio/Build/grub2-1.97~beta4/disk/lvm.c:72
#1  0x080492e1 in probe (path=0x0, device_name=0xb93a "/dev/mapper/Ubuntu-root") at /home/claudio/Build/grub2-1.97~beta4/util/grub-probe.c:164
#2  0x080498
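For anyone who wants to capture the same backtrace, a minimal sketch using gdb, assuming a locally built grub-probe with debug symbols (the binary path and probe target are assumptions):

gdb --args ./grub-probe -t device /
(gdb) run    # runs until the SIGSEGV in grub_lvm_memberlist
(gdb) bt     # prints a stack trace like the one above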