** Changed in: debian
Status: Unknown => Fix Released
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/292302
Title:
Raidset stays inactive due to wrong # of devices
I am using Ubuntu 10.04 Kernel 2.6.32-22-generic
Asus K8V SE Deluxe BIOS: AMIBIOS Version 08.00.09 ID: A0058002
Promise controller deactivated
4 IDE hard disks (Samsung SP1634N) configured as RAID-0, connected via the VIA
VT8237 controller
All hard disks are shown identically in the BIOS.
I created the RA
This was fixed in a Debian release which appeared in Ubuntu
Jaunty. Seeing as this bug has not been touched in well over a year, I
will assume that it was fixed in the Debian release and mark this as Fix
Released. Of course, if this is still causing anyone problems in recent
releases, please
Sorry, the output of dmraid -ay in the initramfs console was incorrect. The
correct output is:
# dmraid -ay
RAID set "isw_baeaijeeda_cero" already active
RAID set "isw_baeaijeeda_cinco" already active
ERROR: adding /dev/mapper/isw_baeaijeeda_cinco to raid set
RAID set "isw_baeaijeeda_cero2"
Now I have tested 3 dmraid versions: ubuntu repository, Phillip Susi,
and Giuseppe Iuculano.
When I chroot from Ubuntu 8.04 into the Ubuntu 8.10 root partition, everything
seems ok, dmraid -ay has normal output (the same behavior with the 3
dmraid versions).
But when I try to boot Ubuntu 8.10 these erro
My system: intel ICH9R, 4 hard disks, two raid arrays (raid0 and raid5).
Ubuntu 8.04 installed on raid0 array works well with dmraid
1.0.0.rc14-0ubuntu3.1, capable of read/write raid0 and raid5.
Ubuntu 8.10 Intrepid could not boot because dmraid prints this error 8 times:
ERROR: isw device for vol
(Fix Committed -> Triaged since this has not yet been committed
somewhere that would result in it being in the next Ubuntu upload;
though I've drawn this bug to Luke's attention.)
** Changed in: dmraid (Ubuntu)
Status: Fix Committed => Triaged
--
I tested various versions for installing Ubuntu on my fakeraid system with an
Intel isw controller and RAID1.
I went through nearly every single error described here ...
Neither intrepid nor jaunty seemed to work.
My solution was to use dmraid 1.0.0.rc14-2ubuntu13 as Phil Susi described, and
then pin that version.
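For reference, holding an apt package at a specific version is usually done with an apt preferences entry; a minimal sketch (the package name and version are from this thread, the priority value is an assumption — anything above 1000 allows a downgrade and holds the pin):

```
# /etc/apt/preferences -- hold dmraid at the known-good version
Package: dmraid
Pin: version 1.0.0.rc14-2ubuntu13
Pin-Priority: 1001
```

The companion library package (shown later in this thread as libdmraid1.0.0.rc14) would need a matching stanza to stay in sync.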
Thanks!
After dmraid -rE /dev/sda and dmraid -rE /dev/sdb,
I created a new RAID array in the BIOS,
booted from a Live CD, and installed dmraid.
The RAID array works!
--
Peter Hong wrote:
> Note:
> I only create a raid 0 array(isw_bcdagehgbe_Volume0) in my BIOS setting.
> isw_cjgfhdfgic_SDD3 << This set was deleted in the BIOS, but it can still
> be found by dmraid.
Try dmraid -rE /dev/sda and dmraid -rE /dev/sdb
Note that this command will erase *all* RAID metadata
Sorry, ubuntu13 was my mistake.
I tried version 1.0.0.rc14-2ubuntu12.2.
It still doesn't work.
The output is as follows:
r...@ubuntu:/etc/apt# dmraid --version
dmraid version: 1.0.0.rc14 (2006.11.08) shared
dmraid library version: 1.0.0.rc14 (2006.11.08)
Probably from Philip Susi here:
https://bugs.launchpad.net/ubuntu/+source/dmraid/+bug/292302/comments/6
But you should try the latest one from Giuseppe...
Juel
--
Peter Hong wrote:
> Hi,
>
> I have a big problem.
>
> I created a RAID 1 (mirror) and installed Fedora on it.
> Now I want to change the OS to Ubuntu 8.10 and change the RAID setting to
> RAID 0.
>
> When I downloaded dmraid version 1.0.0.rc14-2ubuntu13 and ran "dmraid -ay"
ubuntu13? Where
Hi,
I have a big problem.
I created a RAID 1 (mirror) and installed Fedora on it.
Now I want to change the OS to Ubuntu 8.10 and change the RAID setting to RAID 0.
When I downloaded dmraid version 1.0.0.rc14-2ubuntu13 and ran "dmraid -ay",
the RAID didn't work.
ubu...@ubuntu:~$ sudo dmraid -ay
ER
Great! debdiff attached.
Giuseppe
** Attachment added: "Fix #292302 and #267953"
http://launchpadlibrarian.net/20226091/dmraid_1.0.0.rc14-2ubuntu12.2.debdiff
** Changed in: dmraid (Ubuntu)
Status: In Progress => Fix Committed
--
Nice!
I can confirm that everything is still OK with your new package here.
All raidsets become active and are working.
Well done, Juel
--
Ok, so we need an ack from Juel.
--
It works fine; I did an apt-get install. I also updated libdmraid.
There were no initramfs triggers, so I generated it myself. It boots
and I can use the disk.
[EMAIL PROTECTED]:/home/x$ dpkg -l | grep dmraid
ii  dmraid  1.0.0.rc14-2ubuntu12.2
Hi,
I've prepared a package, can you try it please?
echo "deb http://ppa.launchpad.net/giuseppe-iuculano/ubuntu intrepid main" >>
/etc/apt/sources.list
apt-get update
apt-get install dmraid=1.0.0.rc14-2ubuntu12.2
Giuseppe.
--
Since the other bug was marked as fixed, and its subject was the
removal of the raid10 patch from rc14, I'd say file a new bug with
details on what goes wrong with rc15 in Jaunty.
--
These are the bugs that I have open or messaged to. 276095 explains
my experience with rc15. I can open another bug for rc15 if you want.
http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=494278
https://bugs.launchpad.net/bugs/276095
https://bugs.launchpad.net/bugs/292302
On Tue, Dec 2, 2008 at
indy2718 wrote:
> I tried rc15 before and it didn't work, and I just tried jaunty dmraid
> rc15. 'Could not find metadata' when I boot and try dmraid -ay in the initramfs.
>
> https://bugs.launchpad.net/ubuntu/+source/dmraid/+bug/276095
>
> At worst case, I can keep a local copy of a dmraid package th
Phillip Susi wrote:
> Unfortunately fixing the raid10 patch is a lot more complicated than I
> thought so I have given up. The isw raid10 support apparently was
> implemented differently in rc15 and works properly so I suggest just
> backporting that.
>
>
Hello, thank you for the attempt.
I
Unfortunately fixing the raid10 patch is a lot more complicated than I
thought so I have given up. The isw raid10 support apparently was
implemented differently in rc15 and works properly so I suggest just
backporting that.
--
This solution worked for me ->
https://bugs.launchpad.net/ubuntu/+source/dmraid/+bug/292302/comments/6
But now I have a problem mounting my NTFS filesystem. Does
anybody have the same problem?
I get these errors:
$MFT has invalid magic.
Failed to load $MFT: Input/output error
Failed to
yonish, your issue does not appear to be related to this one. It looks
like your sdb has both sil and isw metadata on it and dmraid is using
the isw, but the other disk is presumably sil. If you aren't using an
Intel Matrix Storage controller then you need to erase the isw metadata
with sudo dmraid -rE.
I tried downgrading, and I can't tell whether the downgrade didn't work or
it's just not working:
[EMAIL PROTECTED]:~$ sudo dmraid -ay
/dev/sdb: "sil" and "isw" formats discovered (using isw)!
ERROR: isw device for volume "Volume0" broken on /dev/sdb in RAID set
"isw_baiacbfgeh_Volume0"
ER
# sudo fdisk -lu /dev/sda
Warning: invalid flag 0x of partition table 5 will be corrected by w(rite)
Disk /dev/sda: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
Units = sectors of 1 * 512 = 512 bytes
Disk identifier: 0x7e7498c6
Device B
Hrm... strange, can you post the output of sudo fdisk -lu /dev/sda?
--
metadata attached
** Attachment added: "iswdat.tar"
http://launchpadlibrarian.net/19435220/iswdat.tar
--
indy, those files do not appear to contain metadata for some reason.
Try this instead:
sudo dd if=/dev/sda of=sda_isw.dat skip=976773165 bs=512
Repeat for each disk.
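The skip value lines up with the fdisk output earlier in the thread: the disk has 976773168 sectors, and the isw metadata sits in the last few of them. A small sketch of the arithmetic (the 3-sector metadata size is an assumption inferred purely from the numbers above, not from dmraid documentation):

```python
# Sketch: derive the dd skip offset for dumping trailing isw metadata.
# ASSUMPTION: the metadata occupies the final 3 sectors, inferred from
# skip=976773165 on a 976773168-sector disk (see the dd command above).

SECTOR_BYTES = 512      # sector size reported by fdisk -lu
METADATA_SECTORS = 3    # assumed metadata size at the end of the disk

def metadata_skip(total_sectors: int) -> int:
    """Sector offset where the trailing metadata region starts."""
    return total_sectors - METADATA_SECTORS

# The 500 GB disk from the fdisk output in this thread:
print(metadata_skip(976773168))  # -> 976773165
```

The same dd line can then be built for a disk of any size by reading its total sector count from sudo fdisk -lu.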
--
Hello!
The new version (dmraid 1.0.0.rc14-2ubuntu13) works fine for me now
(following the steps above):
[EMAIL PROTECTED]:~$ sudo dmraid -ay
RAID set "isw_dejcdcjhf_Storage" already active
RAID set "isw_dejcdcjhf_Video_Storage" already active
RAID set "isw_ecbdhhhfe_Linux" already active
RAID se
Adding dmraid -rD outputs
** Attachment added: "isw.tar"
http://launchpadlibrarian.net/19393215/isw.tar
--
I installed the package but didn't reboot.
[EMAIL PROTECTED]:/home/x# dmraid -rD
/dev/sdd: isw, "isw_jceibccac", GROUP, ok, 976773165 sectors, data@ 0
/dev/sdc: isw, "isw_jceibccac", GROUP, ok, 976773165 sectors, data@ 0
/dev/sdb: isw, "isw_jceibccac", GROUP, ok, 976773165 sectors, data@ 0
/dev/sd
Could you attach the generated output files of dmraid -rD?
--
Hello, I was the user that the raid 10 patch was re-added for. I tried
this new dmraid using apt-get from your repository, and it doesn't work
for me. I am running a custom kernel, 2.6.27, with the latest Ubuntu
Intrepid. Core 2.
The screenshot is from bootup; it bails out to the initramfs prompt.
My ra
Cheers mate, well done!
Works nice and stable :)
Get:1 http://ppa.launchpad.net intrepid/main libdmraid1.0.0.rc14
1.0.0.rc14-2ubuntu13 [80.9kB]
Get:2 http://ppa.launchpad.net intrepid/main dmraid 1.0.0.rc14-2ubuntu13
[28.5kB]
Fetched 109kB in 0s (178kB/s)
Selecting previously deselected package
Ok, to use my test package add the following to your sources.list:
deb http://ppa.launchpad.net/psusi/ubuntu intrepid main
deb-src http://ppa.launchpad.net/psusi/ubuntu intrepid main
Then when you install or upgrade dmraid (don't forget to apt-get update
after changing sources.list) you should
That's great news; I'll be happy to test it as soon as I get some
spare time.
--
I have looked at the patch and the problem appears to be the changes it
makes to name(). Originally, name() was passed the isw_dev it should
operate on, which corresponds to the RAID volume. The patch changes it
to be passed the raid_dev, and then it finds the isw_dev itself from the
raid_dev, only
Good catch, it appears this is caused by this:
dmraid (1.0.0.rc14-2ubuntu9) intrepid; urgency=low
* debian/control: dmraid and dmraid-udeb should depend on dmsetup and
dmsetup-udeb respectively, to ensure UUID symlinks are correctly
created.
* debian/patches/07_isw-raid10-nested.dpatc
That's probably it, thanks!
Downgrading to 1.0.0.rc14-0ubuntu3 from Hardy solves the problem for now.
--
I think this is related to Debian bug #494278; it seems that
07_isw-raid10-nested.dpatch causes this issue.
** Bug watch added: Debian Bug tracker #494278
http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=494278
** Also affects: debian via
http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=494278