also sprach Henrique de Moraes Holschuh [2009.04.30.1925
+0200]:
> Where can I read about that, before I freak out needlessly? :-)
In the ANNOUNCE files and the upstream changelog, neither of which is
included in the Debian package, as a result of some weird chain of
events.
Try
http://git.de
On Thu, 30 Apr 2009, martin f krafft wrote:
> also sprach Henrique de Moraes Holschuh [2009.04.30.1615
> +0200]:
> > 1.0 superblocks are widely used. Please don't do that. Either
> > implement support for both, or use mdadm (which knows both).
> >
> > This kind of stuff really should not be done halfway; it can
> > surprise someone into a data loss scenario.
On Thu, 30 Apr 2009, Boyd Stephen Smith Jr. wrote:
> He who codes, decides. Either put forth the effort to
> design/write/review/test/apply the patch or don't be surprised if your
> preferences are not highly weighted in the resulting code.
Will lvm upstream take something that makes lvm align
In <20090430141527.gc28...@khazad-dum.debian.net>, Henrique de Moraes Holschuh
wrote:
>On Wed, 29 Apr 2009, Boyd Stephen Smith Jr. wrote:
>> In <20090429192819.gb1...@khazad-dum.debian.net>, Henrique de Moraes
>> Holschuh wrote:
>> >On Wed, 29 Apr 2009, martin f krafft wrote:
>> >> One should thus fix LVM to be a bit more careful...
also sprach Henrique de Moraes Holschuh [2009.04.30.1615
+0200]:
> 1.0 superblocks are widely used. Please don't do that. Either
> implement support for both, or use mdadm (which knows both).
>
> This kind of stuff really should not be done halfway; it can
> surprise someone into a data loss scenario.
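For anyone checking their own setup while following this: the metadata
version of an existing component can be read with mdadm's examine mode
(the device name below is just an example). 0.90 and 1.0 superblocks sit
at the end of the device while 1.1 and 1.2 sit near the start, which is
exactly why a tool that only knows one format can miss the others.

    # Print the md superblock, if any; the "Version" field shows
    # whether the component uses 0.90, 1.0, 1.1 or 1.2 metadata.
    mdadm --examine /dev/sda1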
On Wed, 29 Apr 2009, Boyd Stephen Smith Jr. wrote:
> In <20090429192819.gb1...@khazad-dum.debian.net>, Henrique de Moraes
> Holschuh wrote:
> >On Wed, 29 Apr 2009, martin f krafft wrote:
> >> also sprach Henrique de Moraes Holschuh [2009.04.29.1522 +0200]:
> >> > As always, you MUST forbid lvm
In <20090429192819.gb1...@khazad-dum.debian.net>, Henrique de Moraes
Holschuh wrote:
>On Wed, 29 Apr 2009, martin f krafft wrote:
>> also sprach Henrique de Moraes Holschuh [2009.04.29.1522 +0200]:
>> > As always, you MUST forbid lvm from ever touching md component
>> > devices even if md is offline, and that includes whatever crap is
>> > inside initrds...
On Wed, 29 Apr 2009, martin f krafft wrote:
> also sprach Henrique de Moraes Holschuh [2009.04.29.1522
> +0200]:
> > As always, you MUST forbid lvm from ever touching md component
> > devices even if md is offline, and that includes whatever crap is
> > inside initrds...
>
> One should thus fix LVM to be a bit more careful...
also sprach martin f krafft [2009.04.29.1847 +0200]:
> Absolutely. I've put Neil Brown, upstream mdadm on Bcc so he can
> pitch in if this is something he'd implement or accept patches for.
On second thought, there *is* the sysfs interface, but I don't think
it exposes md-specific information unl
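For what it's worth, the one thing sysfs clearly exposes here is the
holders relationship, and only once the array is actually running; a
script could use it to catch active component devices, but it says
nothing about an offline array (paths are illustrative):

    # If sda1 is a component of a running md array, its holders
    # directory contains a symlink to that array (e.g. md0).
    ls /sys/block/sda/sda1/holders/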
also sprach Boyd Stephen Smith Jr. [2009.04.29.1808
+0200]:
> I'm down with LVM running something like:
> mdadm --has-superblock /dev/block/device
> for devices that have a PV header and refusing to automatically treat them
> as PVs if it returns success, as long as it doesn't affect md-on-LVM.
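As far as I know --has-superblock is only a proposal at this point; a
rough equivalent with the options mdadm already has would key off the
exit status of --examine, something like this sketch (the device name
is a placeholder):

    # mdadm --examine should exit non-zero when no md superblock
    # is found, so it can stand in for the proposed check.
    if mdadm --examine /dev/sda1 >/dev/null 2>&1; then
        echo "/dev/sda1 carries an md superblock; do not treat it as a PV"
    fi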
In <20090429141142.ga19...@piper.oerlikon.madduck.net>, martin f krafft
wrote:
>also sprach Boyd Stephen Smith Jr. [2009.04.29.1557 +0200]:
>> >One should thus fix LVM to be a bit more careful...
>>
>> LVM allows you to strictly limit what devices it scans for PV headers.
>
>That's not enough; LVM knows that md exists, and LVM-on-md is about
>99.8% of the sane use-cases...
also sprach Boyd Stephen Smith Jr. [2009.04.29.1557
+0200]:
> >One should thus fix LVM to be a bit more careful...
>
> LVM allows you to strictly limit what devices it scans for PV headers.
That's not enough; LVM knows that md exists, and LVM-on-md is about
99.8% of the sane use-cases, so L
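For reference, the kind of restriction being talked about lives in the
devices section of /etc/lvm/lvm.conf; a minimal sketch for a box where
every PV is supposed to sit on md (the patterns are examples, adjust to
the local layout):

    # /etc/lvm/lvm.conf (excerpt)
    devices {
        # Accept only md devices as PV candidates, reject everything else.
        filter = [ "a|^/dev/md.*|", "r|.*|" ]
        # Skip devices that carry an md superblock.
        md_component_detection = 1
    }

And since the complaint about initrds applies here too: the copy of
lvm.conf inside the initramfs has to be refreshed as well
(update-initramfs -u on Debian), or the early-boot scan will happily
ignore the filter.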
In <20090429134916.gb17...@piper.oerlikon.madduck.net>, martin f krafft wrote:
>also sprach Henrique de Moraes Holschuh [2009.04.29.1522 +0200]:
>> As always, you MUST forbid lvm from ever touching md component
>> devices even if md is offline, and that includes whatever crap is
>> inside initrds...
also sprach Henrique de Moraes Holschuh [2009.04.29.1522
+0200]:
> As always, you MUST forbid lvm from ever touching md component
> devices even if md is offline, and that includes whatever crap is
> inside initrds...
One should thus fix LVM to be a bit more careful...
On Tue, 21 Apr 2009, Alex Samad wrote:
> > Learned my lesson though - no real reason to have root on lvm - it's now
> > on 3-disk RAID 1.
>
> always thought this, KISS
Exactly. I have servers with 4, sometimes 6-disk RAID1 root partitions,
because of KISS: all disks in the raid set should be
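Assuming the point being made is that every member of the root RAID1
should be able to boot the machine on its own, one common way to arrange
that with GRUB is to install the boot loader on each member disk (disk
names below are examples):

    # Put GRUB on every disk in the RAID1 set so any surviving
    # member can still boot the box on its own.
    for disk in /dev/sda /dev/sdb /dev/sdc; do
        grub-install "$disk"
    done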
Alex Samad wrote:
On Mon, Apr 20, 2009 at 08:03:38PM -0400, Miles Fidelman wrote:
I just got badly bit by this. I had root on lvm on md (RAID 1). After
one of the component drives died, lvm came back up on top of the other
component drive - during boot from initrd - making it impossible
On Mon, Apr 20, 2009 at 08:03:38PM -0400, Miles Fidelman wrote:
> Alex Samad wrote:
>> On Mon, Apr 20, 2009 at 08:26:21PM +0100, Seri wrote:
>>
>>> Hoping somebody might be able to provide me with some pointers that
>>> may just help me recover a lot of data, a home system with no backups
>>> bu
Alex Samad wrote:
On Mon, Apr 20, 2009 at 08:26:21PM +0100, Seri wrote:
Hoping somebody might be able to provide me with some pointers that
may just help me recover a lot of data, a home system with no backups
but a lot of photos, yes I know the admin rule, backup backup backup,
but I ran out
On Mon, Apr 20, 2009 at 08:26:21PM +0100, Seri wrote:
> Hoping somebody might be able to provide me with some pointers that
> may just help me recover a lot of data, a home system with no backups
> but a lot of photos, yes I know the admin rule, backup backup backup,
> but I ran out of backup space
Hoping somebody might be able to provide me with some pointers that
may just help me recover a lot of data, a home system with no backups
but a lot of photos, yes I know the admin rule, backup backup backup,
but I ran out of backup space (not a good excuse).
I saw a few months back that somebody d
On Fri, 16 Jan 2009 04:44:11 +1100, "Alex Samad" said:
> On Wed, Jan 14, 2009 at 07:45:24PM -0800, whollyg...@letterboxes.org
> wrote:
> >
> > I wonder if that would have helped with the larger drives. Too late:)
> > The smaller drives shouldn't have been bad. All I did to them was fail
>
>
On Wed, Jan 14, 2009 at 07:45:24PM -0800, whollyg...@letterboxes.org wrote:
> On Tue, 13 Jan 2009 15:07:37 +1100, "Alex Samad" said:
> > On Mon, Jan 12, 2009 at 07:46:08PM -0800, whollyg...@letterboxes.org
> > wrote:
> > >
[snip]
>
> I wonder if that would have helped with the larger drives.
On Tue, 13 Jan 2009 15:07:37 +1100, "Alex Samad" said:
> On Mon, Jan 12, 2009 at 07:46:08PM -0800, whollyg...@letterboxes.org
> wrote:
> >
> > On Fri, 09 Jan 2009 10:45:56, "John Robinson" said:
> > > On 09/01/2009 02:41, whollyg...@letterboxes.org wrote:
>
> [snip]
>
> >
> > But, t
On Mon, Jan 12, 2009 at 07:46:08PM -0800, whollyg...@letterboxes.org wrote:
>
> On Fri, 09 Jan 2009 10:45:56, "John Robinson" said:
> > On 09/01/2009 02:41, whollyg...@letterboxes.org wrote:
[snip]
>
> But, this has all become moot anyway. When I put the original, smaller
> drives bac
On Fri, 09 Jan 2009 10:45:56, "John Robinson" said:
> On 09/01/2009 02:41, whollyg...@letterboxes.org wrote:
> > But anyway, I don't think that is going to matter. The issue I am
> > trying to solve is how to de-activate the bitmap. It was suggested
> > on the linux-raid list tha
On 09/01/2009 02:41, whollyg...@letterboxes.org wrote:
But anyway, I don't think that is going to matter. The issue I am
trying to solve is how to de-activate the bitmap. It was suggested on
the linux-raid list that my problem may have been caused by running
the grow op on an active bitmap an
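If it helps anyone hitting this thread in the archive: an internal
write-intent bitmap is removed (and later re-added) through mdadm's
grow mode, something along these lines, with the device name as a
placeholder:

    # Drop the internal bitmap; it can be re-created afterwards
    # with --bitmap=internal once the array is healthy again.
    mdadm --grow /dev/md0 --bitmap=none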
On Thu, 8 Jan 2009 21:12:18 +1100, "Alex Samad" said:
> On Wed, Jan 07, 2009 at 08:19:05PM -0800, whollyg...@letterboxes.org
> wrote:
> >
> > On Tue, 6 Jan 2009 09:17:46 +1100, "Neil Brown" said:
>
> [snip]
>
> > How should I have done the grow operation if not as above? The only
> > thing I
On Wed, Jan 07, 2009 at 08:19:05PM -0800, whollyg...@letterboxes.org wrote:
>
> On Tue, 6 Jan 2009 09:17:46 +1100, "Neil Brown" said:
[snip]
> How should I have done the grow operation if not as above? The only
> thing I see in man mdadm is the "-S" switch which seems to disassemble
> the array
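For completeness: -S/--stop really does just stop (disassemble) a
running array; the resize itself is also a grow-mode operation. A
hedged sketch of the ordering being discussed, with placeholder names,
once all members have been replaced with bigger drives and the bitmap
dropped as above:

    # Let the array use the full size of its (now larger) components,
    # then watch the resync before re-enabling the bitmap.
    mdadm --grow /dev/md0 --size=max
    cat /proc/mdstat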
On Tue, 6 Jan 2009 09:17:46 +1100, "Neil Brown" said:
> On Monday January 5, jpis...@lucidpixels.com wrote:
> > cc linux-raid
> >
> > On Mon, 5 Jan 2009, whollyg...@letterboxes.org wrote:
> >
> > >
[snip]
> > > The RAID reassembled fine at each boot as the drives
> > > were replaced one by one
On Tue, 6 Jan 2009 09:17:46 +1100, "Neil Brown" said:
> On Monday January 5, jpis...@lucidpixels.com wrote:
> > cc linux-raid
> >
> > On Mon, 5 Jan 2009, whollyg...@letterboxes.org wrote:
> >
> > > I think growing my RAID array after replacing all the
> > > drives with bigger ones has somehow hosed the array.
On Monday January 5, jpis...@lucidpixels.com wrote:
> cc linux-raid
>
> On Mon, 5 Jan 2009, whollyg...@letterboxes.org wrote:
>
> > I think growing my RAID array after replacing all the
> > drives with bigger ones has somehow hosed the array.
> >
> > The system is Etch with a stock 2.6.18 kernel
cc linux-raid
On Mon, 5 Jan 2009, whollyg...@letterboxes.org wrote:
I think growing my RAID array after replacing all the
drives with bigger ones has somehow hosed the array.
The system is Etch with a stock 2.6.18 kernel and
mdadm v. 2.5.6, running on an Athlon 1700 box.
The array is 6 disk (5 active, one spare) RAID 5
I think growing my RAID array after replacing all the
drives with bigger ones has somehow hosed the array.
The system is Etch with a stock 2.6.18 kernel and
mdadm v. 2.5.6, running on an Athlon 1700 box.
The array is 6 disk (5 active, one spare) RAID 5
that has been humming along quite nicely f