>>> On Sun, 30 Dec 2007 19:00:39 -0500, Brad Langhorst
>>> <[EMAIL PROTECTED]> said:
[ ... VMware virtual disks over RAID ... ]
brad> - 4 disk raid 10
brad> - 64k stripe size
Stripe size or chunk size? Try reducing the chunk size if that
is the chunk size, and applications in the VM do short reads and writes.
Peter Grandi wrote:
In particular if one uses parity-based (not a good idea in
general...) arrays, as small chunk sizes (as well as stripe
sizes) give a better chance of reducing the frequency of RMW.
Thanks for your thoughts - the above was my thinking when I posted.
Regards,
Richard
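To put numbers on the RMW point: on a 4-disk RAID5 with 64k chunks a stripe
carries 3 x 64k = 192k of data, so any write smaller or less aligned than
that forces md to read old data and parity before it can write back, and a
smaller chunk lowers that threshold. Rough sketch only; the RAID5 layout and
device names are illustrative, not Brad's RAID10:
# mdadm --create /dev/md0 --level=5 --raid-devices=4 --chunk=32 /dev/sd[bcde]1
With --chunk=32 a full-stripe write is only 3 x 32k = 96k, so mid-sized
sequential writes stand a better chance of avoiding RMW entirely.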
- Message from [EMAIL PROTECTED] -
Date: Mon, 31 Dec 2007 12:02:14 -0500 (EST)
From: Justin Piszcz <[EMAIL PROTECTED]>
Reply-To: Justin Piszcz <[EMAIL PROTECTED]>
Subject: Re: Change Stripe size?
To: Greg Cormier <[EMAIL PROTECTED]>
Cc: linux-raid@vger.kernel.org
On Mon, 31 Dec 2007, Greg Cormier wrote:
So I've been slowly expanding my knowledge of mdadm/linux raid.
I've got a 1 terabyte array which stores mostly large media files, and
from my reading, increasing the stripe size should really help my
performance.
Is there any way to do this to an existing array, or will I need to
back up the array?
So I've been slowly expanding my knowledge of mdadm/linux raid.
I've got a 1 terabyte array which stores mostly large media files, and
from my reading, increasing the stripe size should really help my
performance.
Is there any way to do this to an existing array, or will I need to
back up the array?
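A note on terms first: what md calls the chunk size is the per-disk unit
(mdadm's --chunk); the stripe is the chunk size times the number of data
disks. Whether it can be changed in place depends on the tool and kernel:
the in-place chunk reshape only arrived in later releases (roughly mdadm 3.1
with kernel 2.6.31), and older setups have to back up, recreate and restore.
A sketch under those assumptions, with /dev/md0 and the backup path as
placeholders:
# mdadm --detail /dev/md0 | grep -i chunk
# mdadm --grow /dev/md0 --chunk=256 --backup-file=/root/md0-grow.bak
The backup file has to live on a device that is not part of the array being
reshaped, and the reshape rewrites every stripe, so expect it to take a long
time.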
Michael Tokarev wrote:
Neil Brown wrote:
On Monday December 31, [EMAIL PROTECTED] wrote:
>> I'm hoping that if I can get raid5 to continue despite the errors, I
>> can bring back up enough of the server to continue, a bit like the
>> remount-ro option in ext2/ext3.
>> If not, oh well...
So
>> Why does mdadm still use 64k for the default chunk size?
> Probably because this is the best balance for average file
> sizes, which are smaller than you seem to be testing with?
Well "average file sizes" relate less to chunk sizes than access
patterns do. Single threaded sequential reads with
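One way to see what the array is actually configured with, and to get a
crude feel for sequential throughput, is below; md0 is a placeholder and the
sysfs attribute assumes a reasonably recent kernel:
# cat /sys/block/md0/md/chunk_size
# dd if=/dev/md0 of=/dev/null bs=1M count=4096 iflag=direct
A single-threaded sequential read like this mostly exercises readahead and
chunk layout; a random or multi-threaded load can behave very differently,
which is the point about access patterns above.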
Justin Piszcz wrote:
Dave's original e-mail:
# mkfs.xfs -f -l lazy-count=1,version=2,size=128m -i attr=2 -d agcount=4
# mount -o logbsize=256k
And if you don't care about filesystem corruption on power loss:
# mount -o logbsize=256k,nobarrier
Those mkfs values (except for log size
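Worth adding to Dave's line: mkfs.xfs can also be told the RAID geometry
explicitly so allocation stays stripe-aligned (recent mkfs.xfs usually picks
this up from md on its own). A sketch assuming a 4-drive RAID5 with 64k
chunks, i.e. 3 data disks; device and mount point are placeholders:
# mkfs.xfs -f -d su=64k,sw=3 -l lazy-count=1,version=2,size=128m -i attr=2 /dev/md0
# mount -o logbsize=256k /dev/md0 /mnt/media
Here su is the stripe unit (the md chunk size) and sw the number of data
disks, so su*sw matches the full data stripe.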
Neil Brown wrote:
> On Monday December 31, [EMAIL PROTECTED] wrote:
>> I'm hoping that if I can get raid5 to continue despite the errors, I
>> can bring back up enough of the server to continue, a bit like the
>> remount-ro option in ext2/ext3.
>>
>> If not, oh well...
>
> Sorry, but it is "oh well".
On Dec 31, 2007 1:05 PM, Neil Brown <[EMAIL PROTECTED]> wrote:
> On Monday December 31, [EMAIL PROTECTED] wrote:
> >
> > I'm hoping that if I can get raid5 to continue despite the errors, I
> > can bring back up enough of the server to continue, a bit like the
> > remount-ro option in ext2/ext3.
>
Ok, since my previous thread didn't seem to attract much attention,
let me try again.
An interrupted RAID5 reshape will cause the md device in question to
contain one corrupt chunk per stripe if resumed in the wrong manner.
A testcase can be found at http://www.nagilum.de/md/ .
The first testcase
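For reference, the usual way to resume an interrupted reshape is to hand
mdadm the backup file the grow was started with, not to recreate anything;
a sketch with placeholder device and path names:
# mdadm --assemble /dev/md0 --backup-file=/root/md0-reshape.bak /dev/sd[bcde]1
If the backup file is gone or stale, stop and ask on the list before writing
anything to the member disks.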
On Monday December 31, [EMAIL PROTECTED] wrote:
>
> I'm hoping that if I can get raid5 to continue despite the errors, I
> can bring back up enough of the server to continue, a bit like the
> remount-ro option in ext2/ext3.
>
> If not, oh well...
Sorry, but it is "oh well".
I could probably mak
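As the replies say, md has no direct analogue of ext3's errors=remount-ro.
The closest manual approximation, sketched here with example names and not
something promised above, is to stop further writes by hand:
# mount -o remount,ro /mnt/data
# mdadm --readonly /dev/md0
md may still refuse the --readonly while a mounted filesystem holds the
device open; even then the read-only remount stops new writes from above.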
Howdy,
Sorry for the direct CCs; I'm not sure if my email to linux-raid will
make it or not.
Long story short, my main server just died with a double RAID failure
today, and I'm on vacation on the other side of the world.
One drive is dead for good, the other one generates an error when I
read at
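Without knowing more, the usual first aid for one dead disk plus one erroring
disk is to image the flaky one with GNU ddrescue onto a spare and then
force-assemble the array degraded from the copy; a sketch only, every device
name below is made up:
# ddrescue -f -n /dev/sdb1 /dev/sdd1 /root/sdb1.map
# mdadm --assemble --force /dev/md0 /dev/sdd1 /dev/sdc1
Keep the original failing drive untouched and mount the result read-only
until the data has been copied off.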