--- Greg 'groggy' Lehey <[EMAIL PROTECTED]> wrote:
> On Sunday, 29 October 2006 at 23:05:32 -0800, R. B. Riddick wrote:
> > I did it that way in my graid5 class:
> > http://home.tiscali.de/cmdr_faako/geom_raid5.tbz
>
I would have taken a look at it if the sources had been directly
web-viewable.
--- Greg 'groggy' Lehey <[EMAIL PROTECTED]> wrote:
> "Sufficiently large data blocks" equates to several megabytes.
> Currently MAXPHYS, the largest transfer request that would get to the
> bio layer, is 131072 bytes. This would imply a stripe size of not
> more than 32 kB for a five disk array, [...]
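To make the arithmetic concrete: with MAXPHYS at 131072 bytes (128 kB) and
a five-disk RAID-5, four disks' worth of each stripe hold data, so a single
request can only cover a full stripe if the stripe size is at most
131072 / 4 = 32768 bytes. A minimal C sketch of that constraint (the
MAXPHYS value is the one cited above; nothing else here is taken from any
GEOM class):

#include <stdio.h>

/* MAXPHYS as cited in the thread; illustrative arithmetic only. */
#define MAXPHYS 131072          /* largest transfer reaching the bio layer */

int main(void)
{
    int ndisks = 5;             /* the five-disk array from the example */
    int ndata  = ndisks - 1;    /* one disk's worth per stripe is parity */

    /* For one request to span a full stripe (so parity can be computed
     * without first reading old data), the per-disk stripe size can be
     * at most MAXPHYS / ndata. */
    int max_stripe = MAXPHYS / ndata;

    printf("max stripe size for full-stripe writes: %d bytes (%d kB)\n",
           max_stripe, max_stripe / 1024);  /* 32768 bytes (32 kB) */
    return 0;
}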
Greg 'groggy' Lehey wrote:
> Single stream tests aren't very good examples for RAID-5, because it
> performs writes in two steps: first it reads the old data, then it
> writes the new data.
If it really does it this way, instead of doing write-only when writing
sufficiently large blocks, that would [...]
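The difference Petri is asking about can be shown in a few lines. Below is
a hedged userspace sketch of the two RAID-5 write paths: the small-write
(read-modify-write) path, which must read the old data and old parity
first, and the full-stripe path, which computes parity from the new data
alone and needs no reads. All names are hypothetical; none of this is code
from vinum, gvinum, or graid5.

#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

static void xor_block(uint8_t *dst, const uint8_t *src, size_t len)
{
    for (size_t i = 0; i < len; i++)
        dst[i] ^= src[i];
}

/* Small write (read-modify-write):
 * new parity = old parity XOR old data XOR new data. */
static void rmw_parity(uint8_t *parity, const uint8_t *old_data,
                       const uint8_t *new_data, size_t len)
{
    xor_block(parity, old_data, len);  /* cancel the old data's share */
    xor_block(parity, new_data, len);  /* add the new data's share */
}

/* Full-stripe write: parity comes from the new data alone, no reads. */
static void full_stripe_parity(uint8_t *parity, const uint8_t *const *data,
                               int ndata, size_t len)
{
    memset(parity, 0, len);
    for (int d = 0; d < ndata; d++)
        xor_block(parity, data[d], len);
}

int main(void)
{
    uint8_t d0[4] = {1, 2, 3, 4}, d1[4] = {5, 6, 7, 8};
    uint8_t new_d0[4] = {9, 9, 9, 9}, parity[4], check[4];
    const uint8_t *old_stripe[] = { d0, d1 };
    const uint8_t *new_stripe[] = { new_d0, d1 };

    full_stripe_parity(parity, old_stripe, 2, 4); /* initial parity */
    rmw_parity(parity, d0, new_d0, 4);            /* overwrite d0 in place */
    full_stripe_parity(check, new_stripe, 2, 4);  /* recompute from scratch */
    assert(memcmp(parity, check, 4) == 0);        /* both paths agree */
    return 0;
}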
On Sunday, 29 October 2006 at 11:20:33 -0600, Steve Peterson wrote:
> Petri -- thanks for the idea.
It would be a good idea to quote it. Following this thread is almost
impossible.
> I ran 2 dds in parallel; they took roughly twice as long in clock
> time, and had about 1/2 the throughput of the single dd. [...]
On Saturday, 28 October 2006 at 22:19:17 +0300, Petri Helenius wrote:
>
> According to my understanding, vinum does not overlap requests to
> multiple disks when running in a raid5 configuration
Yes, it does. I suspect that gvinum does too.
> so you're not going to achieve good numbers with just "single stream"
> tests.
Steve Peterson wrote:
> I guess the fundamental question is this -- if I have a 4 disk
> subsystem that supports an aggregate ~100MB/sec transfer raw to the
> underlying disks, is it reasonable to expect a ~5MB/sec transfer rate
> for a RAID5 hosted on that subsystem -- a 95% overhead?
Absolutely not, [...]
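One way to see why the answer is "absolutely not": even the textbook worst
case for RAID-5 small writes costs four disk I/Os per logical write (read
old data, read old parity, write new data, write new parity), which would
still leave roughly a quarter of the raw bandwidth doing useful work. A
back-of-envelope sketch using the 100MB/sec figure from the question (the
4-I/O model is a standard assumption, not a measurement of gvinum):

#include <stdio.h>

int main(void)
{
    double raw_mbps = 100.0;  /* aggregate raw throughput cited above */

    /* Classic small-write penalty: 4 disk I/Os per logical write, so at
     * worst about a quarter of raw bandwidth is useful write throughput. */
    double worst_case = raw_mbps / 4.0;

    printf("naive worst-case RAID-5 write rate: ~%.0f MB/sec\n", worst_case);
    printf("observed ~5 MB/sec is ~%.0f%% overhead, far beyond that model\n",
           100.0 * (1.0 - 5.0 / raw_mbps));
    return 0;
}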
Steve Peterson wrote:
> Petri -- thanks for the idea.
> I ran 2 dds in parallel; they took roughly twice as long in clock
> time, and had about 1/2 the throughput of the single dd. On my system
> it doesn't look like how the work is offered to the disk subsystem
> matters.
This is the thing I did with [...]
Petri -- thanks for the idea.
I ran 2 dds in parallel; they took roughly twice as long in clock
time, and had about 1/2 the throughput of the single dd. On my
system it doesn't look like how the work is offered to the disk
subsystem matters.
# time dd if=/dev/zero of=blort1 bs=1m count=100
According to my understanding, vinum does not overlap requests to
multiple disks when running in a raid5 configuration, so you're not going
to achieve good numbers with just "single stream" tests.
Pete
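Whether requests are overlapped is easy to describe in code. The sketch
below issues reads to several component disks at once, one thread per
device, which is the access pattern an implementation that does overlap
requests would generate; issuing the same reads from a single loop, one
disk after another, is the non-overlapped pattern Petri suspects. Only
/dev/ad10 appears in this thread; the other device paths, the transfer
size, and the thread-per-disk structure are illustrative assumptions.

#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

struct job {
    const char *path;   /* component device to read */
    ssize_t done;       /* bytes successfully read */
};

static void *reader(void *arg)
{
    struct job *j = arg;
    char buf[65536];
    int fd = open(j->path, O_RDONLY);

    if (fd < 0) {
        perror(j->path);
        return NULL;
    }
    for (int i = 0; i < 256; i++) {          /* 16 MB per disk */
        ssize_t n = read(fd, buf, sizeof(buf));
        if (n <= 0)
            break;
        j->done += n;
    }
    close(fd);
    return NULL;
}

int main(void)
{
    /* /dev/ad10 is from the thread; the rest are placeholders. */
    struct job jobs[] = {
        { "/dev/ad10", 0 }, { "/dev/ad12", 0 },
        { "/dev/ad14", 0 }, { "/dev/ad16", 0 },
    };
    pthread_t tid[4];

    for (int i = 0; i < 4; i++)              /* all reads in flight at once */
        pthread_create(&tid[i], NULL, reader, &jobs[i]);
    for (int i = 0; i < 4; i++)
        pthread_join(tid[i], NULL);
    for (int i = 0; i < 4; i++)
        printf("%s: %ld bytes read\n", jobs[i].path, (long)jobs[i].done);
    return 0;
}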
Steve Peterson wrote:
> Eric -- thanks for looking at my issue. Here's a dd reading from one
> of the disks underlying the array (the others have basically the same
> performance):
> $ time dd if=/dev/ad10 of=/dev/null bs=1m count=1000
> 1000+0 records in
> 1000+0 records out
> 1048576000 bytes transferred in 15.322421 secs (684[...]
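The truncated rate can be recovered from the two numbers dd did print:
1048576000 bytes over 15.322421 seconds is about 68 MB/sec from a single
underlying disk, which makes the ~6MB/second figure for the whole array
elsewhere in the thread look even worse. The arithmetic:

#include <stdio.h>

int main(void)
{
    double bytes = 1048576000.0;   /* from the dd output above */
    double secs  = 15.322421;

    printf("%.0f bytes/sec (~%.1f MB/sec per disk)\n",
           bytes / secs, bytes / secs / 1e6);
    return 0;
}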
I recently set up a media server for home use and decided to try the
gvinum raid5 support to learn about it and see how it performs. It
seems slower than I'd expect -- a little under 6MB/second, with about
50 IOs/drive/second -- and I'm trying to understand why. Any
assistance/pointers would [...]
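A final back-of-envelope from the figures in this report: ~6MB/second
across the array at ~50 IOs/drive/second on the 4-disk subsystem described
in the question above works out to roughly 30 kB per I/O, i.e. small,
stripe-sized transfers rather than large aggregated ones. The drive count
comes from Steve's own question earlier on the page; the inference is mine.

#include <stdio.h>

int main(void)
{
    double mb_per_sec   = 6.0;    /* array throughput from the report */
    double ios_per_disk = 50.0;   /* per-drive I/O rate from the report */
    int    ndrives      = 4;      /* from the "4 disk subsystem" question */

    double total_ios = ios_per_disk * ndrives;          /* ~200 IO/sec */
    double kb_per_io = mb_per_sec * 1000.0 / total_ios; /* ~30 kB each */

    printf("average transfer: ~%.0f kB per I/O\n", kb_per_io);
    return 0;
}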