dent)k
> 0inputs+0outputs (6major+526minor)pagefaults 0swaps
> 27.18user 5.52system 0:52.54elapsed 62%CPU (0avgtext+0avgdata 0maxresident)k
> 0inputs+0outputs (6major+527minor)pagefaults 0swaps
> 27.17user 5.50system 0:51.38elapsed 63%CPU (0avgtext+0avgdata 0maxresident)k
> 0inputs+0outputs (6major+525minor)pagefaults 0swaps
>
> Justin.
--
Raz
rote:
> On Sun, Oct 07, 2007 at 11:48:14AM -0400, Justin Piszcz wrote:
>
> > man mount :)
>
> Ah of course.
>
> But those will be more restrictive than what you can specify when you
> make the file-system (because mkfs.xfs can align the AGs to suit).
>
--
Raz
e...
>
> http://linux-raid.osdl.org/index.php/Mdstat
>
> Comments welcome...
>
> David
--
Raz
On 6/22/07, Jon Nelson <[EMAIL PROTECTED]> wrote:
On Thu, 21 Jun 2007, Raz wrote:
> What is your raid configuration ?
> Please note that the stripe_cache_size is acting as a bottleneck in some
> cases.
Well, it's 3x SATA drives in raid5. 320G drives each, and I'm us
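For reference, the stripe cache mentioned above can be adjusted at run time through sysfs. A minimal sketch, assuming an array named md0 and a value of 4096 entries (both assumptions; run as root):

import pathlib

# Hypothetical example: inspect and raise md's stripe cache for /dev/md0.
# The value counts stripe-cache entries; memory use also scales with the
# number of member disks, so pick it to match your RAM.
cache = pathlib.Path("/sys/block/md0/md/stripe_cache_size")
print("current:", cache.read_text().strip())
cache.write_text("4096\n")
print("now:", cache.read_text().strip())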
--
Raz
--
Raz
On 4/16/07, Raz Ben-Jehuda(caro) <[EMAIL PROTECTED]> wrote:
On 4/13/07, Neil Brown <[EMAIL PROTECTED]> wrote:
> On Saturday March 31, [EMAIL PROTECTED] wrote:
> >
> > 4.
> > I am going to work on this with other configurations, such as raid5's
> > wit
On 4/2/07, Dan Williams <[EMAIL PROTECTED]> wrote:
On 3/30/07, Raz Ben-Jehuda(caro) <[EMAIL PROTECTED]> wrote:
> Please see below.
>
> On 8/28/06, Neil Brown <[EMAIL PROTECTED]> wrote:
> > On Sunday August 13, [EMAIL PROTECTED] wrote:
> > > well ...
On 3/31/07, Bill Davidsen <[EMAIL PROTECTED]> wrote:
Raz Ben-Jehuda(caro) wrote:
> Please see below.
>
> On 8/28/06, Neil Brown <[EMAIL PROTECTED]> wrote:
>> On Sunday August 13, [EMAIL PROTECTED] wrote:
>> > well ... me again
>> >
>> > Foll
67648.00 0 67648
sdb 113.00 0.00 67648.00 0 67648
sdc 113.00 0.00 67648.00 0 67648
sdd 128.00 131072.00 0.00 131072 0
md1 561.00 0.00 135168.00
> 21MB/s is about right for 5-6 disks, when you go to 10 it drops to
> about 5-8MB/s on a PCI system.
Wait, let's say that we have three drives and 1m chunk size. So we read
1M here, 1M there, and 1M somewhere else, and get 2M data and 1M parity
which we check. With five we would read 4M data and 1M parity, but have
4M checked. The end case is that for each stripe we read N*chunk bytes
and verify (N-1)*chunk. In fact the data is (N-1)/N of the stripe, and
the percentage gets higher (not lower) as you add drives. I see no
reason why more drives would be slower; a higher percentage of the bytes
read are data.
That doesn't mean that you can't run out of bus bandwidth, but the number
of drives is not obviously the issue.
--
bill davidsen <[EMAIL PROTECTED]>
CTO TMR Associates, Inc
Doing interesting things with small computers since 1979
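A quick back-of-the-envelope check of that ratio, as a small Python sketch (the 1 MiB chunk size is an assumption carried over from elsewhere in the thread):

# During a RAID-5 check every chunk in a stripe is read, but one chunk's
# worth per stripe is parity, so the useful-data fraction is (N-1)/N and
# grows as drives are added.
chunk_mib = 1  # assumed chunk size
for n in (3, 5, 10):
    read = n * chunk_mib
    data = (n - 1) * chunk_mib
    print(f"{n} drives: read {read} MiB per stripe, {data} MiB is data "
          f"({data / read:.0%})")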
--
Raz
--
Raz
capability.
meaning:
see if dd'ing for each disk in the system separately reduces the total
throughput.
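One way to script that per-disk check, as a rough sketch (the device names are assumptions, it needs root, and unlike dd with iflag=direct it does not bypass the page cache):

import time

DEVICES = ["/dev/sdb", "/dev/sdc", "/dev/sdd"]  # assumed member disks
MIB = 1024 * 1024
TOTAL = 256 * MIB  # read 256 MiB from each disk

for dev in DEVICES:
    start = time.time()
    done = 0
    with open(dev, "rb", buffering=0) as f:
        while done < TOTAL:
            chunk = f.read(MIB)
            if not chunk:
                break
            done += len(chunk)
    elapsed = time.time() - start
    print(f"{dev}: {done / MIB / elapsed:.1f} MiB/s sequential read")

If each disk alone reaches full speed but the sum does not, the bus rather than the drives is the limit.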
On 1/18/07, Sevrin Robstad <[EMAIL PROTECTED]> wrote:
I've tried to increase the cache size - I can't measure any difference.
Raz Ben-Jehuda(caro) wrote:
If they are on the PCI bus, that is about right; you probably should be
getting 10-15MB/s, but it is in the expected range. If you had each drive on its
own PCI-e controller, then you would get much faster speeds.
Bill hello
I have been working on raid5 write throughput.
The whole idea is the access pattern.
One should size buffers with respect to the stripe size.
This way you will be able to eliminate the undesired reads.
By accessing it correctly I have managed to reach a write
throughput wi
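A minimal sketch of that access pattern, assuming a 2+1 raid5 with 1 MiB chunks and an array at /dev/md1 (all assumptions; it writes raw to the device, so it is destructive and for illustration only):

import mmap, os

CHUNK = 1024 * 1024           # assumed 1 MiB chunk size
DATA_DISKS = 2                # assumed 2+1 raid5
STRIPE = CHUNK * DATA_DISKS   # one full stripe of data

# Anonymous mmap gives a page-aligned buffer, which O_DIRECT requires.
buf = mmap.mmap(-1, STRIPE)

# WARNING: destroys the contents of the array device. Full-stripe,
# stripe-aligned writes let raid5 compute parity from the new data alone,
# without first reading old data or old parity.
fd = os.open("/dev/md1", os.O_WRONLY | os.O_DIRECT)
try:
    for i in range(64):       # 64 full stripes
        os.pwrite(fd, buf, i * STRIPE)
finally:
    os.close(fd)

Whether the extra reads disappear can be checked with iostat, as in the output quoted earlier.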
SOL.
--
Raz
reduction
in performance if the deadline is too long (say 100 ms).
raz
On 7/3/06, Neil Brown <[EMAIL PROTECTED]> wrote:
On Sunday July 2, [EMAIL PROTECTED] wrote:
> Neil hello.
>
> I have been looking at the raid5 code trying to understand why write
> performance is so
Neil hello.
you say in raid5.h:
...
* Whenever the delayed queue is empty and the device is not plugged, we
* move any strips from delayed to handle and clear the DELAYED flag
and set PREREAD_ACTIVE.
...
I do not understand how one can move from delayed if delayed is empty.
thank you
--
Raz
Can I increase the write throughput ?
Thank you
--
Raz
-
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at http://vger.kernel.org/majordomo-info.html
sed count of active stripes */
+ bi->bi_hw_segments = 0; /* count of processed stripes */
+ }
+
+ return bi;
+}
--
Raz
need to know which kernel you want me
to use ? I am using poor old 2.6.15.
I thank you
--
Raz
op the max=0 case ?
1.2 What do these lines mean ? Do I need them ?
if (max <= biovec->bv_len && bio_sectors == 0)
        return biovec->bv_len;
else
        return max;
}
thank you
Raz
//
// make the upper level do the work for me
//
return 1;
}
...
}
it increased the performance to 440 MB/s.
Question:
What is the cost of not walking through the raid5 code in the
case of READ ?
if I add an error handlin
ng occurs? In particular, I want to say that the "buffer_head"
> kernel buffer is the specific slab that is used for the caching?
>
> Thanks,
> Jon
>
> On 4/13/06, Raz Ben-Jehuda(caro) <[EMAIL PROTECTED]> wrote:
> > may be lustre
> >
> > On 4/13/06
-- www.harddisk-recovery.com -- +31 70 370 12 90 --
> | Lab address: Delftechpark 26, 2628 XH, Delft, The Netherlands
ould you explain this ? why am I not getting two 1/2 MB ?
Could it be the slab cache ? ( biovec256)
Thank you
--
Raz
> but still why so small ?
>
> Odd.. When I try that I get 4096 repeatedly.
> Which kernel are you using?
> What does
> blockdev --getbsz /dev/md1
> say?
> Do you have a filesystem mounted on /dev/md1? If so, what sort of
> filesystem.
>
> NeilBrown
>
512:512:512:512:512
I suppose they were merged in the elevator,
but still why so small ?
thank you
raz.
On 3/27/06, Neil Brown <[EMAIL PROTECTED]> wrote:
> On Monday March 27, [EMAIL PROTECTED] wrote:
> > I have been playing with raid5 and I noticed that the arriving bios sizes
> > ar
I have been playing with raid5 and I noticed that the arriving bios sizes
are 1 sector.
Why is that, and where is it set ?
thank you
--
Raz
I am sending a 1MB buffer to a raid with a 1MB chunk size.
I know that each offset is aligned to the chunk size.
On 3/27/06, Jens Axboe <[EMAIL PROTECTED]> wrote:
>
> don't top-post!
>
> On Mon, Mar 27 2006, Raz Ben-Jehuda(caro) wrote:
> > I know that.
> > Curre
.
I am trying to deal with this problem by fixing the deadline elevator code
to batch IOs, meaning, when n IOs reach the disk, each m deadlined
IOs are sorted and then dispatched.
I would appreciate any comments on this matter.
thank you
Raz Ben Yehuda
dom access
to the disk ?
2. Does direct IO pass this cache ?
3. How can a dd of 1 MB over a 1MB chunk size achieve this high
throughput on 4 disks
even if it does not get the stripe cache benefits ?
thank you
raz.
On 3/7/06, Neil Brown <[EMAIL PROTECTED]> wrote:
> On Monday March
it reads raw. no filesystem whatsoever.
On 3/6/06, Gordon Henderson <[EMAIL PROTECTED]> wrote:
> On Mon, 6 Mar 2006, Raz Ben-Jehuda(caro) wrote:
>
> > Neil Hello .
> > I have a performance question.
> >
> > I am using raid5 stripe size 1024K over 4 dis
g a single disk, but by looking at iostat I can
see that all
disks are active but with low throughput.
Any idea ?
Thank you.
--
Raz
or the full-stroke random case.
> Local area workloads need to be analyzed more thoroughly, and may
> differ in performance gain by manufacturer.
>
> --eric
>
--
Raz
Thank you Mr Garzik.
Is there a list of all drivers and the features they provide ?
Raz.
On 3/2/06, Jeff Garzik <[EMAIL PROTECTED]> wrote:
> Jens Axboe wrote:
> > (don't top post)
> >
> > On Thu, Mar 02 2006, Raz Ben-Jehuda(caro) wrote:
> >
> >>i can
l support NCQ when AHCI does
>
> Great! The sane choice, for both producer and consumer.
>
> > * slight correction to the above: sil24 will do NCQ, I don't think sil does
>
> Ok, it was more of an umbrella sil label, I haven't looked into specific
> models.
>
> --
> Jens Axboe
>
>
--
Raz
NBD as in network block device ?
why do you use it ?
what type of elevator do you use ?
On 1/10/06, JaniD++ <[EMAIL PROTECTED]> wrote:
>
> - Original Message -----
> From: "Raz Ben-Jehuda(caro)" <[EMAIL PROTECTED]>
> To: "JaniD++" <[EMAIL PROT
, JaniD++ <[EMAIL PROTECTED]> wrote:
>
> - Original Message -----
> From: "Raz Ben-Jehuda(caro)" <[EMAIL PROTECTED]>
> To: "JaniD++" <[EMAIL PROTECTED]>
> Cc: "Linux RAID Mailing List"
> Sent: Wednesday, January 04, 2006 2:49 PM
>
inal Message -----
> From: "Raz Ben-Jehuda(caro)" <[EMAIL PROTECTED]>
> To: "Mark Hahn" <[EMAIL PROTECTED]>
> Cc: "Linux RAID Mailing List"
> Sent: Wednesday, January 04, 2006 9:14 AM
> Subject: Re: raid5 read performance
>
>
> >
; multiple outstanding reads?
>
> > Is it because it does parity checkings ?
>
> non-degraded R5 doesn't do parity checks on reads, afaik.
>
>
--
Raz
I am checking raid5 performance.
I am using asynchronous IOs with a buffer size equal to the stripe size.
In this case I am using a stripe size of 1M with 2+1 disks.
Unlike raid0, raid5 drops the performance by 50%.
Why ?
Is it because it does parity checking ?
thank you
--
Raz
what "wrt" stands for ?
On 12/29/05, Mark Overmeer <[EMAIL PROTECTED]> wrote:
> * Raz Ben-Jehuda(caro) ([EMAIL PROTECTED]) [051229 10:10]:
> > I have tested the overhead of linux raid0.
> > I used two scsi atlas maxtor disks (147 GB) and combined them into a single
&g
/dev/md3
> raid-disk 2
> device /dev/md4
> raid-disk 3
>
>
--
Raz
r personality registered as nr 1
md: raid0 personality registered as nr 2
md: raid1 personality registered as nr 3
md: raid5 personality registered as nr 4
On 11/21/05, Jeff Garzik <[EMAIL PROTECTED]> wrote:
> On Mon, Nov 21, 2005 at 10:15:11AM -0800, Raz Ben-Jehuda(caro) wrote:
> > Well , i
Well, I have tested the disk with a new tester I have written. It seems that
the ata driver causes the high cpu load, not raid.
On 11/21/05, Raz Ben-Jehuda(caro) <[EMAIL PROTECTED]> wrote:
> What sort of a test is it ? what filesystem ?
> I am reading 50 files concurrently.
> Are
What sort of a test is it ? what filesystem ?
I am reading 50 files concurrently.
Are you reading one file, or several files ?
On 11/21/05, Guy <[EMAIL PROTECTED]> wrote:
> > -Original Message-
> > From: [EMAIL PROTECTED] [mailto:linux-raid-
> > [EMAIL PROTECT
TED] [mailto:linux-raid-
> > [EMAIL PROTECTED] On Behalf Of Raz Ben-Jehuda(caro)
> > Sent: Sunday, November 20, 2005 6:50 AM
> > To: Linux RAID Mailing List
> > Subject: comparing FreeBSD to linux
> >
> > I have evaluated which is better in terms of cpu load whe
I have an intel ich6 ide controller, chipset revision 3.
This is the only chipset I found in dmesg.
On 11/20/05, Lajber Zoltan <[EMAIL PROTECTED]> wrote:
> Hi,
>
> On Sun, 20 Nov 2005, Raz Ben-Jehuda(caro) wrote:
>
> > I am using sata maxtor disks over raid0. i am using
ant to know before I dive into the raid code that it is really a bug.
On 11/14/05, Ross Vandegrift <[EMAIL PROTECTED]> wrote:
> On Mon, Nov 14, 2005 at 09:27:25PM +0200, Raz Ben-Jehuda(caro) wrote:
> > I have made the following test with my raid5:
> > 1. created raid5 with 4 s
that it rejects the dirty disk.
Anyone ?
--
Raz
How is the disks' performance ? Is it OK ?
On Thu, 2005-08-04 at 18:13 +0200, [EMAIL PROTECTED] wrote:
> Yes, it is there, I know it.
> But its only for resync or not? :-)
>
> - Original Message -----
> From: "Raz Ben Jehuda" <[EMAIL PROTECTED]>
> To: <[E
take a look at /proc/sys/dev/raid/speed_limit_max . It is in kilobytes if I
recall correctly.
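Both limits live under /proc/sys/dev/raid; a trivial sketch to print them (the per-device KB/s unit is stated from memory, so double-check it against your kernel's md documentation):

for name in ("speed_limit_min", "speed_limit_max"):
    with open(f"/proc/sys/dev/raid/{name}") as f:
        print(name, "=", f.read().strip(), "KB/s")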
On Thu, 2005-08-04 at 17:45 +0200, [EMAIL PROTECTED] wrote:
> Thanks a lot for you, and Raz!
>
> The raw devices readahead I already set with the hdparm, and
> /sys/block/*/queue/read_ahead_kb
r helping!
>
> Janos
>
--
Raz
Long live the penguin
"unlock the drive" ?
On Wed, 2005-08-03 at 10:26 -0400, Mike Dresser wrote:
> On Tue, 2 Aug 2005, Raz Ben-Jehuda(caro) wrote:
>
> > I have encountered a weird feature of 3ware raid.
> > When i try to put inside an existing raid a disk which
> > belonged
I know that some raid management information is saved on the disks.
But I am using 250 GB disks at minimum; dd'ing this amount
would take too long. Does anyone know the exact position of where to
dd ?
On Tue, 2005-08-02 at 13:27 -0700, Jason Leach wrote:
> Raz:
>
> The 3ware (at least my 9500
I have encountered a weird feature of 3ware raid.
When I try to put inside an existing raid a disk which
belonged to a different 3ware raid, it fails.
Any idea anyone ?
--
Raz
Long Live the Penguin