Robert Milkowski writes:
> Hello Roch,
>
> Saturday, June 28, 2008, 11:25:17 AM, you wrote:
>
>
> RB> I suspect, a single dd is cpu bound.
>
> I don't think so.
>
We're nearly CPU bound, as your data shows. More below.
> See below one with a stripe of 48x disks again. Single dd with 1024k
> block size and 64GB to write.
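(A quick way to test the "cpu bound" theory is to watch the dd's
microstate accounting while the test runs; only a sketch:)

  # per-thread microstates for the running dd, sampled every second;
  # USR+SYS close to 100% on its single thread would mean the dd really
  # is CPU bound rather than waiting on the pool
  # (assumes exactly one dd is running; otherwise pick the PID by hand)
  prstat -mL -p "$(pgrep -x dd | head -1)" 1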
Hello Robert,
Tuesday, July 1, 2008, 12:01:03 AM, you wrote:
RM> Nevertheless the main issue is the jumpy writing...
I was just wondering how much throughput I can get running multiple
dd's - one per disk drive - and what kind of aggregated throughput I
would get.
So for each of the 48 disks I did:
d
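(The command itself is cut off in the preview above; a one-dd-per-disk
run would look roughly like the sketch below. The device names and the
block count are made up, not Robert's actual ones:)

  # hypothetical: one sequential writer per physical disk, all in parallel
  for disk in c1t0d0 c1t1d0 c1t2d0; do     # ...one entry per disk, 48 in total
      dd if=/dev/zero of=/dev/rdsk/${disk}s0 bs=1024k count=65536 &
  done
  wait   # then sum the per-disk rates and compare with the single-dd figure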
Hello Roch,
Saturday, June 28, 2008, 11:25:17 AM, you wrote:
RB> I suspect, a single dd is cpu bound.
I don't think so.
See below one with a stripe of 48x disks again. Single dd with 1024k
block size and 64GB to write.
bash-3.2# zpool iostat 1
               capacity     operations    bandwidth
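(For reference, the test described above amounts to something like the
following; the pool name and file path are placeholders:)

  # 64GB sequential write through the filesystem in 1MB records...
  dd if=/dev/zero of=/tank/bigfile bs=1024k count=65536 &

  # ...while sampling pool statistics once a second; the "jumpy" writing
  # shows up as several near-idle samples followed by one large burst
  zpool iostat tank 1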
On Jun 28, 08, at 05:14, Robert Milkowski wrote:
> Hello Mark,
>
> Tuesday, April 15, 2008, 8:32:32 PM, you wrote:
>
> MM> The new write throttle code put back into build 87 attempts to
> MM> smooth out the process. We now measure the amount of time it takes
> MM> to sync each transaction group, and the amount of data in that group.
Hello Mark,
Tuesday, April 15, 2008, 8:32:32 PM, you wrote:
MM> The new write throttle code put back into build 87 attempts to
MM> smooth out the process. We now measure the amount of time it takes
MM> to sync each transaction group, and the amount of data in that group.
MM> We dynamically resize
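(On bits that carry this new throttle code, the per-txg write limit it
computes can apparently be overridden for experiments through the
zfs_write_limit_override kernel variable. A hedged sketch of poking it
with mdb; the 256MB value is arbitrary, and this assumes the variable
exists on your build - not something to leave set in production:)

  # read the current override (0 means "use the dynamically computed limit")
  echo zfs_write_limit_override/E | mdb -k

  # force the per-txg write limit to 256MB for an experiment; write 0 to revert
  echo zfs_write_limit_override/Z0t268435456 | mdb -kw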
Bob Friesenhahn writes:
> On Tue, 15 Apr 2008, Mark Maybee wrote:
> > going to take 12sec to get this data onto the disk. This "impedance
> > mis-match" is going to manifest as pauses: the application fills
> > the pipe, then waits for the pipe to empty, then starts writing again.
> > Note that this won't be smooth, since we need
Hello Mark,
Tuesday, April 15, 2008, 8:32:32 PM, you wrote:
MM> ZFS has always done a certain amount of "write throttling". In the past
MM> (or the present, for those of you running S10 or pre build 87 bits) this
MM> throttling was controlled by a timer and the size of the ARC: we would
MM> "cut
On Tue, 15 Apr 2008, Mark Maybee wrote:
> going to take 12sec to get this data onto the disk. This "impedance
> mis-match" is going to manifest as pauses: the application fills
> the pipe, then waits for the pipe to empty, then starts writing again.
> Note that this won't be smooth, since we need
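(Illustrative numbers only, to make the "impedance mis-match" concrete;
none of these figures come from Bob's system:)

  app fill rate      : 400 MB/s  (writes land in memory/ARC)
  pool sync rate     : 100 MB/s  (what the disks actually absorb)
  dirty data in txg  : 1200 MB
  time to fill       : 1200 / 400 =  3 s
  time to sync       : 1200 / 100 = 12 s   <- the "12sec" quoted above

  So the application writes flat out for ~3 s, then stalls while the pipe
  drains - the burst/pause pattern described in this thread.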
ZFS has always done a certain amount of "write throttling". In the past
(or the present, for those of you running S10 or pre build 87 bits) this
throttling was controlled by a timer and the size of the ARC: we would
"cut" a transaction group every 5 seconds based off of our timer, and
we would also
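(The periodic "cut" is visible from outside with DTrace; a rough sketch,
assuming the fbt provider and the spa_sync symbol are available on your
build:)

  # time each spa_sync() call; on pre-b87 bits entries should show up
  # roughly every 5 seconds, with the duration growing with the amount
  # of dirty data accumulated in that transaction group
  dtrace -n '
  fbt::spa_sync:entry  { self->ts = timestamp; }
  fbt::spa_sync:return /self->ts/ {
      printf("%Y  sync took %d ms", walltimestamp,
          (timestamp - self->ts) / 1000000);
      self->ts = 0;
  }'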
Hello eric,
Thursday, March 27, 2008, 9:36:42 PM, you wrote:
ek> On Mar 27, 2008, at 9:24 AM, Bob Friesenhahn wrote:
>> On Thu, 27 Mar 2008, Neelakanth Nadgir wrote:
>>>
>>> This causes the sync to happen much faster, but as you say,
>>> suboptimal.
>>> Haven't had the time to go through the bug report, but probably
>>> CR 6429205 each zpool needs to monitor its throughput
>>> and throttle heavy writers
>>> will help.
You may want to try disabling the disk write cache on the single disk.
Also, for the RAID, disable 'host cache flush' if such an option exists; that
solved the problem for me.
Let me know.
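(For what it's worth, on Solaris the per-disk write cache can usually be
toggled from format's expert mode; menu names vary a little by driver, so
treat this as a sketch:)

  # format -e exposes a "cache" submenu on most SCSI/SATA disks:
  #   format -e            (then select the disk)
  #   format> cache
  #   cache> write_cache
  #   write_cache> display      # show the current state
  #   write_cache> disable      # turn the on-disk write cache off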
Bob Friesenhahn <[EMAIL PROTECTED]> wrote: On Thu, 27 Mar 2008, Neelakanth
Nadgir wrote:
>
> This causes the sync to happen much faster, but as you say, suboptimal.
On Mar 27, 2008, at 9:24 AM, Bob Friesenhahn wrote:
> On Thu, 27 Mar 2008, Neelakanth Nadgir wrote:
>>
>> This causes the sync to happen much faster, but as you say,
>> suboptimal.
>> Haven't had the time to go through the bug report, but probably
>> CR 6429205 each zpool needs to monitor its throughput
>> and throttle heavy writers
>> will help.
On Thu, 27 Mar 2008, Neelakanth Nadgir wrote:
>
> This causes the sync to happen much faster, but as you say, suboptimal.
> Haven't had the time to go through the bug report, but probably
> CR 6429205 each zpool needs to monitor its throughput
> and throttle heavy writers
> will help.
I hope that
Bob Friesenhahn wrote:
> On Wed, 26 Mar 2008, Neelakanth Nadgir wrote:
>> When you experience the pause at the application level,
>> do you see an increase in writes to disk? This might be the
>> regular syncing of the transaction group to disk.
>
> If I use 'zpool iostat' with a one second interval
Selim Daoud wrote:
> the question is: does the "IO pausing" behaviour you noticed penalize
> your application?
> what are the consequences at the application level?
>
> for instance we have seen applications doing some kind of data capture
> from an external device (video for example) requiring a constant
On Wed, 26 Mar 2008, Neelakanth Nadgir wrote:
> When you experience the pause at the application level,
> do you see an increase in writes to disk? This might be the
> regular syncing of the transaction group to disk.
If I use 'zpool iostat' with a one second interval what I see is two
or three samples
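(One way to line those samples up with the application pauses is to
timestamp each line of output; the pool name below is a placeholder:)

  # prefix every one-second sample with wall-clock time, so the burst of
  # writes can be matched against the moments the application stalls
  zpool iostat tank 1 | while read line; do
      printf '%s %s\n' "$(date +%H:%M:%S)" "$line"
  done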
the question is: does the "IO pausing" behaviour you noticed penalize
your application?
what are the consequences at the application level?
for instance we have seen applications doing some kind of data capture
from an external device (video for example) requiring a constant
throughput to disk (data f
Bob Friesenhahn wrote:
> My application processes thousands of files sequentially, reading
> input files, and outputting new files. I am using Solaris 10U4.
> While running the application in a verbose mode, I see that it runs
> very fast but pauses about every 7 seconds for a second or two.
My application processes thousands of files sequentially, reading
input files, and outputting new files. I am using Solaris 10U4.
While running the application in a verbose mode, I see that it runs
very fast but pauses about every 7 seconds for a second or two. This
is while reading 50MB/second