On Wed, 2007-08-29 at 01:15 -0700, Martin Knoblauch wrote:
> > > Another thing I saw during my tests is that when writing to NFS, the
> > > "dirty" or "nr_dirty" numbers are always 0. Is this a conceptual thing,
> > > or a bug?
> >
> > What are the nr_unstable numbers?
NFS has the concept of unstable pages: dirty data that has been sent to the
NFS server is accounted as nr_unstable rather than nr_dirty until the server
commits it to stable storage.
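For reference, both counters can be read straight from /proc/vmstat. A minimal
sketch, assuming a 2.6-era kernel that exports nr_dirty, nr_writeback and
nr_unstable there:

/* Print the dirty/writeback/unstable page counters from /proc/vmstat.
 * On NFS-heavy writes nr_unstable grows while nr_dirty can stay near 0. */
#include <stdio.h>
#include <string.h>

int main(void)
{
        char line[128];
        FILE *f = fopen("/proc/vmstat", "r");

        if (!f) {
                perror("/proc/vmstat");
                return 1;
        }
        while (fgets(line, sizeof(line), f)) {
                if (!strncmp(line, "nr_dirty ", 9) ||
                    !strncmp(line, "nr_writeback ", 13) ||
                    !strncmp(line, "nr_unstable ", 12))
                        fputs(line, stdout);
        }
        fclose(f);
        return 0;
}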
--- Jens Axboe <[EMAIL PROTECTED]> wrote:
>
> Try limiting the queue depth on the cciss device, some of those are
> notoriously bad at starving commands. Something like the below hack, see
> if it makes a difference (and please verify in dmesg that it prints
> the message about limiting depth).
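The hack itself is cut off in this preview; it is a driver-level change and is
not reproduced here. As a rough userspace-only approximation, the block layer's
request queue for the volume can be shrunk through sysfs. A hedged sketch,
assuming the array shows up as cciss/c0d0 (which sysfs spells cciss!c0d0):

/* Shrink the block-layer request queue for a cciss volume.  This caps how
 * many requests can pile up in the elevator; it is NOT the same thing as
 * the controller command depth the patch above limits.  Run as root. */
#include <stdio.h>

int main(void)
{
        /* assumed device name; adjust to the volume being tested */
        const char *path = "/sys/block/cciss!c0d0/queue/nr_requests";
        FILE *f = fopen(path, "w");

        if (!f) {
                perror(path);
                return 1;
        }
        fprintf(f, "64\n");        /* default is typically 128 */
        fclose(f);
        return 0;
}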
--- Robert Hancock <[EMAIL PROTECTED]> wrote:
>
> I saw a bulletin from HP recently that suggested disabling the
> write-back cache on some Smart Array controllers as a workaround,
> because it reduced performance in applications that did large bulk
> writes.
> Presumably they are planning on
--- Chuck Ebbert <[EMAIL PROTECTED]> wrote:
> On 08/28/2007 11:53 AM, Martin Knoblauch wrote:
> >
> > The basic setup is a dual x86_64 box with 8 GB of memory. The DL380
> > has a HW RAID5, made from 4x72GB disks and about 100 MB write cache.
> > The performance of the block device with O_DIRECT is about 90 MB/sec.
On 08/28/2007 11:53 AM, Martin Knoblauch wrote:
>
> The basic setup is a dual x86_64 box with 8 GB of memory. The DL380
> has a HW RAID5, made from 4x72GB disks and about 100 MB write cache.
> The performance of the block device with O_DIRECT is about 90 MB/sec.
>
> The problematic behaviour co
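As a point of reference for the ~90 MB/sec O_DIRECT figure quoted above, a
minimal sketch of how such a number can be measured follows. The target path,
transfer size and 4 KiB alignment are assumptions, and writing to a block
device directly is destructive, so point it at a scratch file or disk.

/* Rough O_DIRECT write-throughput probe: writes 1 GiB in 1 MiB chunks from
 * an aligned buffer and reports MB/s.  Overwrites the target! */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

int main(int argc, char **argv)
{
        const char *path = argc > 1 ? argv[1] : "/tmp/odirect.test";
        const size_t chunk = 1 << 20;            /* 1 MiB per write() */
        const size_t total = 1024 * chunk;       /* 1 GiB overall */
        struct timespec t0, t1;
        void *buf;

        if (posix_memalign(&buf, 4096, chunk)) { /* O_DIRECT needs alignment */
                perror("posix_memalign");
                return 1;
        }
        memset(buf, 0x5a, chunk);

        int fd = open(path, O_WRONLY | O_CREAT | O_DIRECT, 0644);
        if (fd < 0) {
                perror(path);
                return 1;
        }

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (size_t done = 0; done < total; done += chunk) {
                if (write(fd, buf, chunk) != (ssize_t)chunk) {
                        perror("write");
                        return 1;
                }
        }
        fsync(fd);
        clock_gettime(CLOCK_MONOTONIC, &t1);
        close(fd);

        double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        printf("%.1f MB/s\n", total / secs / 1e6);
        return 0;
}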
Jens Axboe wrote:
On Tue, Aug 28 2007, Martin Knoblauch wrote:
Keywords: I/O, bdi-v9, cfs
Hi,
a while ago I asked a few questions about Linux I/O behaviour,
because I was (still am) fighting some "misbehaviour" related to heavy
I/O.
The basic setup is a dual x86_64 box with 8 GB of memory
--- Jens Axboe <[EMAIL PROTECTED]> wrote:
> On Tue, Aug 28 2007, Martin Knoblauch wrote:
> > Keywords: I/O, bdi-v9, cfs
> >
>
> Try limiting the queue depth on the cciss device, some of those are
> notoriously bad at starving commands. Something like the below hack,
> see if it makes a difference (and please verify in dmesg that it prints
> the message about limiting depth).
On Tue, Aug 28 2007, Martin Knoblauch wrote:
> Keywords: I/O, bdi-v9, cfs
>
> Hi,
>
> a while ago I asked a few questions about Linux I/O behaviour,
> because I was (still am) fighting some "misbehaviour" related to heavy
> I/O.
>
> The basic setup is a dual x86_64 box with 8 GB of memory. The DL380
> has a HW RAID5, made from 4x72GB disks and about 100 MB write cache.
--- Fengguang Wu <[EMAIL PROTECTED]> wrote:
> On Wed, Aug 29, 2007 at 01:15:45AM -0700, Martin Knoblauch wrote:
> >
> > --- Fengguang Wu <[EMAIL PROTECTED]> wrote:
> >
> > > You are apparently running into the sluggish kupdate-style writeback
> > > problem with large files: huge amounts of dirty pages are getting
> > > accumulated and flushed to the disk
On Wed, Aug 29, 2007 at 01:15:45AM -0700, Martin Knoblauch wrote:
>
> --- Fengguang Wu <[EMAIL PROTECTED]> wrote:
>
> > You are apparently running into the sluggish kupdate-style writeback
> > problem with large files: huge amounts of dirty pages are getting
> > accumulated and flushed to the disk
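The "kupdate-style writeback" referred to here is the periodic flusher, which
is governed by /proc/sys/vm/dirty_expire_centisecs and
/proc/sys/vm/dirty_writeback_centisecs. Below is a minimal sketch of shortening
both so dirty pages are written out sooner; the numbers are illustrative only,
assuming the usual defaults of 3000 and 500 centiseconds.

/* Shorten the kupdate-style writeback expiry and wakeup interval so dirty
 * pages are flushed sooner instead of piling up.  Values are centiseconds.
 * Run as root; these numbers are examples, not recommendations. */
#include <stdio.h>

static int write_knob(const char *path, const char *val)
{
        FILE *f = fopen(path, "w");

        if (!f) {
                perror(path);
                return -1;
        }
        fprintf(f, "%s\n", val);
        fclose(f);
        return 0;
}

int main(void)
{
        /* consider data dirty for at most 5s instead of the usual 30s */
        write_knob("/proc/sys/vm/dirty_expire_centisecs", "500");
        /* wake the periodic flusher every 1s instead of every 5s */
        write_knob("/proc/sys/vm/dirty_writeback_centisecs", "100");
        return 0;
}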
--- Fengguang Wu <[EMAIL PROTECTED]> wrote:
> On Tue, Aug 28, 2007 at 08:53:07AM -0700, Martin Knoblauch wrote:
> [...]
> > The basic setup is a dual x86_64 box with 8 GB of memory. The DL380
> > has a HW RAID5, made from 4x72GB disks and about 100 MB write cache.
> > The performance of the block device with O_DIRECT is about 90 MB/sec.
On Tue, Aug 28, 2007 at 08:53:07AM -0700, Martin Knoblauch wrote:
[...]
> The basic setup is a dual x86_64 box with 8 GB of memory. The DL380
> has a HW RAID5, made from 4x72GB disks and about 100 MB write cache.
> The performance of the block device with O_DIRECT is about 90 MB/sec.
>
> The pro
Keywords: I/O, bdi-v9, cfs
Hi,
a while ago I asked a few questions about Linux I/O behaviour,
because I was (still am) fighting some "misbehaviour" related to heavy
I/O.
The basic setup is a dual x86_64 box with 8 GB of memory. The DL380
has a HW RAID5, made from 4x72GB disks and about 100 MB write cache.
--- Jesper Juhl <[EMAIL PROTECTED]> wrote:
> On 05/07/07, Jesper Juhl <[EMAIL PROTECTED]> wrote:
> > On 05/07/07, Martin Knoblauch <[EMAIL PROTECTED]> wrote:
> > > Hi,
> > >
> >
> > I'd suspect you can't get both at 100%.
> >
> > I'd guess you are probably using a 100Hz no-preempt kernel. Have
>
On 05/07/07, Jesper Juhl <[EMAIL PROTECTED]> wrote:
On 05/07/07, Martin Knoblauch <[EMAIL PROTECTED]> wrote:
> Hi,
>
> for a customer we are operating a rackful of HP/DL380/G4 boxes that
> have given us some problems with system responsiveness under [I/O
> triggered] system load.
>
> The system
> I am just now playing with dirty_ratio. Does anybody know what the lower
> limit is? "0" seems acceptable, but does it actually imply "write out
> immediately"?
You should "watch -n 1 cat /proc/meminfo" and monitor the Dirty and Writeback
values while lowering the amount the kernel may keep dirty. The so
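A scripted equivalent of that monitoring, as a minimal sketch: poll
/proc/meminfo once per second and print only the Dirty and Writeback lines,
roughly what the suggested watch invocation shows.

/* Poll /proc/meminfo once per second and print the Dirty and Writeback
 * lines, similar to `watch -n 1 cat /proc/meminfo` filtered down. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
        char line[128];

        for (;;) {
                FILE *f = fopen("/proc/meminfo", "r");

                if (!f) {
                        perror("/proc/meminfo");
                        return 1;
                }
                while (fgets(line, sizeof(line), f)) {
                        if (!strncmp(line, "Dirty:", 6) ||
                            !strncmp(line, "Writeback:", 10))
                                fputs(line, stdout);
                }
                fclose(f);
                putchar('\n');
                sleep(1);
        }
        return 0;
}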
> On 5 Jul, 16:50, Martin Knoblauch <[EMAIL PROTECTED]> wrote:
> > Hi,
> >
> > for a customer we are operating a rackful of HP/DL380/G4 boxes
> that
> > have given us some problems with system responsiveness under [I/O
> > triggered] system load.
> [snip]
>
> IIRC, the locking in the CCISS driver
--- Daniel J Blueman <[EMAIL PROTECTED]> wrote:
> On 5 Jul, 16:50, Martin Knoblauch <[EMAIL PROTECTED]> wrote:
> > Hi,
> >
> > for a customer we are operating a rackful of HP/DL380/G4 boxes
> that
> > have given us some problems with system responsiveness under [I/O
> > triggered] system load.
>
On 5 Jul, 16:50, Martin Knoblauch <[EMAIL PROTECTED]> wrote:
Hi,
for a customer we are operating a rackful of HP/DL380/G4 boxes that
have given us some problems with system responsiveness under [I/O
triggered] system load.
[snip]
IIRC, the locking in the CCISS driver was pretty heavy until la
Brice Figureau wrote:
>> CFQ gives less (about 10-15%) throughput, except for the kernel with
>> the cfs cpu scheduler, where CFQ is on par with the other IO schedulers.
>>
>
>Please have a look at kernel bug #7372:
>http://bugzilla.kernel.org/show_bug.cgi?id=7372
>
>It seems I encountered th
Martin Knoblauch <[EMAIL PROTECTED]> writes:
> --- Jesper Juhl <[EMAIL PROTECTED]> wrote:
>
> > On 06/07/07, Robert Hancock <[EMAIL PROTECTED]> wrote:
> > [snip]
> > >
> > > Try playing with reducing /proc/sys/vm/dirty_ratio and see how that
> > > helps. This workload will fill up memory with dirty data very
> > > quickly,
Martin Knoblauch wrote:
>--- Robert Hancock <[EMAIL PROTECTED]> wrote:
>
>>
>> Try playing with reducing /proc/sys/vm/dirty_ratio and see how that
>> helps. This workload will fill up memory with dirty data very quickly,
>> and it seems like system responsiveness often goes down the toilet
>> when this happens and the sy
>>b) any ideas how to optimize the settings of the /proc/sys/vm/
>>parameters? The documentation is a bit thin here.
>>
>>
>I can't offer any advice there, but is raid-5 really the best choice
>for your needs? I would not choose raid-5 for a system that is
>regularly performing lots of large
--- Robert Hancock <[EMAIL PROTECTED]> wrote:
>
> Try playing with reducing /proc/sys/vm/dirty_ratio and see how that
> helps. This workload will fill up memory with dirty data very quickly,
> and it seems like system responsiveness often goes down the toilet
> when this happens and the s
--- Jesper Juhl <[EMAIL PROTECTED]> wrote:
> On 06/07/07, Robert Hancock <[EMAIL PROTECTED]> wrote:
> [snip]
> >
> > Try playing with reducing /proc/sys/vm/dirty_ratio and see how that
> > helps. This workload will fill up memory with dirty data very quickly,
> > and it seems like system respon
On 06/07/07, Robert Hancock <[EMAIL PROTECTED]> wrote:
[snip]
Try playing with reducing /proc/sys/vm/dirty_ratio and see how that
helps. This workload will fill up memory with dirty data very quickly,
and it seems like system responsiveness often goes down the toilet when
this happens and the sy
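For completeness, a minimal sketch of actually lowering those thresholds
through procfs; the chosen values are illustrative assumptions, not tuned
recommendations.

/* Lower the dirty-memory thresholds so less dirty data accumulates in RAM.
 * dirty_background_ratio: background writeback starts at this % of memory.
 * dirty_ratio: writers are throttled once this % of memory is dirty.
 * Run as root. */
#include <stdio.h>

static void set_vm_knob(const char *name, int value)
{
        char path[128];
        FILE *f;

        snprintf(path, sizeof(path), "/proc/sys/vm/%s", name);
        f = fopen(path, "w");
        if (!f) {
                perror(path);
                return;
        }
        fprintf(f, "%d\n", value);
        fclose(f);
}

int main(void)
{
        set_vm_knob("dirty_background_ratio", 1); /* start writeback very early */
        set_vm_knob("dirty_ratio", 5);            /* throttle writers much sooner */
        return 0;
}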
Martin Knoblauch wrote:
Hi,
for a customer we are operating a rackful of HP/DL380/G4 boxes that
have given us some problems with system responsiveness under [I/O
triggered] system load.
The systems in question have the following HW:
2x Intel/EM64T CPUs
8GB memory
CCISS Raid controller with 4x72GB SCSI disks as RAID5
On 05/07/07, Martin Knoblauch <[EMAIL PROTECTED]> wrote:
Hi,
for a customer we are operating a rackful of HP/DL380/G4 boxes that
have given us some problems with system responsiveness under [I/O
triggered] system load.
The systems in question have the following HW:
2x Intel/EM64T CPUs
8GB memory
On 7/5/07, Martin Knoblauch <[EMAIL PROTECTED]> wrote:
Hi,
for a customer we are operating a rackful of HP/DL380/G4 boxes that
have given us some problems with system responsiveness under [I/O
triggered] system load.
The systems in question have the following HW:
2x Intel/EM64T CPUs
8GB memory
Hi,
for a customer we are operating a rackful of HP/DL380/G4 boxes that
have given us some problems with system responsiveness under [I/O
triggered] system load.
The systems in question have the following HW:
2x Intel/EM64T CPUs
8GB memory
CCISS Raid controller with 4x72GB SCSI disks as RAID5