On Wed, 2007-08-29 at 01:15 -0700, Martin Knoblauch wrote:
> > > Another thing I saw during my tests is that when writing to NFS, the
> > > "dirty" or "nr_dirty" numbers are always 0. Is this a conceptual thing,
> > > or a bug?
> >
> > What are the nr_unstable numbers?
NFS has the concept of unstable …
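The counters being discussed can be read from /proc/meminfo; on 2.6-era kernels, NFS pages that have been written to the server but not yet committed to stable storage appear under NFS_Unstable rather than Dirty, which is why nr_dirty can sit at 0 during an NFS write. A minimal sketch of checking them (field names as they appear in /proc/meminfo):

```python
# Sketch: pull the writeback-related counters from /proc/meminfo.
# During heavy NFS writes, pages tend to accumulate under NFS_Unstable
# (sent to the server, not yet committed) instead of Dirty.
FIELDS = ("Dirty", "Writeback", "NFS_Unstable")

def parse_meminfo(text):
    """Return a dict of the interesting counters, values in kB."""
    out = {}
    for line in text.splitlines():
        key, _, rest = line.partition(":")
        if key in FIELDS:
            out[key] = int(rest.split()[0])  # "   1234 kB" -> 1234
    return out

if __name__ == "__main__":
    try:
        with open("/proc/meminfo") as f:
            print(parse_meminfo(f.read()))
    except FileNotFoundError:
        pass  # not on Linux
```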
--- Jens Axboe <[EMAIL PROTECTED]> wrote:
>
> Try limiting the queue depth on the cciss device, some of those are
> notoriously bad at starving commands. Something like the below hack, see
> if it makes a difference (and please verify in dmesg that it prints the
> message about limiting depth …
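Jens' hack itself is not reproduced in this excerpt. As a hedged approximation, the block-layer request queue for a device can be capped at runtime through sysfs; note that nr_requests limits outstanding block-layer requests, not the controller's internal command queue, so this is only a rough stand-in for the driver-level patch, and the device name is an assumption:

```shell
#!/bin/sh
# Hedged sketch: cap the request queue for a block device via sysfs.
# This is an approximation of a driver-level queue-depth clamp; the
# device name cciss!c0d0 is an assumption. The base-dir argument exists
# only so the function can be exercised outside a real /sys.
set_queue_depth() {
    dev="$1"; depth="$2"; base="${3:-/sys/block}"
    echo "$depth" > "$base/$dev/queue/nr_requests"
}

# usage (as root): set_queue_depth 'cciss!c0d0' 4
```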
--- Robert Hancock <[EMAIL PROTECTED]> wrote:
>
> I saw a bulletin from HP recently that suggested disabling the
> write-back cache on some Smart Array controllers as a workaround because
> it reduced performance in applications that did large bulk writes.
> Presumably they are planning on …
--- Chuck Ebbert <[EMAIL PROTECTED]> wrote:
> On 08/28/2007 11:53 AM, Martin Knoblauch wrote:
> >
> > The basic setup is a dual x86_64 box with 8 GB of memory. The DL380
> > has a HW RAID5, made from 4x72GB disks and about 100 MB write cache.
> > The performance of the block device with O_DIRECT is about 90 MB/sec.
> >
> > The problematic behaviour co…
--- Fengguang Wu <[EMAIL PROTECTED]> wrote:
> On Wed, Aug 29, 2007 at 01:15:45AM -0700, Martin Knoblauch wrote:
> >
> > --- Fengguang Wu <[EMAIL PROTECTED]> wrote:
> >
> > > You are apparently running into the sluggish kupdate-style writeback
> > > problem with large files: huge amount of dirty pages are getting
> > > accumulated and flushed to the disk …
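The kupdate-style writeback behaviour referred to above is governed by a handful of vm sysctls on 2.6 kernels; lowering them makes the kernel flush dirty pages sooner instead of letting a large backlog accumulate. A hedged sketch of the knobs (the values are illustrative, not a recommendation from the thread):

```shell
# Sketch of the periodic-writeback tunables (2.6-era names).
echo 500 > /proc/sys/vm/dirty_expire_centisecs     # consider data "old" after 5s
echo 100 > /proc/sys/vm/dirty_writeback_centisecs  # wake the flusher every 1s
echo 5   > /proc/sys/vm/dirty_background_ratio     # start background flush at 5% of RAM
echo 10  > /proc/sys/vm/dirty_ratio                # throttle writers at 10% of RAM
```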
Keywords: I/O, bdi-v9, cfs

Hi,

a while ago I asked a few questions on the Linux I/O behaviour,
because I was (still am) fighting some "misbehaviour" related to heavy
I/O.

The basic setup is a dual x86_64 box with 8 GB of memory. The DL380
has a HW RAID5, made from 4x72GB disks and about 100 MB write cache.
The performance of the block device with O_DIRECT is about 90 MB/sec.