On Monday 11 April 2005 13:45, Nick Piggin wrote:
>
> No luck yet (on SMP i386). How many disks are you using in each
> raid1 array? You are using one array for swap, and one mounted as
> ext3 for the working area of the `stress` program, right?
>
   Right. I'm using two Seagate ATA133 disks (the IDE controller is an 
AMD-8111), each with 4 partitions, so I get 4 md RAID1 devices. The first 
one, md0, is for swap. The rest are

~$ df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/md1              4.6G  1.9G  2.6G  42% /
tmpfs                1005M     0 1005M   0% /dev/shm
/dev/md3               32G  107M   30G   1% /home
/dev/md2               31G  149M   29G   1% /var

  In these tests, /home on md3 is the working area for stress.
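
  For anyone wanting to reproduce this: an invocation along these lines, run 
from /home, produces the kind of combined VM and disk load involved here (the 
worker counts and sizes below are just an example, not necessarily the exact 
parameters of every run):

~$ cd /home && stress --cpu 2 --io 2 --vm 2 --vm-bytes 256M \
       --hdd 2 --hdd-bytes 1G --timeout 600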

  The I/O scheduler in use is the anticipatory scheduler.
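
  For reference, the active scheduler can be checked per disk through sysfs, 
assuming the two IDE disks show up as hda and hdb; the bracketed entry in the 
output is the one in use:

~$ cat /sys/block/hda/queue/scheduler
noop [anticipatory] deadline cfq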

> Neil, have you had a look at the traces? Do they mean much to you?
>
> Claudio - I have attached another patch you could try. It has a more
> complete set of mempool and related memory allocation fixes, as well
> as some other recent patches I had which reduces atomic memory usage
> by the block layer. Could you try if you get time? Thanks.

  OK, I'll try them in a few minutes and report back.
 
  I'm curious as to whether increasing the vm.min_free_kbytes sysctl value 
would help in this case. But I guess it wouldn't, since there is already some 
free memory and the allocation failures are order 0, right?
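
  If it turns out to be worth trying anyway, I suppose the knob could simply 
be bumped at runtime (16384 below is just an arbitrary example value; the 
default depends on the amount of RAM):

~$ cat /proc/sys/vm/min_free_kbytes
~# echo 16384 > /proc/sys/vm/min_free_kbytes

  or, equivalently, as root:

~# sysctl -w vm.min_free_kbytes=16384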

 Thanks

Claudio

