>               -------Sequential Output-------- ---Sequential Input-- --Random--
>               -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
> Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
>            40 16341 99.1 20746 38.3 20307 52.7 14187 100.0 94033 98.2 9744.2 99.8
>
vs.
>
> Seeker 1...Seeker 2...Seeker 3...start 'em...done...done...done...
>               -------Sequential Output-------- ---Sequential Input-- --Random--
>               -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
> Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
>            40 10952 66.9 11622 22.6  9564 30.1 14230 100.0 63604 100.0 6944.5 99.8
>
The near-100% CPU use during the seek test, combined with the very high number
of seeks per second, indicates your test data size wasn't large enough and the
test data was entirely cached in RAM. Try bonnie -s 1024
(patience required).
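As a rough sanity check (the 2x-RAM rule of thumb and the 512 MB machine below are my own illustrative assumptions, not figures from this post), the size passed to -s should comfortably exceed physical memory so the buffer cache can't hold the whole test file:

```python
# Sketch: pick a bonnie -s value (in MB) large enough to defeat the buffer cache.
# Assumption: a test file of at least twice physical RAM is safely uncacheable.
def bonnie_size_mb(ram_mb):
    """Smallest suggested -s argument for a machine with ram_mb of RAM."""
    return 2 * ram_mb

print(bonnie_size_mb(512))  # 1024, i.e. "bonnie -s 1024" for a 512 MB machine
```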
Also, when using UFS on striped disks of any kind, beware of interactions
between stripe size and cluster size, which can concentrate metadata on
half or fewer of your disks.
Where sequential read/write performance is not critical, you can stripe at
the cluster size to avoid this. Otherwise, using an odd number of spindles for
a stripe (and an even number for a RAID3 or RAID5), or striping at an interval
which is not a power of two (12, 24, 48, 76, etc.), should work.
The problem arises from UFS's use of cylinder groups, which are intended to put
inodes on a single disk near the blocks that they reference. (This is
explained far better than I can in section 8.2 of The Design and Implementation
of the 4.4BSD Operating System.)
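A toy model of the interaction (all sizes below are made-up illustrative values, not real UFS or stripe defaults): when both the cylinder-group spacing and the stripe interval are powers of two, every group's metadata can land on the same spindle, while an odd spindle count spreads it across all disks:

```python
# Sketch: which disk holds the first block of each cylinder group?
# CG_SIZE and STRIPE are hypothetical block counts, both powers of two.
CG_SIZE = 32768   # illustrative cylinder-group spacing, in blocks
STRIPE = 128      # illustrative stripe interval, in blocks

def metadata_disks(ndisks, n_groups=64, cg_size=CG_SIZE, stripe=STRIPE):
    """Return the set of disks that receive cylinder-group metadata,
    assuming round-robin striping: disk = (block // stripe) % ndisks."""
    return {(g * cg_size // stripe) % ndisks for g in range(n_groups)}

print(sorted(metadata_disks(4)))  # [0] -- all metadata on one disk
print(sorted(metadata_disks(5)))  # [0, 1, 2, 3, 4] -- spread over every disk
```

With four disks the per-group stride (cg_size // stripe = 256 stripe units) is a multiple of the disk count, so every group maps to disk 0; with five disks the stride is coprime to the disk count and the metadata rotates through all spindles, which is the effect the odd-spindle advice above relies on.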
--
GeoffB
To Unsubscribe: send mail to [EMAIL PROTECTED]
with "unsubscribe freebsd-current" in the body of the message