> I was just wondering if maybe there's a problem with just one
> disk...
No, this is something I have observed on at least four different systems, with
vastly varying hardware. Probably just the effects of the known problem.
Thanks,
--
/ Peter Schuller
PGP userID: 0xE9758B7D or 'Peter Schuller'
Hello Peter,
Tuesday, December 18, 2007, 5:12:48 PM, you wrote:
>> Sequential writing problem with process throttling - there's an open
>> bug for it for quite a while. Try to lower txg_time to 1s - should
>> help a little bit.
PS> Yeah, my post was mostly to emphasize that on commodity hardware
> Sequential writing problem with process throttling - there's an open
> bug for it for quite a while. Try to lower txg_time to 1s - should
> help a little bit.
Yeah, my post was mostly to emphasize that on commodity hardware raidz2 does
not even come close to being a CPU bottleneck. It wasn't a
Frank Penczek writes:
> Hi,
>
> On Dec 17, 2007 4:18 PM, Roch - PAE <[EMAIL PROTECTED]> wrote:
> > >
> > > The pool holds home directories so small sequential writes to one
> > > large file present one of a few interesting use cases.
> >
> > Can you be more specific here ?
> >
> > Do
Hi,
On Dec 17, 2007 4:18 PM, Roch - PAE <[EMAIL PROTECTED]> wrote:
> >
> > The pool holds home directories so small sequential writes to one
> > large file present one of a few interesting use cases.
>
> Can you be more specific here ?
>
> Do you have a body of applications that would do small
>
>> r/s   w/s  kr/s   kw/s  wait  actv  wsvc_t  asvc_t  %w  %b  device
>> 0.0  48.0   0.0  3424.6  0.0  35.0    0.0   728.9    0  100 c2t8d0
> That service time is just terrible!
Yeah, that service time is unreasonable. Almost a second for each
command? And 35 more commands queued? (reorder =
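For reference, those columns match Solaris' extended iostat output; a minimal
sketch of capturing the same view (assuming the same host and device names):

  # extended stats, logical device names, skip idle devices, 1-second samples
  # (the first block of output is the average since boot, so ignore it)
  iostat -xnz 1

asvc_t is the average service time in milliseconds and actv is the number of
commands active on the device, which is where the "almost a second" and the
"35 commands" above come from.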
Frank Penczek writes:
> Hi,
>
> On Dec 17, 2007 10:37 AM, Roch - PAE <[EMAIL PROTECTED]> wrote:
> >
> >
> > dd uses a default block size of 512B. Does this map to your
> > expected usage? When I quickly tested the CPU cost of small
> > reads from cache, I did see that ZFS was more costly
Hi,
On Dec 17, 2007 10:37 AM, Roch - PAE <[EMAIL PROTECTED]> wrote:
>
>
> dd uses a default block size of 512B. Does this map to your
> expected usage? When I quickly tested the CPU cost of small
> reads from cache, I did see that ZFS was more costly than UFS
> up to a crossover between 8K and 16
dd uses a default block size of 512B. Does this map to your
expected usage? When I quickly tested the CPU cost of small
reads from cache, I did see that ZFS was more costly than UFS
up to a crossover between 8K and 16K. We might need a more
comprehensive study of that (data in/out of cache, di
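One cheap way to take the 512B default out of the picture is to repeat the
run with an explicit block size; a sketch only, the bs and count values below
are assumptions rather than anything from the original test:

  # same target path as the earlier NFS test, but 128K writes instead of 512B
  dd if=/dev/zero of=/home/fpz/file.tmp bs=128k count=800

Comparing that against the 512B run shows how much of the cost is per-write
overhead rather than raw bandwidth.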
Robert Milkowski wrote:
> Hello James,
>
> Sunday, December 16, 2007, 9:54:18 PM, you wrote:
>
> JCM> hi Frank,
>
> JCM> there is an interesting pattern here (at least, to my
> JCM> untrained eyes) - your %b starts off quite low:
> JCM> All of which, to me, look like you're filling a buffer
Hello James,
Sunday, December 16, 2007, 9:54:18 PM, you wrote:
JCM> hi Frank,
JCM> there is an interesting pattern here (at least, to my
JCM> untrained eyes) - your %b starts off quite low:
JCM> Frank Penczek wrote:
JCM>
>> ---
>> dd'ing to NFS mount:
>> [EMAIL PROTECTED]://tmp> dd if=./f
hi Frank,
there is an interesting pattern here (at least, to my
untrained eyes) - your %b starts off quite low:
Frank Penczek wrote:
> ---
> dd'ing to NFS mount:
> [EMAIL PROTECTED]://tmp> dd if=./file.tmp of=/home/fpz/file.tmp
> 20+0 records in
> 20+0 records out
> 10240 bytes
>  r/s   w/s  kr/s   kw/s  wait  actv  wsvc_t  asvc_t  %w  %b  device
>  0.0  48.0   0.0  3424.6  0.0  35.0    0.0   728.9    0  100 c2t8d0
>  0.0  60.0   0.0  4280.8  0.0  35.0    0.0   583.1    0  100 c2t9d0
>  0.0  55.0   0.0  3938.2  0.0  35.0    0.0   636.1    0  100 c2t10d0
>  0.0  56.0
Hi,
On Dec 14, 2007 8:24 PM, Richard Elling <[EMAIL PROTECTED]> wrote:
> Frank Penczek wrote:
> >
> > The performance is slightly disappointing. Does anyone have
> > a similar setup and can anyone share some figures?
> > Any pointers to possible improvements are greatly appreciated.
> >
> >
>
> Us
Hi,
sorry for the lengthy post ...
On Dec 15, 2007 1:56 PM, Robert Milkowski <[EMAIL PROTECTED]> wrote:
[...]
> Sequential writing problem with process throttling - there's an open
> bug for it for quite a while. Try to lower txg_time to 1s - should
> help a little bit.
Since setting txg_time to
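For completeness, a sketch of how txg_time has usually been lowered on builds
of that era, assuming it is still a plain kernel variable in the zfs module
(worth verifying before poking it):

  # live change with mdb: 0t1 means decimal 1 (second)
  echo "txg_time/W 0t1" | mdb -kw

  # or persistently across reboots, via /etc/system:
  # set zfs:txg_time = 1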
Hi,
On Dec 14, 2007 7:50 PM, Louwtjie Burger <[EMAIL PROTECTED]> wrote:
[...]
> I would have said ... to be expected, since the 280 came with a
> 100Mbit interface. So a 9-12 MB/s peak would be acceptable. You did
> mention a "gigabit switch"... did you install a gigabit HBA? If
> that's the case
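Whether the link really negotiated gigabit is quick to confirm; a sketch,
assuming Solaris 10's dladm is available and guessing at the interface name:

  dladm show-dev               # link state, speed and duplex per interface
  kstat -p | grep link_speed   # raw driver statistic for many NIC drivers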
Hello Peter,
Saturday, December 15, 2007, 7:45:50 AM, you wrote:
>> Use a faster processor or change to a mirrored configuration.
>> raidz2 can become processor bound in the Reed-Solomon calculations
>> for the 2nd parity set. You should be able to see this in mpstat, and to
>> a coarser grain i
> Use a faster processor or change to a mirrored configuration.
> raidz2 can become processor bound in the Reed-Solomon calculations
> for the 2nd parity set. You should be able to see this in mpstat, and to
> a coarser grain in vmstat.
Hmm. Is the OP's hardware *that* slow? (I don't know enough
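Either way it is cheap to check while the dd is running; something like:

  mpstat 5    # per-CPU view: a parity bottleneck shows up as high sys with no idl
  vmstat 5    # coarser view: watch the sy and id columns

should settle whether the Reed-Solomon work comes anywhere near saturating a
CPU on this box.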
Frank Penczek wrote:
>
> The performance is slightly disappointing. Does anyone have
> a similar setup and can anyone share some figures?
> Any pointers to possible improvements are greatly appreciated.
>
>
Use a faster processor or change to a mirrored configuration.
raidz2 can become processor bound in the Reed-Solomon calculations
for the 2nd parity set. You should be able to see this in mpstat, and to
a coarser grain in vmstat.
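For comparison, a mirrored layout over the same disks would look roughly like
the sketch below; the pool name and the fourth device are assumptions, since
only c2t8d0, c2t9d0 and c2t10d0 appear in the iostat output:

  # hypothetical: two 2-way mirrors instead of raidz2; half the usable
  # capacity, but no second-parity Reed-Solomon work in the write path
  zpool create home mirror c2t8d0 c2t9d0 mirror c2t10d0 c2t11d0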
> The throughput when writing from a local disk to the
> zpool is around 30MB/s, when writing from a client
Err.. sorry, the internal storage would be good old 1Gbit FCAL disks @
10K rpm. Still, not the fastest around ;)
Hi all,
we are using the following setup as file server:
---
# uname -a
SunOS troubadix 5.10 Generic_120011-14 sun4u sparc SUNW,Sun-Fire-280R
# prtconf -D
System Configuration: Sun Microsystems sun4u
Memory size: 2048 Megabytes
System Peripherals (Software Nodes):
SUNW,Sun-Fire-280R (driver n