On Dec 9, 2008, at 03:16, Brent Jones wrote:

> On Mon, Dec 8, 2008 at 3:09 PM, milosz <mew...@gmail.com> wrote:
>> hi all,
>>
>> currently having trouble with sustained write performance with my
>> setup...
>>
>> ms server 2003 / ms iscsi initiator 2.08 w/ intel e1000g nic, directly
>> connected to snv_101 w/ intel e1000g nic.
>>
>> basically, given enough time, the sustained write behavior is
>> perfectly periodic. if i copy a large file to the iscsi target,
>> iostat reports 10 seconds or so of -no- writes to disk, just small
>> reads... then 2-3 seconds of disk-maxed writes, during which time
>> windows reports the write performance dropping to zero (disk queues
>> maxed).

This looks consistent with the write rate being limited by network
factors: the disks sit idle while the next ZFS transaction group is
being formed, and the whole group is then committed in one short burst.
What is less clear is why the Windows-side write performance drops to
zero. One possible explanation is that during the write bursts the
small reads are starved, preventing progress on the initiator side.

-r
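One way to confirm that the ~10-second cadence is transaction-group
driven is to time spa_sync() with DTrace. A rough sketch, assuming the
fbt probes for spa_sync are available on your build:

  dtrace -n '
    /* record when a txg sync starts */
    fbt::spa_sync:entry { self->t = timestamp; }
    /* on return, bucket the sync duration in milliseconds */
    fbt::spa_sync:return /self->t/ {
      @["spa_sync duration (ms)"] = quantize((timestamp - self->t) / 1000000);
      self->t = 0;
    }'

If syncs fire every 10 seconds or so and each one runs for the 2-3
seconds that iostat shows as 100% busy, the burst pattern is just
normal txg batching, and the real question is why the initiator stalls
during the commit.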
>> so iostat will report something like this for each of my zpool
>> disks (with iostat -xtc 1):
>>
>>  1s: %b   0
>>  2s: %b   0
>>  3s: %b   0
>>  4s: %b   0
>>  5s: %b   0
>>  6s: %b   0
>>  7s: %b   0
>>  8s: %b   0
>>  9s: %b   0
>> 10s: %b   0
>> 11s: %b 100
>> 12s: %b 100
>> 13s: %b 100
>> 14s: %b   0
>> 15s: %b   0
>>
>> it looks like solaris hangs out caching the writes and not actually
>> committing them to disk... when the cache gets flushed, the
>> iscsitgt (or whatever) just stops accepting writes.
>>
>> this is happening across controllers and zpools. also, a test copy
>> of a 10gb file from one zpool to another (not iscsi) yielded
>> similar iostat results: 10 seconds of big reads from the source
>> zpool, 2-3 seconds of big writes to the target zpool (the target
>> zpool is 5x bigger than the source zpool).
>>
>> anyone got any ideas? point me in the right direction?
>>
>> thanks,
>>
>> milosz
>
> Are you running compression? I see this behavior with heavy loads
> and GZIP compression enabled.
> What does 'zfs get compression' say?
>
> --
> Brent Jones
> br...@servuhome.net
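For reference, checking the property Brent is asking about, and ruling
it out for the dataset backing the iscsi target, would look something
like this (the pool and dataset names here are placeholders):

  # show the compression setting for every dataset in the pool
  zfs get -r compression tank

  # temporarily disable compression on the volume backing the iscsi target
  zfs set compression=off tank/iscsivol

If the stalls persist with compression off, gzip overhead can be ruled
out and the txg batching explanation above becomes more likely.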