sorry, that 60% statement was misleading... i will VERY OCCASIONALLY get a
spike to 60%, but i'm averaging more like 15%, with the throughput often
dropping to zero for several seconds at a time.
that iperf test more or less demonstrates it isn't a network problem, no?
also i have been using mi
milosz writes:
> iperf test coming out fine, actually...
>
> iperf -s -w 64k
>
> iperf -c -w 64k -t 900 -i 5
>
> [ ID] Interval       Transfer     Bandwidth
> [  5] 0.0-899.9 sec  81.1 GBytes  774 Mbits/sec
>
> totally steady. i could probably implement some tweaks to improve it,
iperf test coming out fine, actually...
iperf -s -w 64k
iperf -c -w 64k -t 900 -i 5
[ ID] Interval       Transfer     Bandwidth
[  5] 0.0-899.9 sec  81.1 GBytes  774 Mbits/sec
totally steady. i could probably implement some tweaks to improve it, but if i
were getting a steady 77% of gigabit
Roch Bourbonnais wrote:
On Jan 4, 2009, at 21:09, milosz wrote:
thanks for your responses, guys...
the nagle's tweak is the first thing i did, actually.
not sure what the network limiting factors could be here... there's
no switch, jumbo frames are on... maybe it's the e1000g driver?
it's been wonky since 94 or so.
On Jan 4, 2009, at 21:09, milosz wrote:
> thanks for your responses, guys...
>
> the nagle's tweak is the first thing i did, actually.
>
> not sure what the network limiting factors could be here... there's
> no switch, jumbo frames are on... maybe it's the e1000g driver?
> it's been wonky since 94 or so.
thanks for your responses, guys...
the nagle's tweak is the first thing i did, actually.
not sure what the network limiting factors could be here... there's no switch,
jumbo frames are on... maybe it's the e1000g driver? it's been wonky since 94
or so. even during the write bursts i'm only ge
> What is less clear is why windows write performance drops to zero.
Perhaps the tweak for Nagle's algorithm in Windows would be in order?
http://blogs.sun.com/constantin/entry/x4500_solaris_zfs_iscsi_perfect
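for reference, the windows-side change usually meant by "the nagle tweak" in iscsi threads is a per-interface registry edit. the key path, adapter GUID placeholder, and value names below are a sketch for illustration, not quoted from this thread or the linked post; exact locations vary by windows release, and a reboot is needed for the values to take effect:

```reg
Windows Registry Editor Version 5.00

; Illustrative sketch only -- replace {adapter-GUID} with the iSCSI NIC's
; interface GUID. TcpAckFrequency=1 disables delayed ACK (the companion
; tweak usually applied alongside Nagle); TCPNoDelay=1 disables Nagle's
; algorithm itself. Verify both against your Windows release first.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\{adapter-GUID}]
"TcpAckFrequency"=dword:00000001
"TCPNoDelay"=dword:00000001
```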
--
This message posted from opensolaris.org
On Dec 9, 2008, at 03:16, Brent Jones wrote:
> On Mon, Dec 8, 2008 at 3:09 PM, milosz wrote:
>> hi all,
>>
>> currently having trouble with sustained write performance with my
>> setup...
>>
>> ms server 2003/ms iscsi initiator 2.08 w/intel e1000g nic directly
>> connected to snv_101 w/ intel
my apologies... 11s, 12s, and 13s represent the number of seconds into a
read/write period, not disk numbers. so, 11 seconds into a period, %b suddenly
jumps to 100 after having been 0 for the first 10.
On Mon, 8 Dec 2008, milosz wrote:
> compression is off across the board.
>
> svc_t is only maxed during the periods of heavy write activity (2-3
> seconds every 10 or so seconds)... otherwise disks are basically
> idling.
Check for some hardware anomaly which might impact disks 11, 12, and
13
compression is off across the board.
svc_t is only maxed during the periods of heavy write activity (2-3 seconds
every 10 or so seconds)... otherwise disks are basically idling.
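incidentally, the rough figures quoted up-thread (spikes near 60% of gigabit, ~15% average, 2-3 seconds of writing every ~10 seconds) are at least self-consistent. a quick sanity check, with the thread's approximate numbers plugged in as assumptions:

```python
# Burst-mode averaging: peak link utilization times the write duty cycle.
# All numbers below are the thread's rough figures, not measurements.
peak_util = 0.60                 # ~60% of gigabit during a write burst
burst_s, period_s = 2.5, 10.0    # 2-3 s of writing every ~10 s
avg_util = peak_util * (burst_s / period_s)
print(f"expected average utilization: {avg_util:.0%}")  # consistent with the ~15% observed
```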
On Mon, Dec 8, 2008 at 3:09 PM, milosz <[EMAIL PROTECTED]> wrote:
> hi all,
>
> currently having trouble with sustained write performance with my setup...
>
> ms server 2003/ms iscsi initiator 2.08 w/intel e1000g nic directly connected
> to snv_101 w/ intel e1000g nic.
>
> basically, given enough
> (with iostat -xtc 1)
it sure would be nice to know if actv > 0, so we would know whether the lun
was busy because its queue is full or just slow (svc_t > 200).
for tracking errors, try `iostat -xcen 1` and `iostat -E`.
Rob
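as a sketch of the distinction rob is drawing (queue backed up vs. device just slow), here's a toy classifier. the column layout is assumed from solaris `iostat -xn` output, and the sample numbers are invented for illustration:

```python
# Classify iostat -xn device lines: queue building (actv > 0) vs. slow
# service times (asvc_t > 200 ms). Column order assumed from Solaris
# `iostat -xn`; adjust indices for your release. Sample data is made up.
SAMPLE = """\
    r/s    w/s   kr/s    kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0  210.3    0.0 26500.1  0.0  8.7    0.0  260.4   0  99 c0t1d0
    0.0    0.2    0.0     1.1  0.0  0.0    0.0    0.8   0   0 c0t2d0
"""

def classify(line):
    f = line.split()
    actv, asvc_t = float(f[5]), float(f[7])   # active queue depth, avg service time (ms)
    if actv > 0 and asvc_t > 200:
        return "busy: queued and slow"
    if asvc_t > 200:
        return "slow device"
    if actv > 0:
        return "queue building"
    return "idle"

for line in SAMPLE.splitlines()[1:]:          # skip the header row
    print(line.split()[-1], "->", classify(line))
```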
hi all,
currently having trouble with sustained write performance with my setup...
ms server 2003/ms iscsi initiator 2.08 w/intel e1000g nic directly connected to
snv_101 w/ intel e1000g nic.
basically, given enough time, the sustained write behavior is perfectly
periodic. if i copy a large file
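for what it's worth, a perfectly periodic burst/idle pattern like this is what zfs's transaction-group sync produces: writes are absorbed into memory and flushed to disk in batches on a timer. the tunable below is a commonly mentioned knob, shown purely as an illustration; the name and default vary across builds, and the value is an assumption, not advice from this thread:

```
* /etc/system -- illustrative sketch only; verify the tunable name on your build
set zfs:zfs_txg_timeout = 5
```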