You might also want to try toggling the Nagle TCP setting to see if that helps
with your workload:
ndd -get /dev/tcp tcp_naglim_def
(save that value, default is 4095)
ndd -set /dev/tcp tcp_naglim_def 1
If it makes no (or a negative) difference, set it back to the original value:
ndd -set /dev/tcp tcp_naglim_def <saved value>
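For example, a rough sketch of the whole save/toggle/restore sequence (assuming a
Bourne-compatible shell, run as root):
# save the current setting
ORIG=`ndd -get /dev/tcp tcp_naglim_def`
# a limit of 1 byte effectively disables Nagle coalescing
ndd -set /dev/tcp tcp_naglim_def 1
# ... run the backup / workload test here ...
# put the original value back
ndd -set /dev/tcp tcp_naglim_def $ORIG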
gm_sjo wrote:
> 2008/9/30 Jean Dion <[EMAIL PROTECTED]>:
>
>> If you want performance, you do not put all your I/O across the same physical
>> wire. Once again, you cannot go faster than the physical wire can support
>> (CAT5E, CAT6, fibre), no matter if it is layer 2 or not. Using VLANs on a
>> single port, you "share" the bandwidth between them.
2008/9/30 Jean Dion <[EMAIL PROTECTED]>:
> If you want performance, you do not put all your I/O across the same physical
> wire. Once again, you cannot go faster than the physical wire can support
> (CAT5E, CAT6, fibre), no matter if it is layer 2 or not. Using VLANs on a
> single port, you "share" the bandwidth between them.
On Mon, Sep 29, 2008 at 06:01:18PM -0700, Jean Dion wrote:
>
> The Legato client and server contain tuning parameters to avoid such small-file
> problems. Check your Legato buffer parameters. These buffers will use your
> server memory as a disk cache.
Our backup person tells me that there are no ...
On Tue, Sep 30, 2008 at 10:32:50AM -0700, William D. Hathaway wrote:
> Gary -
> Besides the network questions...
Yes, I suppose I should see if traffic on the iSCSI network is
hitting a limit of some sort.
> What does your zpool status look like?
Pretty simple:
$ zpool status
  pool: ...
A normal iSCSI setup splits network traffic at the physical layer, not the
logical layer. That means separate physical ports and, if you can, separate
physical PCI bridge chips. That would be fine for light traffic, but we are
talking about backup performance issues. The IP network and the number of
small files are very often the bottleneck.
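One rough way to see whether the backup stream and the iSCSI traffic actually
land on the same physical port (the "e1000g" driver name below is only an
example; substitute your own):
# snapshot the per-interface packet counters, wait a minute, snapshot again and compare
netstat -i
sleep 60
netstat -i
# byte counters for the NICs of one driver
kstat -p -m e1000g | egrep 'rbytes64|obytes64'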
2008/9/30 Jean Dion <[EMAIL PROTECTED]>:
> Simple. You cannot go faster than the slowest link.
That is indeed correct, but what is the slowest link when using a
Layer 2 VLAN? You made a broad statement that iSCSI 'requires' a
dedicated, standalone network. I do not believe this is the case.
> Any VLANs share the bandwidth and do not provide dedicated bandwidth for each of them.
Gary -
Besides the network questions...
What does your zpool status look like?
Are you using compression on the file systems?
(Compression was single-threaded; fixed in s10u4 or equivalent patches.)
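If you're not sure, something like this will show it (the pool name "tank" is
just a placeholder):
# lists the compression property for every dataset in the pool
zfs get -r compression tank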
For Solaris internal debugging tools, look here:
http://opensolaris.org/os/community/advocacy/events/techdays/seattle/OS_SEA_POD_JMAURO.pdf
ZFS specifics are available here:
http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide
Jean
On Mon, Sep 29, 2008 at 06:01:18PM -0700, Jean Dion wrote:
> Do you have dedicated iSCSI ports from your server to your NetApp?
Yes, it's a dedicated redundant gigabit network.
> iSCSI requires a dedicated network, not a shared network or even a VLAN.
> Backups cause large I/O that fills your network quickly, like any SAN today.
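It may also be worth watching the iSCSI LUNs from the Solaris side while a
backup is running; a rough check:
# extended device statistics, 5-second intervals, idle devices suppressed;
# watch asvc_t (service time) and %b (busy) on the iSCSI LUNs
iostat -xnz 5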
Simple. You cannot go faster than the slowest link.
Any VLANs share the bandwidth and do not provide dedicated bandwidth for each
of them. That means if you have multiple VLANs coming out of the same wire on
your server, you do not have "n" times the bandwidth but only a fraction of it.
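As a rough illustration (approximate numbers): a gigabit port carries at most
about 125 MB/s raw, and after Ethernet/IP/TCP overhead you typically see
something on the order of 110 MB/s of payload. If backup traffic and iSCSI ride
the same physical port on separate VLANs and the backup stream is pushing, say,
80 MB/s, only around 30 MB/s is left for iSCSI, no matter how the VLANs are
configured.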
2008/9/30 Jean Dion <[EMAIL PROTECTED]>:
> iSCSI requires a dedicated network, not a shared network or even a VLAN.
> Backups cause large I/O that fills your network quickly, like any SAN today.
Could you clarify why it is not suitable to use VLANs for iSCSI?
Do you have dedicated iSCSI ports from your server to your NetApp?
iSCSI requires a dedicated network, not a shared network or even a VLAN. Backups
cause large I/O that fills your network quickly, like any SAN today.
Backups are extremely demanding on hardware (CPU, memory, I/O ports, disks, etc.).
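If you want to see where the pressure lands during the backup window, the
standard Solaris tools are enough for a first look:
# per-thread CPU usage with microstate accounting, 5-second samples
prstat -mL 5
# system-wide memory, paging and scan rate
vmstat 5
# per-CPU utilization and cross calls
mpstat 5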
We have a moderately sized Cyrus installation with 2 TB of storage
and a few thousand simultaneous IMAP sessions. When one of the
backup processes is running during the day, there's a noticeable
slowdown in IMAP client performance. When I start my `mutt' mail
reader, it pauses for several seconds.