Thanks for the responses.

Guus, in this case, "Deny All" describes the Default Zone parameter.  It
enables (Allow) or disables (Deny) communication among ports/devices that
are not defined in the active zone set, or when there is no active zone set.

Edward, the server is connected via 1Gb Ethernet... that's another story,
but yes, the top speed of roughly 800Mbps has been accurately measured.
Even though it's not at the theoretical maximum, I'd be happy getting
anywhere close to it at this point.  (I don't have control over our
Ethernet network and have to work with what I've got at the moment.)
Remember, these are all Windows servers, so dd is not, I think, an option.
The measured transfer rate is reconciled between the network drivers on
both servers (HP's NIC utility, proxy server to backup server) and the
QLogic SANsurfer tool on the server in question, which monitors IOPS and
bps at the HBA.  The test has been pretty reliable: streaming an image
backup of a server that occupies 10GB on the array, from proxy server to
backup server.
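
For scale, 10GB is about 80,000Mbit, so the expected wall-clock time for
that stream is easy to sanity-check:

    80,000Mbit / 800Mbps = ~100 seconds
    80,000Mbit / 500Mbps = ~160 seconds
    80,000Mbit / 350Mbps = ~230 seconds

The gap between those rates is plainly visible on a stopwatch, well
outside any measurement noise.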

I've verified that all firmware and drivers are current.  Today I started
tinkering with the transfer size at the HBA on the proxy server; lowering
it from the default 512K to 32K resulted in an instant performance
increase of roughly 33%.  That gets me up to roughly 500Mbps, but still
nowhere near 800Mbps.
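
If it helps to reproduce this without dd, a few lines of Python would
time the raw read side on the Windows boxes (the path below is
hypothetical, and per Edward's caution, the file should be larger than
the server's RAM so the Windows file cache can't flatter the numbers):

    import time

    # Hypothetical path: any file on the FC array larger than the
    # server's RAM, so the OS cache can't serve it back from memory.
    PATH = r"D:\backups\test_image.bin"
    CHUNK = 1024 * 1024  # 1MB reads

    start = time.monotonic()
    total = 0
    with open(PATH, "rb", buffering=0) as f:  # unbuffered binary reads
        while True:
            buf = f.read(CHUNK)
            if not buf:
                break
            total += len(buf)
    elapsed = time.monotonic() - start
    print(f"{total * 8 / elapsed / 1e6:.0f} Mbps over {elapsed:.0f}s")

Run once per HBA transfer-size setting, that gives an apples-to-apples
number that is independent of the backup software.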

Cables... SFPs... different ports... I'm not sure what else I can look
at, aside from swapping out the QLogic FC switches in favor of Brocades,
but that would be the nuclear option, and I want DATA before I make that
kind of recommendation.  Unfortunately, I don't have strong data beyond
the performance results themselves.
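
If it comes to that, the plan would be to run the same timed transfer
once per permutation (cable, SFP, switch port, transfer size) and put the
Mbps numbers side by side; a table like that is the data I'd want in hand
before recommending a switch swap.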


On Wed, Feb 5, 2014 at 8:09 AM, Edward Ned Harvey (lopser) <
lop...@nedharvey.com> wrote:

> > From: tech-boun...@lists.lopsa.org [mailto:tech-boun...@lists.lopsa.org]
> > On Behalf Of Michael Ryder
> >
> > Everything on there is running at 4Gb, and we're not pushing the
> > limits of the
> > ...
> > another set of Brocade-based HP switches, could push almost 800Mbps
> > (basically the limit of the NIC).  But now on these QLogics, I can
> > only get, on average ~350Mbps.
>
> Since neither 800Mbps nor 350Mbps is anywhere near the speed of your
> fabric (it's not even approaching the speed of 1GigE), I think you're
> either measuring the speed incorrectly, or the speed is limited
> elsewhere, such as the actual disk IO reading from the backup source.
>
> If you want to test it better, I suggest creating a new LUN which is
> much larger than all the RAM in your system, and then:
>
>     time dd if=/dev/zero bs=1024k | pv > /dev/newLUN
>
> and then:
>
>     time dd if=/dev/newLUN bs=1024k | pv > /dev/null
>
> Be aware that buffering and caching can introduce dramatic inaccuracies
> into your measurements.  So don't just watch it for a few seconds and
> assume you have a result.  Let it run for at least several minutes, if
> not all the way to completion.
>
