Hmmm, looks like it might be Realtek lossage. CrystalDiskMark just finished
the read phase. Getting about 56MB/sec, which isn't tremendous, but it
beats the snot out of the 33 or so the RT was generating. I then re-ran the
iSCSI CrystalDiskMark test, and got about the same number! e.g. cifs is
Yes, the centos box is a mini-atom with onboard Realtek. The win7 box I was
using has the onboard Realtek too.
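For scale, a quick back-of-the-envelope in Python (assuming TCP/IPv4 over gigabit Ethernet at the standard 1500-byte MTU; the header sizes are standard figures, not from this thread). Even the improved 56 MB/sec sits well below what the wire itself can carry, which is consistent with the bottleneck being the NIC or protocol path rather than the link:

```python
# Back-of-the-envelope GbE payload ceiling vs. the observed figures.
# Assumes TCP/IPv4 (40 bytes of headers) at MTU 1500, plus 38 bytes of
# per-frame Ethernet overhead (14 header + 4 FCS + 8 preamble + 12 IFG).
LINK_BPS = 1_000_000_000
MTU, IP_TCP, ETH_OVERHEAD = 1500, 40, 38

ceiling_mbs = LINK_BPS / 8 * (MTU - IP_TCP) / (MTU + ETH_OVERHEAD) / 1e6

for mbs in (33, 56):
    print(f"{mbs} MB/sec is {mbs / ceiling_mbs:.0%} of the ~{ceiling_mbs:.0f} MB/sec ceiling")
```

Both figures land well under the ~119 MB/sec line-rate ceiling, so neither run was wire-limited.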
-Original Message-
From: Gregory Youngblood [mailto:greg...@youngblood.me]
Sent: Thursday, April 21, 2011 11:42 PM
To: Discussion list for OpenIndiana
Subject: Re: [OpenIndiana-discuss
Did both the CentOS and Windows boxes have Realtek cards? I don't know if I
missed that detail earlier. I don't know whether it will make much of a
difference, though in the past I do know that Realtek cards had "issues" and
sometimes wouldn't perform very well. These days I pretty much stick with
An Intel Pro/1000; unfortunately, it'll be a pain to try it. The centos box
has no PCI slots, so I'll have to pull my win7 box open and try it there.
Stay tuned...
-Original Message-
From: Gregory Youngblood [mailto:greg...@youngblood.me]
Sent: Thursday, April 21, 2011 11:05 PM
To: Discus
On Apr 21, 2011, at 5:55 PM, Dan Swartzendruber wrote:
>
> Oh, good point. Not sure what is going on then. My win7-64 box has realtek
> NIC, and perf is fine with everything but CIFS. The centos box also has
> realtek, IIRC. Odd...
>
Do you have a non realtek nic you can try?
On Thu, Apr 21 at 17:56, Gary Driggs wrote:
On Apr 21, 2011, at 4:23 PM, "Eric D. Mudama" wrote:
Except I'm not running them all the way to the client, and my
networking gear is cheap. The only jumbo link is between my server
and my first switch. If frame size had a large effect, I'd expect m
Oh, good point. Not sure what is going on then. My win7-64 box has realtek
NIC, and perf is fine with everything but CIFS. The centos box also has
realtek, IIRC. Odd...
-Original Message-
From: Eric D. Mudama [mailto:edmud...@bounceswoosh.org]
Sent: Thursday, April 21, 2011 7:23 PM
T
On Apr 21, 2011, at 4:23 PM, "Eric D. Mudama" wrote:
> Except I'm not running them all the way to the client, and my
> networking gear is cheap. The only jumbo link is between my server
> and my first switch. If frame size had a large effect, I'd expect my
> MTU1500 clients to show a performance
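Eric's point can be sketched numerically. A minimal Python sketch (assuming the standard 40 bytes of IP+TCP headers and 38 bytes of Ethernet framing overhead per frame; both are textbook figures, not from this thread) shows that jumbo frames only improve per-frame wire efficiency by a few percent even on an end-to-end jumbo path. And since the effective frame size is capped by the smallest-MTU hop, an MTU-1500 client never benefits from a single jumbo link anyway:

```python
# Per-frame payload efficiency for TCP/IPv4 at standard vs. jumbo MTU.
# Overheads assumed: 40 bytes IP+TCP headers inside the frame, plus
# 38 bytes on the wire per frame (header + FCS + preamble + inter-frame gap).
IP_TCP, ETH_OVERHEAD = 40, 38

def payload_efficiency(mtu: int) -> float:
    return (mtu - IP_TCP) / (mtu + ETH_OVERHEAD)

for mtu in (1500, 9000):
    print(f"MTU {mtu}: {payload_efficiency(mtu):.1%} payload efficiency")
```

That works out to roughly 94.9% at MTU 1500 versus 99.1% at MTU 9000: a ~4% framing-level gain at best, nowhere near enough to explain a 2x throughput difference.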
On Thu, Apr 21 at 18:59, Dan Swartzendruber wrote:
Well, there is a difference right there - you are running jumbo frames; I am
not.
Except I'm not running them all the way to the client, and my
networking gear is cheap. The only jumbo link is between my server
and my first switch. If frame
Well, there is a difference right there - you are running jumbo frames; I am
not.
-Original Message-
From: Eric D. Mudama [mailto:edmud...@bounceswoosh.org]
Sent: Thursday, April 21, 2011 5:45 PM
To: Discussion list for OpenIndiana
Subject: Re: [OpenIndiana-discuss] CIFS slow reads but f
On Apr 21, 2011, at 5:44 PM, Eric D. Mudama wrote:
> On Thu, Apr 21 at 13:53, Dan Swartzendruber wrote:
>> Gary wrote:
>>> I can't speak to this issue in regards to OpenIndiana but CIFS/samba
>>> has historically been much slower than NFS, FTP, and even netatalk,
>>> etc. due to its large metadata
the basic math behind the scenes is the following (and not entirely deterministic):
1. DDT data is kept in the metadata part of the ARC;
2. metadata default max is arc_c_max / 4 (note that you can raise that limit);
3. arc max is RAM - 1GB.
so, if you have 8GB of RAM, your arc max is 7GB and max metadata is 1.75GB
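The rule of thumb above works out as follows, as a sketch using the stated defaults. The ~320 bytes of core memory per DDT entry is a commonly cited ballpark for ZFS dedup sizing, not a figure from this thread:

```python
# Default ARC/metadata caps for an 8GB box, per the rules above.
GB = 1024 ** 3
ram = 8 * GB
arc_c_max = ram - 1 * GB          # ARC max: RAM minus 1GB
arc_meta_limit = arc_c_max // 4   # metadata max: a quarter of ARC

# Rough DDT capacity, assuming ~320 bytes of core memory per DDT entry
# (ballpark figure, not from this thread) and 128K records:
DDT_ENTRY_BYTES = 320
entries = arc_meta_limit // DDT_ENTRY_BYTES
unique_data_tb = entries * 128 * 1024 / 1024 ** 4

print(f"arc_c_max      = {arc_c_max / GB:.2f} GB")
print(f"arc_meta_limit = {arc_meta_limit / GB:.2f} GB")
print(f"~{entries / 1e6:.1f}M DDT entries, ~{unique_data_tb:.2f} TB of unique 128K records")
```

Under those assumptions an 8GB box keeps the DDT for well under 1TB of unique data in RAM, which helps explain why dedup on a multi-terabyte pool can fall over badly at that memory size.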
On Thu, Apr 21 at 14:12, James Kohout wrote:
All,
Been running opensolaris 134 with a 9T RaidZ2 array as a backup server
in a production environment. Whenever I tried to turn on ZFS
deduplication I always had crashes and other issues, which I most likely
attributed to the known ZFS dedup bugs
On Thu, Apr 21 at 13:53, Dan Swartzendruber wrote:
Gary wrote:
I can't speak to this issue in regards to OpenIndiana but CIFS/samba
has historically been much slower than NFS, FTP, and even netatalk,
etc. due to its large metadata overhead. One can observe this in the
wild with a few well-timed tcpdumps
All,
Been running opensolaris 134 with a 9T RaidZ2 array as a backup server
in a production environment. Whenever I tried to turn on ZFS
deduplication I always had crashes and other issues, which I most likely
attributed to the known ZFS dedup bugs in 134. Once I rebuilt the pool
without
I had a bug filed with Sun on Opensolaris long ago (CR 6850837, P2
utility/filesharing libshare enhancements to address performance and
scalability) and I thought I'd try OI to see if anything had improved
with recent builds.
I am trying to deploy around 6000 filesystems across 3 pools. Each
pool
Gary wrote:
I can't speak to this issue in regards to OpenIndiana but CIFS/samba
has historically been much slower than NFS, FTP, and even netatalk,
etc. due to its large metadata overhead. One can observe this in the
wild with a few well-timed tcpdumps. One thing that might be worth
investigating
I can't speak to this issue in regards to OpenIndiana but CIFS/samba
has historically been much slower than NFS, FTP, and even netatalk,
etc. due to its large metadata overhead. One can observe this in the
wild with a few well-timed tcpdumps. One thing that might be worth
investigating in this situa
> This is truly odd. I have replied several times with real info, and the
> posts do not get through, but I send a test one and it does :( Anyway, it
> seems to be protocol related, as nfs reads are about 70MB/sec, twice what
> either cifs or samba are doing...
Strange indeed - I've seen CIF