Derek Anderson wrote:
The 10GbE is paired with a 10GbE port on a VMware ESX 3.5 box.
The 1GbE is through a 1GbE switch configured as a four-port LACP group, which is used by a single port on the VMware box. I tried GigE to a four-port link aggregate and was not getting the results I hoped for, so I bo
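For reference, a four-port aggregate like the one described above would be built with dladm; a sketch using SXCE-era syntax (the device names and aggregation key are placeholders, and the exact syntax varies by release):

```shell
# Create aggregation key 1 over four GigE ports, then enable LACP:
dladm create-aggr -d e1000g0 -d e1000g1 -d e1000g2 -d e1000g3 1
dladm modify-aggr -l active 1

# The aggregate then plumbs as a single interface:
#   ifconfig aggr1 plumb 192.168.1.10/24 up
```

Note that LACP hashes each flow onto a single member link, so one TCP stream tops out at 1 Gb/s no matter how many ports are in the group.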
Derek Anderson wrote:
File copy 10GbE to SSD -> 40M max
What exactly is on the other side of the 10GbE and 1GbE links? A VM
on an ESX server, or another Solaris box? What MTU is in use?
Drew
___
perf-discuss mailing list
perf-discuss@opensolaris.o
Steve Sistare wrote:
This has the same signature as CR:
6694625 Performance falls off the cliff with large IO sizes
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6694625
which was raised in the network forum thread "expensive pullupmsg in
kstrgetmsg()"
http://www.opensolaris
Elad Lahav wrote:
> On Linux, it is possible to determine which processors on a
> multiprocessor handle each interrupt (via
> /proc/irq/IRQ_NUM/smp_affinity). From my experience, this can greatly
> affect the performance of, e.g., web servers.
> Is there an equivalent mechanism on Solaris?
echo
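For context, the Linux mechanism Elad describes takes a hex CPU bitmask written into procfs; a minimal sketch (the IRQ number 24 is a placeholder):

```shell
# Build the affinity bitmask for a given CPU: bit N is set for CPU N.
cpu=3
mask=$(printf '%x' $((1 << cpu)))
echo "$mask"    # prints 8 (bit 3 set)

# On Linux, writing the mask pins the IRQ to that CPU (requires root):
#   echo "$mask" > /proc/irq/24/smp_affinity
```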
Steve Sistare writes:
> This is a known issue on Solaris/x86. The issue is that over time,
> non-relocatable kernel memory is allocated at physical addresses
> throughout memory, fragmenting the PA space, and preventing the
> allocation of a physically contiguous large page. The only
> work
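A quick way to check whether a process actually obtained large pages on Solaris (the PID is a placeholder):

```shell
# Page sizes the platform supports:
pagesize -a

# Page size backing each mapping of PID 1234; a fragmented PA space
# shows up as 4K pages where a 2M/4M page was preferred:
pmap -s 1234
```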
[EMAIL PROTECTED] writes:
>
> Do you know about the following compiler flags (of sun studio compilers)
>
> -xpagesize_heap=a    Controls the preferred page size for the heap,
>                      a={4K|2M|4M|default}
> -xpagesize_stack=a   Controls the preferred page size for the stack,
>                      a={4K|2M
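A sketch of putting those Sun Studio flags to use and verifying the result (the program name is a placeholder; the preference is only honored when contiguous physical memory is available):

```shell
# Build with a 2M page preference for heap and stack:
cc -xpagesize_heap=2M -xpagesize_stack=2M -o myapp myapp.c

# Run it and inspect which page sizes actually back the heap:
./myapp &
pmap -s $! | grep heap
```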
Are there any tuning parameters which will increase the chances that
applications will be able to use large pages on amd64 anywhere near as
well as on sparc? Or is the x86/amd64 large page implementation just
not as mature?
I have noticed that using a fairly recent OpenSolaris (actually
Nexenta
Paul Durrant writes:
> Just doing some network TX perf. measurement on a Dell 1850 dual Xeon
> box and I see that DMA mapping my buffers seems to be incredibly
> costly: each mapping taking >8us, some >16us.
I don't see anything this bad on AMD-based systems when running a
64-bit kernel. I
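One way to quantify the per-mapping cost Paul reports is a DTrace fbt sketch on the DDI DMA bind path (run as root on Solaris; probe availability depends on the release):

```shell
# Latency distribution, in nanoseconds, of each DMA handle bind:
dtrace -n '
fbt::ddi_dma_addr_bind_handle:entry { self->ts = timestamp; }
fbt::ddi_dma_addr_bind_handle:return /self->ts/ {
        @["bind latency (ns)"] = quantize(timestamp - self->ts);
        self->ts = 0;
}'
```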
Shreyas writes:
> Any links will be of lot of help.
I maintain an out-of-tree driver for the Myri10GE NIC.
Performance data is available from:
http://www.myri.com/scs/performance/Myri10GE/#solaris
Note that this version of the driver uses GLDv2, since out-of-tree
drivers are not allowed to use G
Michael Schulte writes:
> Martin wrote:
> > "If another major OS (Linux) is faster, it's a bug"
> > is the OpenSolaris performance slogan.
> >
> > In this benchmark Ubuntu is getting approx. 22 000 requests per second
> > compared to Nevada's 20 000.
> >
> > This should either be fixed
Ezhilan Narasimhan writes:
> Hmm, that's what I am using too. But I am having issues getting it to run
> multithreaded. With the -P ## option, one or two of the threads go
> through, but the others fail with a connection refused. Not sure why.
>
I've never seen that behavior. I'm running:
'
Ezhilan writes:
> Hi,
>
> I am doing similar tests. What kind of load tool did you use to drive this?
>
Iperf is a good tool for this, as it is one of the few multi-threaded
network bandwidth measurement tools that I know of.
I also use multiple netperfs. Netperf is much less of a CPU hog
t
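For reference, a sketch of driving the link with both tools (hostname, stream count, and duration are placeholders):

```shell
# iperf: one client with 4 parallel TCP streams:
iperf -s                        # on the receiver
iperf -c receiver -P 4 -t 30    # on the sender

# netperf: several independent single-stream instances:
netserver                       # on the receiver
for i in 1 2 3 4; do            # on the sender
        netperf -H receiver -t TCP_STREAM -l 30 &
done
wait
```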
William D. Hathaway writes:
> Andrew Gallatin wrote:
> > William D. Hathaway writes:
> > > I would have thought this would be the preferred forum for your
> > question, but if nobody is biting you could also try the networking
> > discussion forum at: h
Jonathan Adams writes:
>
> You get the data from all CPUs; which CPU happens to run the END probe
> doesn't really matter. If you only want *data* from one CPU, you should
> do:
>
> profile:::profile-997
> /cpu == 10/
Ah, cool. Thanks!
Drew
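Filled out into a complete one-liner, Jonathan's suggestion looks like this (the CPU number and the aggregation body are illustrative):

```shell
# Sample kernel stacks at 997 Hz, keeping data only from CPU 10:
dtrace -n 'profile-997 /cpu == 10/ { @[stack()] = count(); }'
```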
William D. Hathaway writes:
> I would have thought this would be the preferred forum for your question,
> but if nobody is biting you could also try the networking discussion forum
> at: http://www.opensolaris.org/jive/forum.jspa?forumID=3
>
> I think if you added some details about what y
Is this the correct list to discuss performance problems I'm seeing when
trying to get my company's PCI-e 10GbE NIC to max out the link on an
Ontario server like it does on an amd64 box?
Thanks in advance,
Drew