On 4/29/11 5:34 PM, Kate wrote:
> I have recently accepted a job that has a development lab with a lot of Sun
> equipment, fiber infrastructure, and more. I have not worked very much with
> either before. Over the last few weeks I have been trying to find the right
> answer for this si
I need some help figuring out what I'm looking for, so bear with me.
For a lot of distributed applications (web apps, distributed databases, etc.) I
need a tool that can both execute benchmarks and collect data from the nodes
under load, such as CPU, memory, disk ops and other p
Is iGen available? I see it referenced especially in the Mr.Benchmark
blog but can't find any place to get it. Anyone have the scoop?
benr.
___
perf-discuss mailing list
perf-discuss@opensolaris.org
Sean Liu wrote:
> Well, if we lost the clue of freemem 30 years ago, where the heck did the
> vmstat freemem Solaris 8 - 10 w/o ZFS come from??? Out of nowhere?
>
> And subtract zfs:arcstats:c_min from what?
>
> If I am asking the wrong question, please give me the right question.
>
I try to e
m...@bruningsystems.com wrote:
> Hi Jim,
> Jim Mauro wrote:
>>
>> mdb's memstat is cool in how it summarizes things, but it takes a very
>> long time to run on large systems. memstat is walking page lists, so
>> it should be quite accurate.
>> If you can live with the run time of ::memstat, it's cu
Jim Mauro wrote:
> Hi Ben - The difficulty in getting accurate memory accounting has to
> do with shared memory pages. Every process that has a shared page
> mapped to its address space gets charged (in terms of RSS measurements).
>
> I tend to use kstats (kstat -n system_pages) to get an idea of h
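Jim's point about shared pages charging every mapping process can be sketched with a toy example (this is illustration only, not Solaris code; the process names and page numbers are invented):

```python
# Toy illustration: why summing per-process RSS overcounts physical
# memory when pages are shared between processes.

def total_rss_vs_resident(mappings):
    """mappings: dict of process name -> set of physical page numbers.

    Returns (sum of per-process RSS in pages, actual resident pages).
    A shared page is charged to every process that maps it, so the
    RSS sum can exceed the real resident footprint.
    """
    rss_sum = sum(len(pages) for pages in mappings.values())
    resident = len(set().union(*mappings.values()))
    return rss_sum, resident

# Two hypothetical processes sharing pages 1 and 2 (e.g. a shared library).
mappings = {
    "httpd-1": {1, 2, 3},
    "httpd-2": {1, 2, 4},
}
rss_sum, resident = total_rss_vs_resident(mappings)
print(rss_sum, resident)  # 6 pages charged in total, only 4 actually resident
```

Summing VSZ/RSS from ps runs into exactly this double counting, which is one reason it disagrees with ::memstat and the kstats.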
rickey c weisner wrote:
> On Sat, Mar 28, 2009 at 03:31:59PM -0700, Ben Rockwood wrote:
>
>> unix:0:system_pages:pagesfree 7954235 <--- 31,816,940 (31071 MB)
>>
> Free (cachelist) + Free (freelist) = 30992 + 73 = 31065 MB
>
>> unix:
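The arithmetic in the quoted numbers lines up if you assume the 4 KB x86 base page size; a quick sketch (the page size is an assumption here, check pagesize(1) on the machine in question):

```python
PAGE_SIZE_KB = 4  # assumed x86 base page size; verify with /usr/bin/pagesize

def pages_to_mb(pages, page_kb=PAGE_SIZE_KB):
    """Convert a kstat page count to whole megabytes."""
    return pages * page_kb // 1024

# unix:0:system_pages:pagesfree from the quoted kstat output
print(pages_to_mb(7954235))  # -> 31071, matching the 31071 MB above
```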
m...@bruningsystems.com wrote:
> Hi Ben,
> Ben Rockwood wrote:
>> I'm curious as to why memory statistics seem to be so difficult to get
>> accurate. If you use kstats, mdb ::memstat, and add up VSZ/RSS
>> from ps, you get numbers that are different, although
I'm curious as to why memory statistics seem to be so difficult to get
accurate. If you use kstats, mdb ::memstat, and add up VSZ/RSS
from ps, you get numbers that are different, although close.
Can anyone shed some light on why this is? I assumed that ::memstat
is the most accurate me
I agree with Jim, we need some numbers to help. I would recommend also
looking not just at 'iostat' but also 'fsstat' to get a better idea of
what the IO load is like on a per-operation basis.
Some questions and suggestions come to mind:
1) Have you disabled atime on the dataset(s)? (zfs set atime=off po
Sean Liu wrote:
> This could be the ZFS cache depending on which sol 10 update you are using.
> Try mdb's ::memstat to get a detailed breakdown.
> mdb -k
> ::memstat
>
Agreed, if you're using ZFS that's almost certainly the ZFS ARC. Try this:
http://cuddletech.com/arc_summary
benr.
Peter Tribble wrote:
> On Wed, Feb 4, 2009 at 8:22 AM, adrian cockcroft
> wrote:
>
>> Don't write yet another performance stats collector / plotter, it's been done
>> to death.
>>
>
> It may have been done to death; has it been done properly?
>
> I've been playing with most of the candidate
Octave Orgeron wrote:
> What I meant was that such things have been left to 3rd-party tools and
> products by Sun. That should change. Monitoring and administrative tools are
> essential and should be robust out of the box.
>
I know what you meant, I was trying to be funny.
I think we're of
Octave Orgeron wrote:
> I think this would make a great project as monitoring for Solaris has pretty
> much been left to 3rd party tools by Sun. That's okay in corporate
> environments where people spend money on one tool for everything. However, if
> Solaris/OpenSolaris came with all the MIBs o
Jason King wrote:
> What I wanted to do initially was expose the data used by vmstat,
> mpstat, and fsstat -- I think the data being exposed is stable enough
> that we could do that. Possibly also include some ZFS arc data as
> well.
>
All of these are simple kstats. For fsstat data you will
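For reference, `kstat -p` emits one `module:instance:name:statistic` pair per line with a tab-separated value, which is easy to consume programmatically. A minimal parser sketch (the pagestotal value in the sample is invented for illustration; pagesfree is the figure quoted earlier in the thread):

```python
def parse_kstat_p(text):
    """Parse `kstat -p` output lines of the form
    module:instance:name:statistic<TAB>value
    into a dict keyed by the full statistic path."""
    stats = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition("\t")
        # numeric stats become ints; anything else stays a string
        stats[key] = int(value) if value.strip().isdigit() else value.strip()
    return stats

sample = (
    "unix:0:system_pages:pagesfree\t7954235\n"
    "unix:0:system_pages:pagestotal\t8378368\n"
)
stats = parse_kstat_p(sample)
print(stats["unix:0:system_pages:pagesfree"])  # -> 7954235
```

In practice you would feed this the output of `kstat -p -n system_pages` (or the vmstat/mpstat source kstats) rather than a hard-coded sample.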
Jason King wrote:
> Doing some more digging, it appears that the number of performance
> metrics that can be viewed via SNMP on OpenSolaris is minimal. I am
> proposing a project that will enhance the number of metrics available
> via SNMP. Since SNMP is fairly widespread, it allows one to avoid
n why tmpfs NFS shares would perform horribly? Or,
as I expected, should I be able to get a super-fast NFS share?
benr.
--
Ben Rockwood cuddletech.com
Joyent Inc. PGP: 0xC823A182 @ pgp.mit.edu
"...even at night his
Looks like some files are missing from the distribution. Is this what was
meant by: "Makefile cleanup needs to be done for stand-alone builds for
OpenSolaris, Linux, and OS/X, etc." ?
[uma:/tmp/filebench/filebench] root# make
make: Warning: Can't find `../Makefile.cmd': No such file or directory
Is anyone using Xanadu external to SWAN? I've been playing around with it and
haven't been able to get it going. Even after hacking out the SWAN URLs there are
things missing like "xanadu/Query" and "xanadu/DBLoader".
Any hints are appreciated. I'd really like to use it to create some pretty
gra
I've been using DTrace more and more to investigate storage performance issues
and keep bumping into something I can't properly explain. The following
snippet comes from Brendan's 'iotop':
...
102 13171 11536 pickup sd6 31 384 R 2364416
80 14597 10632 httpd
I was really hoping for some option other than ZIL_DISABLE, but finally gave up
the fight. Some people suggested NFSv4 helping over NFSv3 but it didn't... at
least not enough to matter.
ZIL_DISABLE was the solution, sadly. I'm running B43/X86 and hoping to get up
to 48 or so soonish (I BFU'd
The disk layout is RAIDZ2. The issue isn't present on ZFS local. The problem
is an NFS tuning issue. It is possible that there is a ZFS-NFS issue, but I
find that somewhat unlikely although I can't rule it out.
When I watch NFSD thread count during access I notice that only 5 threads open
du
Hey Guys,
I'm trying to tune my NFS environment and have yet to make any improvement;
I was hoping someone could offer some experience in this situation.
Here's the problem: I've got a bunch of X4100 clients (NV_B43) and a Thumper
(NV_B43) NFS server using ZFS for storage. I'm currently u