<internal and external aliases included>
Adrian C. and I wrote a book a while back (Capacity Planning for
Internet Services) that explains some simple M-value and
transactional capacity planning methods. There are several
other good ones too: Neil Gunther's work, for instance, or
Menasce/Almeida's book, which is heavier on the math side. Yeah,
we really should do a new edition to bring our book more up to date. :)
At the low end of the accuracy spectrum (what I call "good
enough" capacity planning), you can use sar, iostat, vmstat,
mpstat, and a bunch of other free tools to get a good view
of how your system is consuming resources (CPU, memory, network,
and disk), and then feed that profile into a model of the new
system to predict consumption. A StarOffice spreadsheet can make
a pretty decent model for you. :)
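To make the "good enough" approach concrete, here's a rough sketch in
Python (a spreadsheet cell does the same job). The sar/throughput
numbers and the 70% utilization ceiling are made up for illustration,
not anything official; plug in your own measurements:

# Rough sketch of "good enough" transactional capacity planning.
# Inputs are things you can read straight out of sar/vmstat and your
# application logs; the numbers here are made up for illustration.

measured_cpu_util = 0.35      # average %busy from sar -u, as a fraction
measured_tx_rate  = 120.0     # transactions/sec from the app logs
target_tx_rate    = 300.0     # the load you want to plan for
util_ceiling      = 0.70      # keep headroom; 70% is a common rule of thumb

# Utilization law: U = X * S, so per-transaction CPU cost S = U / X.
cpu_per_tx = measured_cpu_util / measured_tx_rate

projected_util = cpu_per_tx * target_tx_rate
print(f"projected CPU utilization at {target_tx_rate:.0f} tx/s: "
      f"{projected_util:.0%}")

if projected_util > util_ceiling:
    # How much more CPU capacity (relative to today) you'd need to stay
    # under the ceiling at the target load.
    growth_factor = projected_util / util_ceiling
    print(f"need roughly {growth_factor:.1f}x the current CPU capacity")
else:
    print("current box should cope, with headroom to spare")

The same idea extends to memory, network, and disk: measure the
per-transaction cost, multiply by the target rate, and compare against
what the box has.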
At the high end are tools like TeamQuest (http://www.teamquest.com)
and BMC Perform/Predict (http://www.bmc.com). These tools build
a *very* accurate consumption and capacity model by looking at
individual processes/threads, and can break out a "transaction"
as part of the workload profiling. There is a short section
toward the end of our book with screenshots of TQ and BMC at
work.
bill.
Mike Gerdts wrote:
On 2/23/06, Atul Vidwansa <[EMAIL PROTECTED]> wrote:
Performance Experts,
We would like to know whether there is a capacity planning tool for
Solaris that we can use to predict the performance/load characteristics
of a server. Given the current load and CPU/memory configuration of a
server (say, a V440), how much load can a server of a different type
(say, a V6800) sustain? Or how many CPUs, and how much memory, would we
need to add to the current server to support a certain load?
Any help will be appreciated.
Regards,
-Atul
I almost advised you to go find your Sun sales rep... Instead, look
around inside Sun for the "M-values" for the various servers. These are
commonly used by Sun's sales force and some customers to perform such
sizings. The interesting thing here is that the M-values even take
into account that Solaris 10 is faster than Solaris 9, which is faster
than Solaris 8, on the same hardware.
Essentially, the M-value is a number that comes from a composite of
several benchmarks. Like any benchmark, it may not represent the
workload that you are running. However, in my experience the values
that Sun claims for its systems translate rather reliably into their
relative performance.
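To make the arithmetic concrete, here's a rough sketch (Python, but it
really is just a ratio) of how you'd answer the original V440-vs-V6800
question. The M-values below are invented placeholders, not real Sun
figures, since the real ones are under NDA:

# Relative sizing with M-values: capacity scales (roughly) with the
# ratio of the composite numbers.  The M-values below are invented
# placeholders, NOT real Sun figures -- substitute the NDA'd ones.

mvalue = {
    "V440":  10.0,   # hypothetical
    "V6800": 55.0,   # hypothetical
}

current_box  = "V440"
target_box   = "V6800"
current_util = 0.60              # measured average CPU utilization

# Same workload moved to the target box:
projected_util = current_util * mvalue[current_box] / mvalue[target_box]
print(f"{target_box}: projected utilization ~{projected_util:.0%}")

# Or: how many "current boxes" worth of load the target can sustain
# at the same utilization level.
print(f"{target_box} can carry ~"
      f"{mvalue[target_box] / mvalue[current_box]:.1f}x the V440's load")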
I wish that Sun would disclose M-values without NDAs. It would serve as
a great mechanism for assigning a relative CPU power value to each
server that could then be used by workload management software. FWIW,
in my shop I am rolling out resource controls based upon M-values. I
take the M-value of a particular V240 configuration and say that it has
10 Zone Power Units (ZPUs). Then, as zones are assigned to a machine,
each one goes into a resource pool that gets 1 share per ZPU. For
example, if a workload needs roughly half a V240, I give it 5 shares
(advertised to application teams as ZPUs). I never put more than 10
shares on a V240. If a workload moves from a V240 to a T2000, it keeps
the number of ZPUs (shares) it had on the V240; since the T2000 can
handle a lot more ZPUs than the V240, it will likely run a greater
number of zones, or larger zones.
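Here's a sketch of that ZPU bookkeeping. Again, the M-values are
invented placeholders; the only real convention is the
10-ZPUs-per-reference-V240 anchor described above:

# Sketch of the ZPU bookkeeping described above.  M-values are invented
# placeholders (the real ones are NDA'd); the 10-ZPUs-per-V240 anchor is
# the convention from the text.

mvalue = {            # hypothetical composite numbers
    "V240":  4.0,
    "T2000": 14.0,
}

ZPU_PER_V240 = 10     # by definition: the reference V240 == 10 ZPUs

def zpu_capacity(machine):
    """Whole-machine capacity in Zone Power Units, scaled off the V240."""
    return round(ZPU_PER_V240 * mvalue[machine] / mvalue["V240"])

# Zones keep their ZPU count (== FSS shares) when they move between boxes.
zones = {"appA": 5, "appB": 3, "appC": 4}    # zone -> ZPUs (shares)

for machine in ("V240", "T2000"):
    cap = zpu_capacity(machine)
    used = sum(zones.values())
    print(f"{machine}: capacity {cap} ZPUs, requested {used} ZPUs "
          f"-> {'fits' if used <= cap else 'over-committed'}")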
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com/
--
Bill Walker Geek at Large [EMAIL PROTECTED]
Principal Engineer 703.850.9527
http://www.thebunker.com
Sun Microsystems Federal http://blogs.sun.com/mrbill
I used to think that the brain was the most wonderful organ in my body.
Then I realized who was telling me this.
_______________________________________________
perf-discuss mailing list
perf-discuss@opensolaris.org