A few problems that come to mind with this:
a) Hardware is sometimes changed under the hood without telling the customer,
even for server-level hardware. (Been there.)
b) Hardware recommendations would get stale quickly. What use is a
hardware spec...
On May 4, 2007, at 12:11 PM, Josh Berkus wrote:
Sebastian,
Before inventing a hyper tool, we might consider providing 3-5 example
scenarios for common hardware configurations. This consumes less time
and can be discussed and defined in a couple of days. This is of course not
the correct option...
Mark Kirkwood wrote:
> Josh Berkus wrote:
>> Sebastian,
>>
>>> Before inventing a hyper tool, we might consider providing 3-5 example
>>> scenarios for common hardware configurations. This consumes less time
>>> and can be discussed and defined in a couple of days. This is of course not
>>> the correct option for a brand-new 20-spindle SATA 10,000 RPM RAID 10...
On Fri, May 04, 2007 at 09:07:53PM -0400, Greg Smith wrote:
> As if I don't know what the bogo stands for, ha! I brought that up
> because someone suggested testing CPU speed using some sort of idle loop.
> That's exactly what bogomips does.
Just for reference (I'm sure you know, but others might not)...
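(For illustration, a rough Python sketch of the idle-loop style of test being
described -- the loop count is arbitrary, and the resulting figure, like
bogomips, says little about real query performance:)

    import time

    def busy_loop_rate(n=10000000):
        # Time a fixed busy loop and report iterations per second --
        # a bogomips-style figure, nothing more.
        start = time.perf_counter()
        i = 0
        while i < n:
            i += 1
        return n / (time.perf_counter() - start)

    print("%.1f million loops/sec" % (busy_loop_rate() / 1e6))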
Josh Berkus wrote:
Sebastian,
Before inventing a hyper tool, we might consider providing 3-5 example
scenarios for common hardware configurations. This consumes less time
and can be discussed and defined in a couple of days. This is of course not
the correct option for a brand-new 20-spindle SATA 10,000 RPM RAID 10...
On Fri, 4 May 2007, Michael Stone wrote:
P4 2.4GHz        107 ms
Xeon 3GHz        100 ms
Opteron 275       65 ms
Athlon X2 4600    61 ms
PIII 1GHz        265 ms
Opteron 250       39 ms
something seems inconsistent here.
I don't see what you mean. The PIII results are exactly what I'd expect,
and I wouldn't...
Josh Berkus wrote:
> Sebastian,
>
>> Before inventing a hyper tool, we might consider providing 3-5 example
>> scenarios for common hardware configurations. This consumes less time
>> and can be discussed and defined in a couple of days. This is of course not
>> the correct option for a brand-new 20-spindle SATA 10,000 RPM RAID 10...
Sebastian,
> Before inventing a hyper tool, we might consider providing 3-5 example
> scenarios for common hardware configurations. This consumes less time
> and can be discussed and defined in a couple of days. This is of course not
> the correct option for a brand-new 20-spindle SATA 10,000 RPM RAID 10...
[EMAIL PROTECTED] wrote:
> On Tue, 1 May 2007, Carlos Moreno wrote:
>
>>> large problem from a slog perspective; there is no standard way even
>>> within Linux to describe CPUs, for example. Collecting available disk
>>> space information is even worse. So I'd like some help on this
>>> portion...
On Fri, May 04, 2007 at 12:33:29AM -0400, Greg Smith wrote:
-bash-3.00$ psql
postgres=# \timing
Timing is on.
postgres=# select count(*) from generate_series(1,100000,1);
 count
--------
 100000
(1 row)
Time: 106.535 ms
There you go, a completely cross-platform answer. You should run the
statement...
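(If that test were folded into a tuning script, a hypothetical Python wrapper
might look like the following -- the query and run count are arbitrary, and it
assumes psql is on the PATH and can connect without prompting:)

    import re
    import statistics
    import subprocess

    SQL = "\\timing\nselect count(*) from generate_series(1,100000,1);\n"

    def median_query_ms(runs=5):
        # Run the statement through psql several times and take the
        # median of the "Time: ... ms" lines that \timing prints.
        times = []
        for _ in range(runs):
            out = subprocess.run(["psql", "-X", "-q"], input=SQL,
                                 capture_output=True, text=True,
                                 check=True).stdout
            m = re.search(r"Time: ([0-9.]+) ms", out)
            if m:
                times.append(float(m.group(1)))
        return statistics.median(times)

    print("median: %.1f ms" % median_query_ms())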
On Thu, 3 May 2007, Josh Berkus wrote:
So any attempt to determine "how fast" a CPU is, even on a 1-5 scale,
requires matching against a database of regexes which would have to be
kept updated.
This comment, along with the subsequent commentary today going far astray
into CPU measurement land...
On Thu, 3 May 2007, Carlos Moreno wrote:
> error like this or even a hundred times this!! Most of the time
> you wouldn't, and definitely if the user is careful it would not
> happen --- but it *could* happen!!! (and when I say could, I
> really mean: trust me, I have actually seen it...
That would be a valid argument if the extra precision came at a
considerable cost (well, or at whatever cost, considerable or not).
The cost I am seeing is the cost of portability (getting similarly
accurate info from all the different operating systems).
Fair enough --- as I mentioned, I...
On Thu, 3 May 2007, Carlos Moreno wrote:
I don't think it's that hard to get system time to a reasonable level (if
this config tuner needs to run for a minute or two to generate numbers,
that's acceptable; it's only run once),
but I don't think that the results are really that critical.
Still...
I don't think it's that hard to get system time to a reasonable level
(if this config tuner needs to run for a minute or two to generate
numbers, that's acceptable; it's only run once),
but I don't think that the results are really that critical.
Still --- this does not provide a valid argument...
On Thu, 3 May 2007, Carlos Moreno wrote:
> been just being naive) --- I can't remember the exact name, but I remember
> using (on some Linux flavor) an API call that fills a struct with data
> on the resource usage for the process, including CPU time; I assume
> measured with precision...
been just being naive) --- I can't remember the exact name, but I remember
using (on some Linux flavor) an API call that fills a struct with data on the
resource usage for the process, including CPU time; I assume measured
with precision (that is, immune to issues of other applications running...
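(The half-remembered call sounds like getrusage(2). A minimal sketch via
Python's resource module, Unix-only, just to show the struct being described --
it reports CPU time charged to this process alone, so other running
applications don't affect it:)

    import resource

    # Per-process CPU usage from getrusage(2).
    usage = resource.getrusage(resource.RUSAGE_SELF)
    print("user CPU:   %.3f s" % usage.ru_utime)
    print("system CPU: %.3f s" % usage.ru_stime)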
On Thu, 3 May 2007, Carlos Moreno wrote:
> CPUs, 32/64bit, or clock speeds. So any attempt to determine "how fast"
> a CPU is, even on a 1-5 scale, requires matching against a database of
> regexes which would have to be kept updated.
>
> And let's not even get started on Windows.
CPUs, 32/64bit, or clock speeds. So any attempt to determine "how fast"
a CPU is, even on a 1-5 scale, requires matching against a database of
regexes which would have to be kept updated.
And let's not even get started on Windows.
I think the only sane way to try and find the CPU speed is...
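(To make the maintenance objection concrete, a hypothetical sketch of such a
regex database on Linux -- the model names and 1-5 rankings here are invented
for illustration, and every new CPU generation would force an update:)

    import re

    # Invented speed classes keyed by /proc/cpuinfo "model name" patterns.
    CPU_CLASSES = [
        (re.compile(r"Pentium III"), 1),
        (re.compile(r"Pentium\(R\) 4"), 2),
        (re.compile(r"Athlon.*X2"), 4),
        (re.compile(r"Opteron"), 4),
    ]

    def cpu_class(default=3):
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("model name"):
                    model = line.split(":", 1)[1].strip()
                    for pattern, rank in CPU_CLASSES:
                        if pattern.search(model):
                            return model, rank
                    return model, default  # unknown model falls back
        return "unknown", default

    print(cpu_class())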
On Thu, 3 May 2007, Josh Berkus wrote:
Greg,
I'm not fooled--secretly you and your co-workers laugh at how easy this
is on Solaris and are perfectly happy with how difficult it is on Linux,
right?
Don't I wish. There are issues with getting CPU info on Solaris, too, if you
get off of Sun Hardware...
Greg,
> I'm not fooled--secretly you and your co-workers laugh at how easy this
> is on Solaris and are perfectly happy with how difficult it is on Linux,
> right?
Don't I wish. There are issues with getting CPU info on Solaris, too, if you
get off of Sun Hardware to generic white boxes...
The more I think about this thread, the more I'm convinced of 2 things:
1= Suggesting initial config values is a fundamentally different
exercise than tuning a running DBMS.
This can be handled reasonably well by HW and OS snooping. OTOH,
detailed fine-tuning of a running DBMS does not appear...
On Tue, 1 May 2007, Greg Smith wrote:
On Tue, 1 May 2007, Josh Berkus wrote:
there is no standard way even within Linux to describe CPUs, for example.
Collecting available disk space information is even worse. So I'd like
some help on this portion.
what type of description of the CPUs...
On Tue, 1 May 2007, Josh Berkus wrote:
there is no standard way even within Linux to describe CPUs, for
example. Collecting available disk space information is even worse. So
I'd like some help on this portion.
I'm not fooled--secretly you and your co-workers laugh at how easy this is
on Solaris...
On Mon, 30 Apr 2007, Kevin Hunter wrote:
I recognize that PostgreSQL and MySQL try to address different
problem-areas, but is this one reason why a lot of people with whom I
talk prefer MySQL? Because PostgreSQL is so "slooow" out of the box?
It doesn't help, but there are many other differences...
On Tue, 1 May 2007, Carlos Moreno wrote:
large problem from a slog perspective; there is no standard way even
within Linux to describe CPUs, for example. Collecting available disk
space information is even worse. So I'd like some help on this portion.
Quite likely, naivety follows...
large problem from a slog perspective; there is no standard way even within
Linux to describe CPUs, for example. Collecting available disk space
information is even worse. So I'd like some help on this portion.
Quite likely, naivety follows... But, aren't things like /proc/cpuinfo,
/...
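(The raw reads are indeed easy on Linux -- a minimal sketch, Linux-only,
which is exactly the portability objection raised above:)

    def cpu_models():
        # One "model name" line per logical CPU in /proc/cpuinfo.
        with open("/proc/cpuinfo") as f:
            return [line.split(":", 1)[1].strip()
                    for line in f if line.startswith("model name")]

    def total_ram_kb():
        # The MemTotal line in /proc/meminfo reports kB.
        with open("/proc/meminfo") as f:
            for line in f:
                if line.startswith("MemTotal:"):
                    return int(line.split()[1])
        raise RuntimeError("MemTotal not found")

    models = cpu_models()
    print("%d logical CPU(s): %s" % (len(models), models[0] if models else "?"))
    print("RAM: %.1f MB" % (total_ram_kb() / 1024.0))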
Greg,
> 1) Collect up data about their system (memory, disk layout), find out a
> bit about their apps/workload, and generate a config file based on that.
We could start with this. Where I bogged down is that collecting system
information about several different operating systems ... and in some...
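(As a strawman for step 1, a hypothetical sketch of the generation step in
Python -- the ratios are illustrative rules of thumb, not values anyone in
this thread has agreed on:)

    def suggest_config(ram_mb):
        # Turn a detected RAM figure into a starting postgresql.conf
        # fragment. The 1/4, 3/4 and 1/16 ratios are placeholders.
        return "\n".join([
            "# generated starting point -- review before use",
            "shared_buffers = %dMB" % max(32, ram_mb // 4),
            "effective_cache_size = %dMB" % (ram_mb * 3 // 4),
            "maintenance_work_mem = %dMB" % min(1024, ram_mb // 16),
        ])

    print(suggest_config(ram_mb=4096))  # e.g. a 4GB box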
At 12:18p -0400 on 30 Apr 2007, Craig A. James wrote:
1. Generating a reasonable starting configuration for neophyte users
who have installed Postgres for the first time.
I recognize that PostgreSQL and MySQL try to address different
problem-areas, but is this one reason why a lot of people...
Greg Smith wrote:
If you're going to the trouble of building a tool for offering
configuration advice, it can be wildly more effective if you look inside
the database after it's got data in it, and preferably after it's been
running under load for a while, and make your recommendations based on...
On Fri, 27 Apr 2007, Josh Berkus wrote:
*Everyone* wants this. The problem is that it's very hard code to write,
given the number of variables.
There are lots of variables, and there are at least three major ways to work
on improving someone's system:
1) Collect up data about their system (memory...
Harald Armin Massa wrote:
Carlos,
about your feature proposal: as I learned, nearly all
Perfomance.Configuration can be done by editing the .INI file and
making the Postmaster re-read it.
So, WHY at all should those parameters be guessed at the installation
of the database? Would'nt it be a
Jonah,
Um, shared_buffers is one of the most important initial parameters to
set and it most certainly cannot be set after startup.
Not after startup, correct. But after installation. It is possible to change
postgresql.conf (not .ini; too much Windows on my side, sorry) and restart
the postmaster.
On 4/28/07, Harald Armin Massa <[EMAIL PROTECTED]> wrote:
about your feature proposal: as I learned, nearly all
performance configuration can be done by editing the .INI file and making the
Postmaster re-read it.
Um, shared_buffers is one of the most important initial parameters to
set and it most certainly cannot be set after startup...
Carlos,
about your feature proposal: as I learned, nearly all
performance configuration can be done by editing the .INI file and making the
Postmaster re-read it.
So, WHY at all should those parameters be guessed at the installation of the
database? Wouldn't it be a safer point in time to have...
On Fri, 27 Apr 2007, Josh Berkus wrote:
Dan,
Yes, this is the classic problem. I'm not demanding anyone pick up the
ball and jump on this today, tomorrow, etc. I just think it would be
good for those who *could* make a difference to keep those goals in mind
when they continue. If you have the right mindset, this problem will...
Dan,
> Yes, this is the classic problem. I'm not demanding anyone pick up the
> ball and jump on this today, tomorrow, etc. I just think it would be
> good for those who *could* make a difference to keep those goals in mind
> when they continue. If you have the right mindset, this problem will...
Bill Moran wrote:
In response to Dan Harris <[EMAIL PROTECTED]>:
Why does the user need to manually track max_fsm_pages and max_fsm_relations?
I bet there are many users who have never taken the time to understand what
this means and wonder why performance still stinks after vacuuming the...
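(That kind of auto-tracking can be approximated today by scraping VACUUM
VERBOSE -- a rough sketch; the message wording below is from memory of
8.x-era output and may differ by version, and it assumes psql can connect
with sufficient privileges:)

    import re
    import subprocess

    # VACUUM VERBOSE emits its INFO/DETAIL lines on stderr.
    out = subprocess.run(["psql", "-X", "-q", "-c", "VACUUM VERBOSE;"],
                         capture_output=True, text=True).stderr

    m = re.search(r"(\d+) page slots are required to track all free space",
                  out)
    if m:
        print("suggested max_fsm_pages >= %s" % m.group(1))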
Bill,
> The only one that seems practical (to me) is random_page_cost. The
> others are all configuration options that I (as a DBA) want to be able
> to decide for myself.
Actually, random_page_cost *should* be a constant "4.0" or "3.5", which
represents the approximate ratio of seek/scan speed...
On Fri, Apr 27, 2007 at 02:40:07PM -0400, Kevin Hunter wrote:
> out that many run multiple postmasters or have other uses for the
> machines in question), but perhaps it could send a message (email?)
> along the lines of "Hey, I'm currently doing this many of X
> transactions, against this much...
At 10:36a -0400 on 27 Apr 2007, Tom Lane wrote:
That's been proposed and rejected before, too; the main problem being
that initdb is frequently a layer or two down from the user (eg,
executed by initscripts that can't pass extra arguments through, even
assuming they're being invoked by hand in the...
In response to Dan Harris <[EMAIL PROTECTED]>:
> Michael Stone wrote:
> > On Fri, Apr 27, 2007 at 09:27:49AM -0400, Carlos Moreno wrote:
> >> Notice that the second part of my suggestion covers this --- have
> >> additional switches to initdb
>
> > If the person knows all that, why wouldn't they know to just change
> > the config parameters?
Dan,
> Exactly. What I think would be much more productive is to use the
> great amount of information that PG tracks internally and auto-tune the
> parameters based on it. For instance:
*Everyone* wants this. The problem is that it's very hard code to write,
given the number of variables...
Michael Stone wrote:
On Fri, Apr 27, 2007 at 09:27:49AM -0400, Carlos Moreno wrote:
Notice that the second part of my suggestion covers this --- have
additional switches to initdb
If the person knows all that, why wouldn't they know to just change the
config parameters?
Exactly. What I think would be much more productive is to use the great
amount of information that PG tracks internally and auto-tune the
parameters based on it...
On Apr 27, 2007, at 3:30 PM, Michael Stone wrote:
On Fri, Apr 27, 2007 at 09:27:49AM -0400, Carlos Moreno wrote:
Notice that the second part of my suggestion covers this --- have
additional switches to initdb so that the user can tell it about estimates
on how the DB will be used: estimated size of the DB, estimated percentage
of activity...
On Fri, Apr 27, 2007 at 07:36:52AM -0700, Mark Lewis wrote:
Maybe he's looking for a switch for initdb that would make it
interactive and quiz you about your expected usage -- sort of a magic
auto-configurator wizard doohickey? I could see that sort of thing being
nice for the casual user or newbie who otherwise would have a horribly
mis-tuned database...
Sent: Friday, 27 April 2007 16:37
To: Carlos Moreno
CC: PostgreSQL Performance
Subject: Re: [PERFORM] Feature Request --- was: PostgreSQL Performance
Tuning
Carlos Moreno <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> But the fundamental problem remains that we don't know that much about
>> how the installation will be used.
Maybe he's looking for a switch for initdb that would make it
interactive and quiz you about your expected usage -- sort of a magic
auto-configurator wizard doohickey? I could see that sort of thing being
nice for the casual user or newbie who otherwise would have a horribly
mis-tuned database. The...
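(A hypothetical sketch of such a wizard doohickey -- the questions and
formulas are invented placeholders, not proposed defaults:)

    def ask(prompt, default):
        raw = input("%s [%d]: " % (prompt, default)).strip()
        return int(raw) if raw else default

    def wizard():
        # Quiz the user about expected usage, then print starting settings.
        ram_mb = ask("Total RAM in MB", 2048)
        conns = ask("Expected max concurrent connections", 50)
        print("\n# suggested starting point -- review before use")
        print("max_connections = %d" % conns)
        print("shared_buffers = %dMB" % max(32, ram_mb // 4))
        print("work_mem = %dMB" % max(1, ram_mb // (conns * 8)))

    wizard()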
Carlos Moreno <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> But the fundamental problem remains that we don't know that much about
>> how the installation will be used.
> Notice that the second part of my suggestion covers this --- have
> additional switches to initdb
That's been proposed and rejected before...
On Fri, Apr 27, 2007 at 09:27:49AM -0400, Carlos Moreno wrote:
Notice that the second part of my suggestion covers this --- have
additional switches to initdb so that the user can tell it about estimates
on how the DB will be used: estimated size of the DB, estimated percentage
of activity...
Tom Lane wrote:
Carlos Moreno <[EMAIL PROTECTED]> writes:
... But, wouldn't it make sense that the configure script
determines the amount of physical memory and perhaps even does a HD
speed estimate, to set up defaults that are closer to a
performance-optimized configuration?
No. Most copies of Postgres...
Carlos Moreno <[EMAIL PROTECTED]> writes:
> ... But, wouldn't it make sense that the configure script
> determines the amount of physical memory and perhaps even does a HD
> speed estimate, to set up defaults that are closer to a
> performance-optimized configuration?
No. Most copies of Postgres...
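(Even the "determine physical memory" half is POSIX-only in practice. A
minimal sketch -- these sysconf names exist on Linux but are not guaranteed
everywhere, and Windows needs an entirely different path:)

    import os

    def physical_ram_bytes():
        # POSIX sysconf; SC_PHYS_PAGES is not available on every Unix.
        return os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES")

    print("detected RAM: %.2f GB" % (physical_ram_bytes() / 1024.0 ** 3))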
Steve Crawford wrote:
Have you changed _anything_ from the defaults? The defaults are set so
PG will run on as many installations as practical. They are not set for
performance - that is specific to your equipment, your data, and how you
need to handle the data.
Is this really the sensible thing...