Justin Clift wrote:
> Hi all,
>
> As an end result of all this, do we now have a decent utility that
> end-user admins can run against the same disk/array that their
> PostgreSQL installation is on, and get a reasonably accurate number for
> random page cost?
>
> i.e.:
>
> $ ./get_calc_cost
> Try using random_page_cost = foo
> $
Curt Sampson wrote:
> On Wed, 11 Sep 2002, Mark Kirkwood wrote:
>
> Hm, it appears we've both been working on something similar. However,
> I've just released version 0.2 of randread, which has the following
> features:
Funny how often that happens... (I think it's often worth the effort.)
AMD Athlon 500
512MB RAM
IBM 120GB IDE
Tested with:
BLCKSZ=8192
TESTCYCLES=50
Result:
Collecting sizing information ...
Running random access timing test ...
Running sequential access timing test ...
Running null loop timing test ...
random test: 2541
sequential test: 2455
null timing test:
On Wed, 11 Sep 2002, Mark Kirkwood wrote:
> Yes...and at the risk of being accused of marketing ;-) , that is
> exactly what the 3 programs in my archive do (see previous post for url) :
Hm, it appears we've both been working on something similar. However,
I've just released version 0.2 of randread, which has the following
features:
Tom Lane wrote:
> Perhaps it's time to remind people that what we want to measure
> is the performance seen by a C program issuing write() and read()
> commands, transferring 8K at a time, on a regular Unix filesystem
Yes...and at the risk of being accused of marketing ;-), that is
exactly what the 3 programs in my archive do (see previous post for url):
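For concreteness, here is a minimal sketch of the measurement Tom
describes: a C program issuing plain read() calls, 8K at a time, against
an ordinary file on a regular Unix filesystem. This is an illustration,
not code from any of the tools posted in this thread; the file name,
NREADS, and the timing calls are assumptions, and it presumes a
pre-built test file several times larger than physical RAM.

#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <time.h>

#define BLCKSZ 8192
#define NREADS 5000

static double elapsed(struct timespec a, struct timespec b)
{
    return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
}

int main(int argc, char **argv)
{
    char        buf[BLCKSZ];
    struct timespec t0, t1;
    double      seq, rnd;
    long        i, nblocks;
    int         fd;

    if (argc != 2)
    {
        fprintf(stderr, "usage: %s testfile\n", argv[0]);
        return 1;
    }
    if ((fd = open(argv[1], O_RDONLY)) < 0)
    {
        perror("open");
        return 1;
    }
    nblocks = lseek(fd, 0, SEEK_END) / BLCKSZ;

    /* sequential: NREADS consecutive 8K reads from the start */
    lseek(fd, 0, SEEK_SET);
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (i = 0; i < NREADS; i++)
        if (read(fd, buf, BLCKSZ) != BLCKSZ)
            break;
    clock_gettime(CLOCK_MONOTONIC, &t1);
    seq = elapsed(t0, t1);

    /* random: NREADS 8K reads at pseudo-random block offsets */
    srandom(12345);
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (i = 0; i < NREADS; i++)
        if (pread(fd, buf, BLCKSZ,
                  (off_t) (random() % nblocks) * BLCKSZ) != BLCKSZ)
            break;
    clock_gettime(CLOCK_MONOTONIC, &t1);
    rnd = elapsed(t0, t1);

    close(fd);
    printf("sequential: %.3fs  random: %.3fs  raw ratio: %.2f\n",
           seq, rnd, rnd / seq);
    return 0;
}

The raw ratio still needs the loop-overhead correction discussed later
in the thread, and it will be swamped by kernel caching unless the test
file really is much bigger than RAM.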
On Tue, 10 Sep 2002, Tom Lane wrote:
> Curt Sampson <[EMAIL PROTECTED]> writes:
> > Well, for the sequential reads, the readahead should be triggered
> > even when reading from a raw device.
>
> That strikes me as an unportable assumption.
Not only unportable, but false. :-) NetBSD, at least, does no readahead
when reading from a raw device.
Oliver Elphick wrote:
> Available memory (512M) exceeds the total database size, so sequential
> and random are almost the same for the second and subsequent runs.
>
> Since, in production, I would hope to have all active tables permanently
> in RAM, would there be a case for my using a page cost close to 1?
OK, what you are seeing here is that for your platform the TESTCYCLES
setting isn't large enough; the numbers are too close to measure the
difference. I am going to increase TESTCYCLES from 5k to 10k. That
should provide better numbers.
Bruce Momjian <[EMAIL PROTECTED]> writes:
> I will run it some more tomorrow but clearly we are
> seeing reasonable numbers now.
... which still have no provable relationship to the ratio we need to
measure. See my previous comments to Curt; I don't think you can
possibly get trustworthy results.
Curt Sampson <[EMAIL PROTECTED]> writes:
> Well, for the sequential reads, the readahead should be triggered
> even when reading from a raw device.
That strikes me as an unportable assumption.
Even if true, we can't provide a test mechanism that requires root
access to run it --- and raw-device tests generally would.
On Mon, 2002-09-09 at 07:13, Bruce Momjian wrote:
>
> OK, turns out that the loop for sequential scan ran fewer times and was
> skewing the numbers. I have a new version at:
>
> ftp://candle.pha.pa.us/pub/postgresql/randcost
Latest version:
olly@linda$
random test: 14
sequential test:
I was attempting to measure random page cost a while ago - I used three
programs in this archive:
http://techdocs.postgresql.org/markir/download/benchtool/
It writes a single big file and seems to give more realistic
measurements (like 6 for a Solaris SCSI system and 10 for a Linux IDE
one).
> OK, I have a better version at:
The script is now broken, I get:
Collecting sizing information ...
Running random access timing test ...
Running sequential access timing test ...
Running null loop timing test ...
random test: 14
sequential test: 16
null timing test: 14
random_page_cost = 0.50
Chris
On Tue, 10 Sep 2002, Bruce Momjian wrote:
> Interesting that random time is increasing, while the others were
> stable. I think this may have to do with other system activity at the
> time of the test.
Actually, the random versus sequential time may also be different
depending on how many processes are running at the time.
OK, I have a better version at:
ftp://candle.pha.pa.us/pub/postgresql/randcost
I have added a null loop which does a dd on a single file without
reading any data, and by netting that loop out of the total computation
and increasing the number of tests, I have gotten the following results
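To make the netting arithmetic concrete, this is just the computation
the description above implies, with hypothetical timings and names (not
an excerpt from the randcost script):

#include <stdio.h>

/* Subtract the fixed per-loop overhead (the "null" timing) from both
 * measurements before taking the ratio; this is the netting step
 * described above.  Names and numbers are illustrative only. */
static double netted_cost(double t_random, double t_seq, double t_null)
{
    return (t_random - t_null) / (t_seq - t_null);
}

int main(void)
{
    /* hypothetical timings in arbitrary units */
    printf("random_page_cost = %f\n", netted_cost(30.0, 20.0, 10.0));
    return 0;
}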
On Mon, 9 Sep 2002, Tom Lane wrote:
> Curt Sampson <[EMAIL PROTECTED]> writes:
> > On Mon, 9 Sep 2002, Tom Lane wrote:
> >> ... We are trying to measure the behavior when kernel
> >> caching is not helpful; if the database fits in RAM then you are just
> >> naturally going to get random_page_cost close to 1, because the kernel
> >> will avoid doing any physical I/O at all.
Curt Sampson <[EMAIL PROTECTED]> writes:
> On Mon, 9 Sep 2002, Tom Lane wrote:
>> ... We are trying to measure the behavior when kernel
>> caching is not helpful; if the database fits in RAM then you are just
>> naturally going to get random_page_cost close to 1, because the kernel
>> will avoid doing any physical I/O at all.
On Mon, 9 Sep 2002, Tom Lane wrote:
> Finally, I wouldn't believe the results for a moment if they were taken
> against databases that are not several times the size of physical RAM
> on the test machine, with a total I/O volume also much more than
physical RAM. We are trying to measure the behavior when kernel caching
is not helpful.
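As a sanity check on that requirement, a harness could compare the test
data against physical memory before trusting any numbers. A small
sketch, assuming a system where sysconf exposes physical pages
(_SC_PHYS_PAGES is a common Linux/Solaris extension, not guaranteed by
POSIX):

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* _SC_PHYS_PAGES is a widespread extension, not strict POSIX */
    long long ram = (long long) sysconf(_SC_PHYS_PAGES)
                    * sysconf(_SC_PAGE_SIZE);

    /* "several times the size of physical RAM" -- 4x as an arbitrary
     * margin */
    printf("physical RAM: %lld bytes\n", ram);
    printf("test data should total at least %lld bytes\n", 4 * ram);
    return 0;
}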
Nick Fankhauser wrote:
> Hi again-
>
> I bounced these numbers off of Ray Ontko here at our shop, and he pointed
> out that random page cost is measured in multiples of a sequential page
> fetch. It seems almost impossible that a random fetch would be less
> expensive than a sequential fetch, yet that is what these numbers show.
"Nick Fankhauser" <[EMAIL PROTECTED]> writes:
> I bounced these numbers off of Ray Ontko here at our shop, and he pointed
> out that random page cost is measured in multiples of a sequential page
> fetch. It seems almost impossible that a random fetch would be less
> expensive than a sequential fetch, yet that is what these numbers show.
> Bruce-
>
> With the change in the script that I mentioned to you off-list (which I
> believe just pointed it at our "real world" data), I got the following
> results.
I'm getting an infinite wait on that file; could someone post it to the
list, please?
On Mon, 9 Sep 2002, Bruce Momjian wrote:
>
> OK, turns out that the loop for sequential scan ran fewer times and was
> skewing the numbers. I have a new version at:
>
> ftp://candle.pha.pa.us/pub/postgresql/randcost
... but the values are in line with the results that others have been
getting.
-Nick
On Mon, 2002-09-09 at 07:13, Bruce Momjian wrote:
>
> OK, turns out that the loop for sequential scan ran fewer times and was
> skewing the numbers. I have a new version at:
>
> ftp://candle.pha.pa.us/pub/postgresql/randcost
>
> I get _much_ lower numbers now for random_page_cost.
>
> -
> What do other people get for this value?
>
> Keep in mind that if we increase this value, we will get more
> sequential scans vs. index scans.
With the new script I get 0.929825 on 2 IBM DTLA 5400RPM (80GB) with a 3Ware
6400 Controller (RAID-1)
Best regards,
Mario Weilguni
On Mon, 9 Sep 2002, Bruce Momjian wrote:
> What do other people get for this value?
With your new script, with a 1.5 GHz Athlon, 512 MB RAM, and a nice fast
IBM 7200 RPM IDE disk, I get random_page_cost = 0.93.
> One flaw in this test is that it randomly reads blocks from different
> files
> OK, turns out that the loop for sequential scan ran fewer times and was
> skewing the numbers. I have a new version at:
>
> ftp://candle.pha.pa.us/pub/postgresql/randcost
>
> I get _much_ lower numbers now for random_page_cost.
I got:
random_page_cost = 1.047619
Linux kernel 2.4.18
OK, turns out that the loop for sequential scan ran fewer times and was
skewing the numbers. I have a new version at:
ftp://candle.pha.pa.us/pub/postgresql/randcost
I get _much_ lower numbers now for random_page_cost.
Because we have seen many complaints about sequential vs. index scans, I
wrote a script which computes the value for your OS/hardware
combination.
Under BSD/OS on one SCSI disk, I get a random_page_cost around 60. Our
current postgresql.conf default is 4.
What do other people get for this value?