Matthew Wakeling <[EMAIL PROTECTED]> writes:
In order to improve the performance, I made the system look ahead in the
source, in groups of a thousand entries, so instead of running:
SELECT * FROM table WHERE field = 'something';
a thousand times, we now run:
SELECT * FROM table WHERE field IN ('something1', 'something2', ...);
On Fri, 19 Sep 2008, Tom Lane wrote:
Your example shows the IN-list as being sorted, but I wonder whether you
actually are sorting the items in practice? If not, you might try that
to improve locality of access to the index.
Well, like I said, we generally don't have the luxury of dictating the...
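
For concreteness, here is a minimal C/libpq sketch of the batching approach
described above, with the IN-list sorted as Tom suggests. The table and column
names ("table", "field"), the connection settings, and the keys are
placeholders, not anything from Matthew's actual system:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <libpq-fe.h>

/* Order keys so the IN-list probes the index with good locality. */
static int cmpstr(const void *a, const void *b)
{
    return strcmp(*(const char *const *)a, *(const char *const *)b);
}

/* Run one IN-list query for a whole batch instead of one query per key. */
static void fetch_batch(PGconn *conn, const char **keys, size_t n)
{
    char sql[65536];                /* generously sized for a sketch */
    size_t len, i;
    PGresult *res;

    qsort(keys, n, sizeof(*keys), cmpstr);

    len = snprintf(sql, sizeof(sql), "SELECT * FROM table WHERE field IN (");
    for (i = 0; i < n; i++)
    {
        char *lit = PQescapeLiteral(conn, keys[i], strlen(keys[i]));

        if (lit == NULL)
            break;
        if (len + strlen(lit) + 3 >= sizeof(sql))
        {
            PQfreemem(lit);         /* sketch: no real overflow handling */
            break;
        }
        len += snprintf(sql + len, sizeof(sql) - len, "%s%s",
                        i > 0 ? ", " : "", lit);
        PQfreemem(lit);
    }
    snprintf(sql + len, sizeof(sql) - len, ")");

    res = PQexec(conn, sql);
    if (PQresultStatus(res) == PGRES_TUPLES_OK)
        printf("batch of %zu keys returned %d rows\n", n, PQntuples(res));
    else
        fprintf(stderr, "%s", PQerrorMessage(conn));
    PQclear(res);
}

int main(void)
{
    const char *keys[] = { "something3", "something1", "something2" };
    PGconn *conn = PQconnectdb("");  /* settings taken from the environment */

    if (PQstatus(conn) == CONNECTION_OK)
        fetch_batch(conn, keys, 3);
    PQfinish(conn);
    return 0;
}
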
Matthew Wakeling <[EMAIL PROTECTED]> writes:
> In order to improve the performance, I made the system look ahead in the
> source, in groups of a thousand entries, so instead of running:
> SELECT * FROM table WHERE field = 'something';
> a thousand times, we now run:
> SELECT * FROM table WHERE field IN ('something1', 'something2', ...);
On Thu, 18 Sep 2008, Greg Smith wrote:
It's just that our system is doing a lot of bitmap index scans at the
moment, and it'd help to be able to spread them across the 16 discs in the
RAID array. It's the bottleneck in our system at the moment.
If you have some specific bitmap index scan test case suggestions you can
pass along (either publicly or in private to me, I can probably help
anonymize them), that's one of the things that has been holding this up.
On Thu, Sep 18, 2008 at 1:30 PM, Greg Smith <[EMAIL PROTECTED]> wrote:
> If you have some specific bitmap index scan test case suggestions you can
> pass along (either publicly or in private to me, I can probably help
> anonymize them), that's one of the things that has been holding this up.
> Alternately...
On Thu, 18 Sep 2008, Matthew Wakeling wrote:
On Tue, 29 Jan 2008, Gregory Stark wrote:
I have a patch which implements it for the low hanging fruit of bitmap
index scans. It does it using an extra trip through the buffer manager,
which is the least invasive approach but not necessarily the best.
On Tue, 29 Jan 2008, Gregory Stark wrote:
So, this is FYI, and also an added encouragement to implement fadvise
prefetching in some form or another. How's that going by the way?
I have a patch which implements it for the low hanging fruit of bitmap index
scans. It does it using an extra trip through the buffer manager, which is the
least invasive approach but not necessarily the best.
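
Stark's patch itself isn't reproduced in the thread, but the system-call
pattern behind fadvise prefetching is easy to sketch in userspace C: the
bitmap yields a sorted list of heap block numbers, you advise the kernel of
all of them first (so a multi-disc array can service many reads concurrently),
then do the normal blocking reads. The file name and block numbers here are
invented for illustration:

#define _XOPEN_SOURCE 600       /* for posix_fadvise() and pread() */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define BLCKSZ 8192

int main(void)
{
    /* Pretend these came from the bitmap, already in block order. */
    unsigned int blocks[] = { 17, 42, 1033, 5020, 99991 };
    size_t nblocks = sizeof(blocks) / sizeof(blocks[0]);
    char page[BLCKSZ];
    size_t i;
    int fd = open("/path/to/relation/segment", O_RDONLY);

    if (fd < 0) { perror("open"); return 1; }

    /* Pass 1: tell the kernel everything we are about to need. */
    for (i = 0; i < nblocks; i++)
        (void) posix_fadvise(fd, (off_t) blocks[i] * BLCKSZ, BLCKSZ,
                             POSIX_FADV_WILLNEED);

    /* Pass 2: the actual reads, which should now mostly hit cache. */
    for (i = 0; i < nblocks; i++)
        if (pread(fd, page, BLCKSZ, (off_t) blocks[i] * BLCKSZ) != BLCKSZ)
            perror("pread");

    close(fd);
    return 0;
}

The real patch works through the buffer manager rather than raw file
descriptors; this only shows why issuing the advice before the reads lets
more than one spindle work at a time.
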
>>> On Tue, Jan 29, 2008 at 10:45 AM, in message
<[EMAIL PROTECTED]>, Gregory Stark <[EMAIL PROTECTED]> wrote:
> Well consider when you've reached n-1 drives; the expected number of requests
> before you hit the 1 idle drive remaining out of n would be n requests. When
> you're at n-2 the expected number of requests before you hit one of the 2
> idle drives remaining would be n/2, and so on.
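
For reference, this line of reasoning is the classic coupon-collector
calculation (assuming requests land uniformly at random across drives): with
k of the n drives still idle, a random request hits one of them with
probability k/n, so the expected wait at that stage is n/k requests. Summing
the stages gives

    E[\text{requests to busy all } n \text{ drives}]
        = \sum_{k=1}^{n} \frac{n}{k} = n H_n \approx n (\ln n + \gamma)

where H_n is the n-th harmonic number. For n = 16 that is about
16 * 3.38, roughly 54 requests, noticeably less than the n(n+1)/2 = 136
from the earlier back-of-the-envelope figure.
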
"Kevin Grittner" <[EMAIL PROTECTED]> writes:
> >>> On Tue, Jan 29, 2008 at 9:52 AM, in message
> <[EMAIL PROTECTED]>, Gregory Stark <[EMAIL PROTECTED]>
> wrote:
>
> > I got this from a back-of-the-envelope calculation which now that I'm trying
> > to reproduce it seems to be wrong. Previously I thought it was n(n+1)/2 or
> > about n^2/2.
>>> On Tue, Jan 29, 2008 at 9:52 AM, in message
<[EMAIL PROTECTED]>, Gregory Stark <[EMAIL PROTECTED]>
wrote:
> I got this from a back-of-the-envelope calculation which now that I'm trying
> to reproduce it seems to be wrong. Previously I thought it was n(n+1)/2 or
> about n^2/2. So at 16 I would...
"Matthew" <[EMAIL PROTECTED]> writes:
> On Tue, 29 Jan 2008, Gregory Stark wrote:
>>> This was with 8192 random requests of size 8192 bytes from an 80GB test
>>> file. Unsorted requests ranged from 1.8 MB/s with no prefetching to 28MB/s
>>> with lots of prefetching. Sorted requests went from 2.4MB/s to 38MB/s.
On Tue, 29 Jan 2008, Gregory Stark wrote:
This was with 8192 random requests of size 8192 bytes from an 80GB test file.
Unsorted requests ranged from 1.8 MB/s with no prefetching to 28MB/s with lots
of prefetching. Sorted requests went from 2.4MB/s to 38MB/s. That's almost
exactly a 16x improvement.
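
The test program isn't posted in the thread, but a minimal harness in the
same spirit would look something like this (the path, request count, and
prefetch window are illustrative parameters, not the originals): it issues
8192 random 8192-byte reads, optionally sorted, keeping a fixed window of
posix_fadvise() requests posted ahead of the read position.

#define _XOPEN_SOURCE 600
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define REQSZ  8192             /* bytes per request */
#define NREQ   8192             /* number of requests */
#define WINDOW 256              /* prefetch requests kept in flight */

static int cmpoff(const void *a, const void *b)
{
    off_t x = *(const off_t *)a, y = *(const off_t *)b;
    return (x > y) - (x < y);
}

int main(int argc, char **argv)
{
    static off_t off[NREQ];
    char buf[REQSZ];
    long nblocks;
    int fd, i;

    if (argc < 2) { fprintf(stderr, "usage: %s file [sorted]\n", argv[0]); return 1; }
    fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    nblocks = lseek(fd, 0, SEEK_END) / REQSZ;
    if (nblocks <= 0) return 1;

    srandom(12345);
    for (i = 0; i < NREQ; i++)
        off[i] = (off_t)(random() % nblocks) * REQSZ;
    if (argc > 2)                           /* "sorted" mode */
        qsort(off, NREQ, sizeof(off[0]), cmpoff);

    for (i = 0; i < WINDOW && i < NREQ; i++)    /* prime the window */
        (void) posix_fadvise(fd, off[i], REQSZ, POSIX_FADV_WILLNEED);

    for (i = 0; i < NREQ; i++) {
        if (i + WINDOW < NREQ)              /* keep the window full */
            (void) posix_fadvise(fd, off[i + WINDOW], REQSZ,
                                 POSIX_FADV_WILLNEED);
        if (pread(fd, buf, REQSZ, off[i]) != REQSZ)
            perror("pread");
    }
    close(fd);
    return 0;
}

Timing it with and without the fadvise calls, and with and without the
qsort, separates the four cases measured above.
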
"Matthew" <[EMAIL PROTECTED]> writes:
> On Tue, 4 Dec 2007, Gregory Stark wrote:
>> FWIW I posted some numbers from a synthetic case to pgsql-hackers
>>
>> http://archives.postgresql.org/pgsql-hackers/2007-12/msg00088.php
>...
> This was with 8192 random requests of size 8192 bytes from an 80GB test file.
Mark Mielke wrote:
> Asynchronous I/O is no more a magic bullet than threading. It requires a
> lot of work to get it right, and if one gets it wrong, it can be slower
> than the regular I/O or single threaded scenarios. Both look sexy on
> paper, neither may be the solution to your problem. Or...
On Tue, 4 Dec 2007, Mark Mielke wrote:
This is bikeshed land, right?
I am only interested in juicy projects that have a hope of success. This
subject does interest me - I am hoping my devil's advocate participation
encourages people to seek a practical implementation that will benefit me.
James Mansion wrote:
Mark Mielke wrote:
PostgreSQL or the kernel should already have the hottest pages in
memory, so the value of doing async I/O is very likely the cooler
pages that are unique to the query. We don't know what the cooler
pages are until we follow the tree down.
I'm assuming that at the time we...
Mark Mielke wrote:
PostgreSQL or the kernel should already have the hottest pages in
memory, so the value of doing async I/O is very likely the cooler
pages that are unique to the query. We don't know what the cooler
pages are until we follow the tree down.
I'm assuming that at the time we...
James Mansion wrote:
Mark Mielke wrote:
At a minimum, this breaks your query into: 1) Preload all the index
pages you will need
Isn't this fairly predictable - the planner has chosen the index so it
will be operating on a bounded subset.
What is the bounded subset? It is bounded by the value.
Mark Mielke wrote:
At a minimum, this breaks your query into: 1) Preload all the index
pages you will need
Isn't this fairly predictable - the planner has chosen the index so it
will be operating on a bounded subset.
2) Scan the index pages you needed
Yes, and AIO helps when you can scan the...
"Matthew" <[EMAIL PROTECTED]> writes:
> On Tue, 4 Dec 2007, Mark Mielke wrote:
>> So much excitement and zeal - refreshing to see. And yet, no numbers! :-)
>
> What sort of numbers did you want to see?
FWIW I posted some numbers from a synthetic case to pgsql-hackers
http://archives.postgresql.org/pgsql-hackers/2007-12/msg00088.php
On Tue, 4 Dec 2007, Mark Mielke wrote:
> So much excitement and zeal - refreshing to see. And yet, no numbers! :-)
What sort of numbers did you want to see?
> You describe a new asynchronous I/O system to map integers to Java
> objects above. Why would you write this? Have you tried BerkeleyDB or...
Matthew wrote:
On Tue, 4 Dec 2007, Gregory Stark wrote:
Fwiw, what made you bring up this topic now? You're the second person in about
two days to bring up precisely this issue and it was an issue I had been
planning to bring up on -hackers as it was.
I only just joined the performance mailing list to talk...
On Tue, 4 Dec 2007, Gregory Stark wrote:
> Fwiw, what made you bring up this topic now? You're the second person in about
> two days to bring up precisely this issue and it was an issue I had been
> planning to bring up on -hackers as it was.
I only just joined the performance mailing list to talk...
Matthew wrote:
On Tue, 4 Dec 2007, Mark Mielke wrote:
The larger the set of requests, the closer the performance will scale to
the number of discs
This assumes that you can know which pages to fetch ahead of time -
which you do not except for sequential read of a single table.
...
James Mansion wrote:
Mark Mielke wrote:
This assumes that you can know which pages to fetch ahead of time -
which you do not except for sequential read of a single table.
Why doesn't it help to issue IO ahead-of-time requests when you are
scanning an index? You can read-ahead in index pages, and submit requests
for...
On Tue, 4 Dec 2007, Mark Mielke wrote:
> > The larger the set of requests, the closer the performance will scale to
> > the number of discs
>
> This assumes that you can know which pages to fetch ahead of time -
> which you do not except for sequential read of a single table.
There are circumstances...
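
One way to put numbers on "the larger the set of requests, the closer the
performance will scale to the number of discs" is a simple balls-in-bins
model (an idealization, assuming requests land on discs uniformly at
random): with q requests outstanding across n discs, the expected number of
discs kept busy is

    E[\text{busy discs}] = n \left( 1 - \left( 1 - \frac{1}{n} \right)^{q} \right)

so a 12-disc array with 12 requests in flight keeps only about 7.8 discs
busy on average, while 48 requests in flight raises that to about 11.8.
Hence prefetching needs to run well ahead of one request per spindle.
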
Mark Mielke wrote:
This assumes that you can know which pages to fetch ahead of time -
which you do not except for sequential read of a single table.
Why doesn't it help to issue IO ahead-of-time requests when you are
scanning an index? You can read-ahead in index pages, and submit requests
for...
[EMAIL PROTECTED] wrote:
So, if you hand requests one by one to the disc, it will almost always be
faster to order them. On the other hand, if you hand a huge long list of
requests to a decent SCSI or SATA-NCQ disc in one go, it will reorder the
reads itself, and it will do it much better than you...
Matthew wrote:
On Tue, 4 Dec 2007, Gregory Stark wrote:
Also, it's true, you need to preread more than 12 blocks to handle a 12-disk
raid. My offhand combinatorics analysis seems to indicate you would expect to
need n(n-1)/2 blocks on average before you've hit all the blocks. There's
little penalty to...
Fwiw, what made you bring up this topic now? You're the second person in about
two days to bring up precisely this issue and it was an issue I had been
planning to bring up on -hackers as it was.
"Matthew" <[EMAIL PROTECTED]> writes:
> Kind of. The system cache is just a method to make it simple...
On Tue, 4 Dec 2007, Gregory Stark wrote:
> Also, it's true, you need to preread more than 12 blocks to handle a 12-disk
> raid. My offhand combinatorics analysis seems to indicate you would expect to
> need n(n-1)/2 blocks on average before you've hit all the blocks. There's
> little penalty to...
On Tue, 4 Dec 2007, Mark Mielke wrote:
> The disk head has less theoretical distance to travel if always moving
> in a single direction instead of randomly seeking back and forth.
True... and false. The head can move pretty quickly, and it also has
rotational latency and settling time to deal with.
"Mark Mielke" <[EMAIL PROTECTED]> writes:
> Matthew wrote:
>
>> I don't think you would have to create a more intelligent table scanning
>> algorithm. What you would need to do is take the results of the index,
>> convert that to a list of page fetches, then pass that list to the OS as
>> an asynchronous...
Matthew wrote:
On Tue, 4 Dec 2007, Mark Mielke wrote:
The bitmap scan method does ordered reads of the table, which can
partially take advantage of sequential reads. Not sure whether bitmap
scan is optimal for your situation or whether your situation would allow
this to be taken advantage of.
On Tue, 4 Dec 2007, Mark Mielke wrote:
> The bitmap scan method does ordered reads of the table, which can
> partially take advantage of sequential reads. Not sure whether bitmap
> scan is optimal for your situation or whether your situation would allow
> this to be taken advantage of.
Bitmap scan...
Matthew wrote:
On Tue, 4 Dec 2007, Gregory Stark wrote:
"Matthew" <[EMAIL PROTECTED]> writes:
Does Postgres issue requests to each random access in turn, waiting for
each one to complete before issuing the next request (in which case the
performance will not exceed that of a single disc), or does it use some
clever asynchronous access method to...
On Tue, 4 Dec 2007, Gregory Stark wrote:
> "Matthew" <[EMAIL PROTECTED]> writes:
>
> > Does Postgres issue requests to each random access in turn, waiting for
> > each one to complete before issuing the next request (in which case the
> > performance will not exceed that of a single disc), or does it use some
> > clever asynchronous access method to...
"Matthew" <[EMAIL PROTECTED]> writes:
> Does Postgres issue requests to each random access in turn, waiting for
> each one to complete before issuing the next request (in which case the
> performance will not exceed that of a single disc), or does it use some
> clever asynchronous access method to...
I have a question about how Postgres makes use of RAID arrays for
performance, because we are considering buying a 12-disc array for
performance reasons. I'm interested in how the performance scales with the
number of discs in the array.
Now, I know that for an OLTP workload (in other words, lots of...