Merlin Moncure wrote:
> > although for postgres the thing that you are doing the fsync on is the WAL
> > log file. that is a single (usually) contiguous file. As such it is very
> > efficient to write large chunks of it. so while you will degrade from the
> > battery-only mode, the fact that the co
On Tue, 2008-09-23 at 13:02 -0400, Bruce Momjian wrote:
> Merlin Moncure wrote:
> > > although for postgres the thing that you are doing the fsync on is the WAL
> > > log file. that is a single (usually) contiguous file. As such it is very
> > > efficient to write large chunks of it. so while you
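
For reference (not part of the original messages), the fsync behaviour being
discussed here is governed by a handful of server settings that can be checked
from psql; which values are sensible depends on the battery-backed cache setup
under discussion, so the sketch below only shows how to look them up:

-- settings relevant to WAL fsync behaviour (values are site-specific)
SHOW fsync;
SHOW wal_sync_method;
SHOW wal_buffers;
SHOW synchronous_commit;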
Hello,
I'm running into performance issues with various queries on a
PostgreSQL database (of books). I'm having trouble understanding the
thinking behind the query planner in this scenario:
http://dpaste.com/hold/80101/
(also attached at bottom of email)
Relation sizes:
dimension_books: 19
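
As a rough sketch (not from the original message), figures like the relation
sizes listed above can be gathered with a query along these lines; the table
names are the ones that appear in the queries in this thread:

SELECT relname,
       pg_size_pretty(pg_relation_size(oid)) AS size,
       reltuples::bigint AS approx_rows
  FROM pg_class
 WHERE relname IN ('dimension_book', 'dimension_library_books');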
On Tue, Sep 23, 2008 at 2:22 PM, Colin Copeland <[EMAIL PROTECTED]> wrote:
> dimension=# EXPLAIN ANALYZE
> SELECT DISTINCT ON ("dimension_book"."call")
> "dimension_book"."title"
> FROM "dimension_book"
> INNER JOIN "dimension_library_books"
> ON ("dimension_book"."i
On Sep 23, 2008, at 6:07 PM, Richard Broersma wrote:
On Tue, Sep 23, 2008 at 2:22 PM, Colin Copeland <[EMAIL PROTECTED]> wrote:
dimension=# EXPLAIN ANALYZE
SELECT DISTINCT ON ("dimension_book"."call")
"dimension_book"."title"
FROM "dimension_book"
INNER JOIN "dimension_lib
On Tue, Sep 23, 2008 at 3:25 PM, Colin Copeland <[EMAIL PROTECTED]> wrote:
>>> dimension=# EXPLAIN ANALYZE
>>> SELECT DISTINCT ON ("dimension_book"."call")
>>> "dimension_book"."title"
>>> FROM "dimension_book"
>>> INNER JOIN "dimension_library_books"
>>> ON ("dimension_b
When displaying information about a user on our
site, I noticed an unreasonable slowdown. The culprit turned out to be
a trivial select, which determines the number of comments left by a
user:
select count(*) from comments where created_by=80 and status=1;
The comments table
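
The usual way to see where that time goes is EXPLAIN ANALYZE on the same
statement (a minimal sketch, using the literal values from the message above):

EXPLAIN ANALYZE
SELECT count(*) FROM comments WHERE created_by = 80 AND status = 1;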
Einars wrote:
> As the query plan shows, it got the
> correct answer, 15888, very fast: the rest of the 13 seconds it's just
> rechecking all the comments for some weird reason.
I'd already written: "If you need the test for status = 1, consider a
partial index" when I noticed your schema definition:
Einars <[EMAIL PROTECTED]> writes:
> When displaying information about a user on our
> site, I noticed an unreasonable slowdown. The culprit turned out to be
> a trivial select, which determines the number of comments left by a
> user:
I don't see anything even slightly wrong h
Craig Ringer <[EMAIL PROTECTED]> writes:
> I'd already written: "If you need the test for status = 1, consider a
> partial index" when I noticed your schema definition:
>> "comments_created_by" btree (created_by) WHERE status = 1
> I find it hard to guess why it's having to recheck the WHERE clau
Tom Lane wrote:
Craig Ringer <[EMAIL PROTECTED]> writes:
I'd already written: "If you need the test for status = 1, consider a
partial index" when I noticed your schema definition:
"comments_created_by" btree (created_by) WHERE status = 1
I find it hard to guess why it's having to recheck
Greg Smith wrote:
> On Mon, 8 Sep 2008, Merlin Moncure wrote:
>
> > What's interesting about the X25 is that they managed to pull the
> > numbers they got out of an MLC flash product. They managed this with a
> > DRAM buffer and the custom controller.
>
> I finally found a good analysis of what's
Einars, you may be able to force a query plan similar to the first one, on
the second data set, if you decrease the random page cost or disable bitmap
scans. If a regular index scan is faster than the bitmap scan with the same
query returning the same results, there may be some benefit that can be
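
A sketch of what that experiment could look like in a psql session; the
specific cost value is only an example, and the query shown is the count
statement from earlier in the thread, not something from this message:

-- try a cheaper random_page_cost, or rule out bitmap scans entirely,
-- then re-run EXPLAIN ANALYZE and compare the plans
SET random_page_cost = 2.0;        -- default is 4.0
SET enable_bitmapscan = off;
EXPLAIN ANALYZE
SELECT count(*) FROM comments WHERE created_by = 80 AND status = 1;
RESET random_page_cost;
RESET enable_bitmapscan;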
A fantastic review on this issue appeared in July:
http://www.alternativerecursion.info/?p=106
And then the same tests on a RiData SSD show that they are the same drive
with the same characteristics:
http://www.alternativerecursion.info/?p=276
Most blamed it on MLC being "slow" to write compared t