                ------Sequential Create------ --------Random Create--------
                -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
          files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
             16  6655  16 +++++ +++  5755  12  7259  17 +++++ +++  5550  12
--
Jeff Frost, Owner <[EMAIL PROTECTED]>
On Wed, 14 Nov 2007, Alan Hodgson wrote:
On Tuesday 13 November 2007, Jeff Frost <[EMAIL PROTECTED]> wrote:
Ok, Areca ARC1261ML. Note that results were similar for an 8 drive RAID6
vs 8 drive RAID10, but I don't have those bonnie results any longer.
Version 1.03 -
On Wed, 14 Nov 2007, Merlin Moncure wrote:
On Nov 14, 2007 5:24 PM, Alan Hodgson <[EMAIL PROTECTED]> wrote:
On Tuesday 13 November 2007, Jeff Frost <[EMAIL PROTECTED]> wrote:
Ok, Areca ARC1261ML. Note that results were similar for an 8 drive RAID6
vs 8 drive RAID10, but I don't have those bonnie results any longer.
On Nov 14, 2007, at 9:19 PM, Jeff Frost wrote:
On an 8xRAID10 volume with the smaller Areca controller we were
seeing around 450 seeks/sec.
On our 6 disk raid10 on a 3ware 9550sx I'm able to get about 120 seek
+ reads/sec per process, with an aggregate up to about 500 or so.
ifference.
There's still the hit of performing a CLUSTER, however.
Another option, if you have a relatively small number of topic_ids, is
to break it into separate tables, one for each topic_id.
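One hedged way to realize that layout is inheritance-style partitioning with
constraint exclusion (table and column names here are invented for
illustration; constraint_exclusion needs 8.1 or later):

  CREATE TABLE posts (topic_id int NOT NULL, body text);
  CREATE TABLE posts_topic_1 (CHECK (topic_id = 1)) INHERITS (posts);
  CREATE TABLE posts_topic_2 (CHECK (topic_id = 2)) INHERITS (posts);
  -- with constraint exclusion enabled, this only scans posts_topic_2:
  SET constraint_exclusion = on;
  SELECT * FROM posts WHERE topic_id = 2;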
Regards,
Jeff Davis
The use of a DOMAIN type may be one of those 'special
cases' forcing the planner to perform the union first, then apply the
conditions.
Jeff
On Fri, 2007-12-14 at 11:29 -0800, Jeff Davis wrote:
> "bigtable" has about 60M records, about 2M of which are dead at the time
> of VACUUM. Shared_buffers are about 1GB, and the machine has 4GB of
> memory.
Forgot to mention: version 8.2.4
Regards,
000 lseeks in 10 seconds, would it?
Where am I going wrong? Are many of these lseeks no-ops or something?
Regards,
Jeff Davis
On Fri, 2007-12-14 at 19:04 -0500, Tom Lane wrote:
> Jeff Davis <[EMAIL PROTECTED]> writes:
> > Where am I going wrong? Are many of these lseeks no-ops or something?
>
> They're not supposed to be, but if you only tracked seeks and not
> reads or writes, it's har
may be bad in your case is if many backends commit many transactions
asynchronously and the WAL writer then tries to make those transactions
durable: CFQ might think that the WAL writer is "unfairly" using a lot
of I/O. This is just speculation, though.
Regards,
Jeff Davis
bin
> very well at all. On top of that, the database scale should be bigger
He was referring to the CFQ I/O scheduler. I don't think that will
affect pgbench itself, because it doesn't read/write to disk, right?
Regards,
Jeff Davis
s it). But they can take a lot of processor time to build up
again, especially with localized text.
Regards,
Jeff Davis
your cores. Build your primary key/unique indexes first,
and then after those are built you can start using the database while
the rest of the indexes are building (use "CREATE INDEX CONCURRENTLY").
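As a sketch of that ordering (table and index names are placeholders, not
from the thread):

  -- during the restore window, build only what enforces correctness:
  ALTER TABLE orders ADD PRIMARY KEY (order_id);
  -- once the database is open to users, add the rest without blocking writes:
  CREATE INDEX CONCURRENTLY orders_customer_idx ON orders (customer_id);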
Regards,
Jeff Davis
ot of indexes on localized
text using only one core at a time.
Regards,
Jeff Davis
ore multi-processed, then
> we can possibly add more parallelism to the restore process.
I like this approach more. I think that pg_restore is the right place to
do this, if we can make the options reasonably simple enough to use.
See:
http://archives.postgresql.org/pgsql-hackers/
On Wed, 2008-02-20 at 18:18 +, Matthew wrote:
> On Wed, 20 Feb 2008, Jeff Davis wrote:
> > However, building indexes in parallel would allow better CPU
> > utilization.
>
> We have a process here that dumps a large quantity of data into an empty
> database, much l
he index?
>
> No, what makes you think that? The index won't change at all in the
> above example. The major problem is, as Scott says, that DROP INDEX
> takes exclusive lock on the table so any other sessions will be locked
> out of it for the duration of your test query.
I'm not sure how much of a gain this is, because right now that could
be accomplished by increasing the statistics for that column (and
therefore all of your distinct values would fit in the MCV list). Also
the statistics aren't guaranteed to be perfectly up-to-date, so an
estimate of zero migh
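For reference, a minimal sketch of that alternative (table and column names
are made up, and the target value is only illustrative):

  ALTER TABLE mytable ALTER COLUMN mycol SET STATISTICS 1000;
  ANALYZE mytable (mycol);
  -- the MCV list for mycol can now hold up to 1000 entries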
0 on the server with the 4.59 cost
estimate.
--
Jeff Frost, Owner <[EMAIL PROTECTED]>
Frost Consulting, LLC http://www.frostconsultingllc.com/
Phone: 650-780-7908 FAX: 650-649-1954
right. I probably didn't mention that the slow one has been
analyzed several times. In fact, every time I adjusted the statistics
target for that column I analyzed again, hence the eventually better,
but still inaccurate, estimates toward the bottom of the post.
--
Jeff Frost, Owner <[EMAIL PROTECTED]>
Tom Lane wrote:
Jeff Frost <[EMAIL PROTECTED]> writes:
I have two PostgreSQL servers. One runs 8.3.1, the other 8.3.3. On the 8.3.1
machine, the index scans are being planned with an extremely low cost:
Index Scan using ix_email_entity_thread on email_entity (cost=0.00..4.59
t, killing a process doesn't free shared memory, so it's
just flat out broken.
Regards,
Jeff Davis
memory. And there is already a
system-wide limit on shared memory. So what's the point of such a bad
design?
Regards,
Jeff Davis
index scan with the filter (311ms per loop), the
"player" condition must be very selective, but PostgreSQL doesn't care
because it already thinks that the date range is selective.
Regards,
Jeff Davis
On Fri, 2008-09-19 at 11:25 -0700, Jeff Davis wrote:
> What's the n_distinct for start_time?
Actually, I take that back. Apparently, PostgreSQL can't change "x
BETWEEN y AND y" into "x=y", so PostgreSQL can't use n_distinct at all.
That's your problem.
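A hedged illustration of the rewrite (the table name and date are
placeholders; start_time is from the thread):

  -- the planner does not fold a degenerate range into an equality,
  -- so it cannot apply n_distinct here:
  SELECT count(*) FROM plays WHERE start_time BETWEEN '2008-09-19' AND '2008-09-19';
  -- written as an equality, the n_distinct estimate can be used:
  SELECT count(*) FROM plays WHERE start_time = '2008-09-19';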
3.4. I didn't see
any mention of a fix for this sort of thing in 8.3.4's release notes. I
was wondering if this is a known bug in 8.3.3 (and maybe other 8.3.x
versions) and just didn't make it into the release notes of 8.3.4?
--
Jeff Frost, Owner <[EMAIL PROTECTED]>
Tom Lane wrote:
> Jeff Frost <[EMAIL PROTECTED]> writes:
>
>> I've run across a strange problem with PG 8.3.3 not using indexes on a
>> particular table after building the table during a transaction.
>>
>
> This may be a HOT side-effect ... is
Hmm ...
> I'm not sure if that's sufficient if there are other concurrent
> transactions; but it's certainly necessary.) Another possibility is
> to create the indexes just after data load, before you start updating
> the columns they're on.
>
>
Tha
s that making the indexes before the updates seems to make the
planner happy!
--
Jeff Frost, Owner <[EMAIL PROTECTED]>
Frost Consulting, LLC http://www.frostconsultingllc.com/
Phone: 916-647-6411 FAX: 916-405-4032
Tom Lane wrote:
> Jeff Frost <[EMAIL PROTECTED]> writes:
>
>> On Thu, 30 Oct 2008, Tom Lane wrote:
>>
>>>> Any idea why I don't see it on 8.3.4?
>>>>
>>> I think it's more likely some small difference in your t
On Fri, 31 Oct 2008, Gregory Stark wrote:
Tom Lane <[EMAIL PROTECTED]> writes:
Jeff Frost <[EMAIL PROTECTED]> writes:
Tom Lane wrote:
Huh. That does sound like it's a version-to-version difference.
There's nothing in the CVS log that seems related though. Are you
wil
the "ANALYZE" and just do
"EXPLAIN ...".
Then post those results to the list. These tell us what plans PostgreSQL
is choosing and what it estimates the costs to be. If it's the output of
EXPLAIN ANALYZE, it also runs the query and tells us what the costs
really are.
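For anyone following along, the two forms look like this (the query itself
is just a placeholder):

  -- shows the chosen plan and cost estimates only; the query is not run:
  EXPLAIN SELECT * FROM orders WHERE customer_id = 42;
  -- additionally runs the query and reports actual times and row counts:
  EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42;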
>F
columns? Also, why
does the index on content_id win out over the compound index on
(content_type, content_id)?
"index_blips_on_content_id" btree (content_id)
"index_blips_on_content_type_and_content_id" btree (content_type,
content_id)
--
Jeff Frost, Owner
Fr
On Tue, 13 Jan 2009, Tom Lane wrote:
Jeff Frost writes:
So, my question is, should changing the stats target on the shape column
affect the stats for the content_id and content_type columns?
It would change the size of the sample for the table, which might
improve the accuracy of the stats
On Tue, 13 Jan 2009, Tom Lane wrote:
Jeff Frost writes:
On Tue, 13 Jan 2009, Tom Lane wrote:
It would change the size of the sample for the table, which might
improve the accuracy of the stats. IIRC you'd still get the same number
of histogram entries and most-common-values for the
n look for
opportunities to set hint bits or freeze tuples.
Regards,
Jeff Davis
arge in-memory sort or hash join at the same time.
It is also a bit more than a standard connection pooler, because
multiple connections can
be in the middle of non-read-only transactions on the same backend at
the same time. I
don't think client-based pools allow that.
Jeff
On Fri, Aug 14, 2009 at 4:21 PM, Tom Lane wrote:
> Jeff Janes writes:
>> I apologize if it is bad form to respond to a message that is two
>> months old, but I did not see this question answered elsewhere and
>> thought it would be helpful to have it answered. This my r
of WAL space has a nasty effect on
group commits.
The default value of wal_buffers is low because many older systems
have a low default value for the kernel setting shmmax. On any
decent-sized server, I'd just automatically increase wal_buffers to 1
or 2 MB. It might help a lot, an
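As a sketch, the suggested change in postgresql.conf might look like the
following (the value is illustrative, and wal_buffers only takes effect
after a server restart):

  # postgresql.conf
  wal_buffers = 2MB        # the default in the 8.x era was only 64kB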
On Sun, Aug 23, 2009 at 1:25 PM, Jaime
Casanova wrote:
> On Thu, Aug 20, 2009 at 11:38 PM, Jeff Janes wrote:
>>> -- Forwarded message --
>>> From: Jaime Casanova
>>> To: psql performance list
>>> Date: Wed, 19 Aug 2009 19:25:11 -050
ch dirty data
sitting in the disk cache. Once it reaches that limit, user processes doing
writes start blocking while the kernel flushes stuff on their behalf.
Jeff
On Sat, Aug 29, 2009 at 6:26 AM, Merlin Moncure wrote:
> On Fri, Aug 28, 2009 at 8:19 PM, Jeff Janes wrote:
> >> -- Forwarded message --
> >> From: Joseph S
> >> To: Greg Smith , pgsql-performance@postgresql.org
> >> Date: Fri, 28 Aug 200
>
Why not just spread all your index data over 14 spindles, and do the same
with your table data? I haven't encountered this debate in the pgsql
world, but from the Oracle world it seems to me the "Stripe And Mirror
Everything" people had the better argument than the "separate tables and
indexes" people.
Jeff
or them? If so,
how?
I thought gprof was specific to GNU compilers.
Jeff
erent buffers. I thought that the sequential scan would have to
break stride when it encountered a block already in the buffer. But I
haven't looked at the code; maybe I am over-analogizing to other
software I'm familiar with.
Jeff
nning different queries, and just one of them
really gets a big benefit from the extra memory, but the rest just use
it because they think they have it even though it is only a small
benefit, then bumping it up just for the query that gets a big
improvement could work.
Jeff
On Sat, Sep 26, 2009 at 9:57 AM, Gerhard Wiesinger wrote:
> On Sat, 26 Sep 2009, Greg Smith wrote:
>
>> On Fri, 25 Sep 2009, Jeff Janes wrote:
>>
>>> Does it do this even if the block was already in shared_buffers?
>>
>> Usually not. The buffer ring alg
ce the amount of I/O needed, it would
just shift the I/O from checkpoints to the backends themselves.
It looks like checkpoint_completion_target was introduced in 8.3.0
Cheers,
Jeff
other tables but read-only with respect to the GIS?
Jeff
u_tuple_cost might help
motivate it to make that decision without having to resort to
enable_seqscan. Of course, tuning those settings just to focus on one
query could backfire rather badly.
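If someone does experiment with that, a hedged way to keep the change from
leaking into other queries is to scope it to one transaction (the value and
query are placeholders; the right direction depends on which plan you want
to favor):

  BEGIN;
  SET LOCAL cpu_tuple_cost = 0.05;   -- illustrative value only
  EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42;
  ROLLBACK;                          -- SET LOCAL reverts when the transaction ends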
Jeff
in is done anyway, even if its removed (At least on
>> Postgres 8.3). [...]
>
> How could that be done otherwise? PostgreSQL *must* look at
> country to determine how many rows the left join produces.
Even if country.id is a primary or unique key?
Jeff
major
problems.
Personally, I'd like to see the GIT feature finished as well. When I
have time, I plan to take a look into it.
Regards,
Jeff Davis
only disadvantage is that it's more metadata to manage -- all of the
existing data like dictionaries and stop words, plus this new "common
words". Also, it would mean that almost every match requires RECHECK. It
would be interesting to know how common a word needs to be before i
here might
be a lot of common words you'd like to track.
Perhaps ANALYZE can automatically add the common words above some
frequency threshold to the list?
Regards,
Jeff Davis
second plan you only see
a single index scan with two quals:
Index Cond: ((ftsbody_body_fts @@ to_tsquery('commonterm'::text)) AND (ftsbody_body_fts @@ to_tsquery('spellerror'::text)))
So it's entirely up to GIN how to execute that.
Regards,
Jeff Davis
e real problem is that there's a difference between these
cases at all? I don't see any reason why the first should be more
expensive than the second.
Regards,
Jeff Davis
missing something.
Regards,
Jeff Davis
page and cpu costs toward the other direction.
Is that because the database is mostly cached in memory? If I take the
documented descriptions of the cost parameters at face value, I find
that cpu_tuple_cost should be even lower yet.
Cheers,
Jeff
the database, and how to
delete them out? Do you already know what indexes, if any, should be
on the table?
Jeff
s done but 5ms
the third time? Isn't it the same index scan each time? Or does the
change in queue.status change the plan?
Cheers,
Jeff
r transaction has scattered its transaction id into
various xmin and xmax over many tables, you need an atomic, durable
repository to decide if that id has or has not committed. Maybe clog
fsynced on commit would serve this purpose?
Jeff
urally
compact after the rows at the end are removed.
Regards,
Jeff Davis
What's your opinion on the practical
performance impact? If it doesn't need to be fsync'd, the kernel
probably shouldn't have written it to the disk yet anyway, right (I'm
assuming here that the OS buffer cache is much larger than the disk
write cache)?
Regards,
Jeff
n (again, not intended as scheduler
benchmarks). The server was modified to record a log message every N
page accesses.
Regards,
Jeff Davis
the same (the
default); and the optimizer settings (costs in postgresql.conf) are the
same.
--
Jeff Boes vox 269.226.9550 ext 24
Database Engineer fax 269.349.9076
Nexcerpt, Inc. http://www
e the entire database.
While 64bit is in our long-term future, we're willing to stick with
32bit Linux until 64bit Linux on Itanium/Opteron and 64bit PostgreSQL
"settle in" to proven production-quality.
TIA,
- Jeff
--
Jeff Bohmer
VisionLink,
e'd factor in the penalty from PAE for
now and have the performance boost from moving to 64bit available on
demand. Not having to build another DB server in a year would also
be nice.
FYI, we need stability first and performance second.
Thank you,
- Jeff
--
Jeff Bohmer
VisionLink, Inc.
__
We'll probably
get battery-backed write cache and start out with a 4 disk RAID 10
array. Then add more disks and change to RAID 5 if more read
performance is needed.
Thanks,
- Jeff
--
Jeff Bohmer
VisionLink, Inc.
_
303.402.0170
www.visi
looking at 32- vs. 64-bit just for memory
reasons and not 64-bit apps.
- Jeff
--
Jeff Bohmer
VisionLink, Inc.
_
303.402.0170
www.visionlink.org
_
People. Tools. Change. Community.
3-10/msg01343.php
This post indicates that SATA drives still have problems, but a new
ATA standard might fix things in the future:
http://archives.postgresql.org/pgsql-general/2003-10/msg01395.php
SATA RAID is a good option for a testing server, though.
- Jeff
--
Jeff Bohmer
VisionLink, Inc.
ine until you can do
it easily. You will learn a lot and the experience might prove
invaluable in many ways :-)
Jeff
t sort_mem = 80) then only
that session will use the looney sort_mem.
It would be interesting to know if your machine is swapping.
--
Jeff Trout <[EMAIL PROTECTED]>
http://www.jefftrout.com/
http://www.stuarthamm.net/
e: 18.125 ms
(i.e., apply the function to the data in the temp table), it runs a
whole lot faster! Is this a bug in the optimizer? Or did something
change about the way functional indexes are used?
--
Jeff Boes vox 269.226.9550 ext 24
Database Engineer
quite
difficult, and I don't know how the system will do it.
However, when I create and populate simple tables based on your
description, I get the index scan being the lower estimated cost. So
the tables I built are not sufficient to study the matter in detail.
Cheers,
Jeff
f
new rows are added to each table during overlapping transactions, the
new ads against new docs comparison will not actually happen. You
will probably need to add manual locking to get around this problem.
Cheers,
Jeff
On Sat, Apr 30, 2011 at 3:29 PM, Joel Reymont wrote:
>
> On Apr 30, 2011, at 11:11 PM, Jeff Janes wrote:
>
>> But what exactly are you inserting? The queries you reported below
>> are not the same as the ones you originally described.
>
> I posted the wrong query ini
full table scan rather than an index scan, then you can "set
enable_seqscan=off" to try to force the index scan.
Cheers,
Jeff
On Thu, May 12, 2011 at 8:53 AM, Prodan, Andrei
wrote:
>
> @Jeff: thank you for the clear plan interpretation - but I'm afraid I
> don't really understand the second bit:
> 1) I provided the GOOD plan, so we already know what postgres thinks,
> right? (Later edit: gue
t-spot unless the underlying
table had one as well? (Of course the root block will be a hot-spot,
but certainly not 90% of all requests)
Cheers,
Jeff
nteresting thing to test.
Regards,
Jeff Davis
> Round_Action is the parent table and has no record in the tables, all
> the records are lying in child tables.
Run EXPLAIN ANALYZE on each of those queries, and post the results.
See http://wiki.postgresql.org/wiki/SlowQueryQuestions for a guide on
how to give the necessary information for ot
> ='docmeta1';
>  relpages | reltuples
> ----------+-----------
>     18951 |    329940
What are the sizes of the associated toast tables for the tsvector columns?
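One hedged way to answer that, using the size functions available since 8.1
(the relation name comes from the quoted output above):

  SELECT c.relname,
         pg_size_pretty(pg_relation_size(c.reltoastrelid)) AS toast_size
  FROM pg_class c
  WHERE c.relname = 'docmeta1'
    AND c.reltoastrelid <> 0;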
>
> lawdb=# explain analyze select * from docmeta1 where docvector @@
> plainto_tsquery('english
der to get the tsvector data on which to apply the
operator. So increasing the cost of @@ might very well be the best
immediate solution, but should the cost estimation code be changed to
explicitly take page reads associated with toast into account, so that the
cost of @@ itself can remai
er.
Seeing as it's around a couple ms at minimum, it's probably some kind of
IO latency. You could see that by wrapping the statements in a big
transaction (BEGIN/END block) -- I bet the inserts go very quickly and
the final commit takes longer.
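A sketch of that experiment in psql (the table and values are placeholders):

  \timing on
  BEGIN;
  INSERT INTO events (payload) VALUES ('a');
  INSERT INTO events (payload) VALUES ('b');
  COMMIT;   -- if the guess is right, most of the elapsed time shows up here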
Regards,
Jeff Davis
On 7/12/11, lars wrote:
>
>
> The fact that a select (maybe a big analytical query we'll run) touching
> many rows will update the WAL and wait
> (apparently) for that IO to complete is making a fully cached database
> far less useful.
> I just artificially created this scenario.
I can't think of
uldn't do any HOT logic.
Once the update has been run once, further executions are degenerate
(they replace the updated indexed column with the same value it
already holds). The HOT code detects this and uses a HOT update in
this case despite the apparent update of an indexed column.
Cheers,
ich followed the
>> UPDATE?
>
> It has surprisingly little impact on the SELECT side:
If your fsync is truly fsyncing, it seems like it should have
considerable effect.
Could you strace with both -ttt and -T, with and without synchronous commit?
Cheers,
Jeff
On Fri, Jul 15, 2011 at 5:21 PM, lars wrote:
> On 07/14/2011 04:03 PM, Jeff Janes wrote:
>>
>> On Wed, Jul 13, 2011 at 3:41 PM, lars wrote:
>>>
>>> On 07/13/2011 11:42 AM, Kevin Grittner wrote:
>>>>
>>>> So transactions without an XID *ar
ynchronous commit did not improve performance, it is
not directly relevant for you.
> 2. There should be an option do not perform these compactions if the page is
> only touched by reads.
If the page is only touched by reads, there would be nothing to compact.
Cheers,
Jeff
ases.
...
> If I disable the hashjoin, I get massive improvements on PG 9.x ... as fast
> (or faster) than our PG 8.4 instance.
Can you include buffers in your explain analyze? Also, what is the
plan for the new server when hashjoin is disabled?
What if you lower random_page_cost to 1 (o
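For reference, a hedged sketch of what is being asked for on the 9.x server
(the query and table names are placeholders; the BUFFERS option needs 9.0 or
later, and the SETs are per-session experiments only):

  SET enable_hashjoin = off;       -- capture the non-hashjoin plan
  SET random_page_cost = 1;
  EXPLAIN (ANALYZE, BUFFERS)
    SELECT * FROM orders o JOIN customers c ON c.id = o.customer_id;
  RESET ALL;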
s concurrency of selects at all, only of inserts.
Cheers,
Jeff
Especially if the number of buckets could be enlarged
by concurrently making a new, larger, index and then dropping the old one.
I've only thought about proposing it, because currently I don't have time
to do anything on it if the proposal was well received.
Cheers,
Jeff
ows are basically going to be
in order in the table, which means the btree index accesses are going to
visit the same block over and over again before going to the next block.
With hash indexes, it will jump all over the place.
Cheers,
Jeff
On Sun, Sep 18, 2011 at 7:59 AM, Stefan Keller wrote:
> Merlin and Jeff,
>
> General remark again: It's hard for me to imagine that btree is
> superior for all the issues mentioned before. I still believe in hash
> index for primary keys and certain unique constraints where
es
> overloaded. Since each signature has 9 bits set, you can typically have
> 5-10 hash collisions (a lot of signatures ORed together in each record's
> DBsig) before the false-positive rate of the signature test gets too high.
But why not just distribute those 32/(5 to 10) bits
an, so that you
don't need to inhibit splits during a scan of the same bucket. This
would probably be easy if there were no overflow pages. But the
overflow pages get shuffled in with each other and with the main
bucket page during a split. It would take quite some gymnastics to
get arou
:
>
> SELECT * FROM y WHERE EXISTS (SELECT 1 FROM z WHERE z.y_id = y.id)
>
> ... or vice versa depending on which it thought would be faster.
Although those two queries are semantically the same (I think), a lot of
very similar pairs of queries are not equivalent. For instance, if it
t you
need using a SELECT (or make it a view for convenience)? Fundamentally,
these records aren't changing; you are just trying to interpret them in
the context of the current day. That should be done using a SELECT, not
an UPDATE.
Regards,
Jeff Davis
those ones
anyway, so losing a few more retroactively may not be a big deal.
Cheers,
Jeff
"SELECT 1" might not send the
stats messages right away, and they might end up getting lost.
Regards,
Jeff Davis