mes does it eventually start running faster?
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
I just came across this
post while doing some research and I don't see any responses.
This seems like a mighty interesting example. I'm not sure what's
going on here, but let me guess. I think that the WindowAgg is
forcing some operation - detoasting, maybe? - to happen under the
mate
On Mon, May 13, 2013 at 9:05 AM, Desbiens, Eric wrote:
> I tried also:
>
> select * from pg_class where oid=30352481;
>
> but didn't get anything
You probably want where relfilenode=30352481.
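A minimal sketch of that lookup (only the relfilenode value comes from this thread; the column list is just for readability):

    select oid, relname, relkind, relfilenode
    from pg_class
    where relfilenode = 30352481;

The oid and relfilenode start out equal but diverge once the relation has been rewritten (TRUNCATE, CLUSTER, VACUUM FULL, and so on), which is why the lookup by oid can come up empty.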
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On Mon, May 13, 2013 at 4:33 PM, Tom Lane wrote:
> Robert Haas writes:
>> On Mon, May 13, 2013 at 4:14 PM, Tom Lane wrote:
>>> You know, of course, that the join size estimate isn't arrived at that
>>> way. Still, this point does make it seem more like a planner
On Mon, May 13, 2013 at 4:14 PM, Tom Lane wrote:
> Robert Haas writes:
>> The planner is estimating this the outer side of this nested loop will
>> produce 33 rows and that the inner side will produce 1. One would
>> assume that the row estimate for the join product couldn
ndex Scan on groups2 (cost=0.00..5.33 rows=1 width=0) (actual
> time=0.006..0.006 rows=0 loops=22)
>
>Index Cond: (((type)::text = (acl_2.principaltype)::text) AND (instance =
> 999028) AND ((domain)::text
le. Of course, in those cases we
won't be able to freeze, either.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
> "Recheck Cond: (thing_id = t1.thing_id)"
> "-> Bitmap Index Scan on t5_c2_idx (cost=0.00..434.46 rows=12111
> width=0) (actual time=4.372..4.372 rows=12038 loops=1)"
> " Index Cond: (thing_id = t1.thi
fma is high or low won't matter
much: it's definitely less than vacuum_freeze_table_age.
Basically, I would guess that both the costs and the benefits of
changing this are pretty small. It would be nice to hear from someone
who has tried it, though.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
for an arbitrary non-MCV, while >= acts like > and
says, hey, there's nothing beyond the end.
Shouldn't there be a separate estimator for scalarlesel? Or should
the existing estimator be adjusted to handle the two cases
differently?
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
hat it's OK for peak memory utilization to keep hitting
5x or more.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On Sun, Sep 2, 2012 at 5:39 PM, Jeff Janes wrote:
> On Thu, Aug 30, 2012 at 8:17 PM, Tom Lane wrote:
>> Robert Haas writes:
>>> On Thu, Aug 30, 2012 at 4:51 PM, Tom Lane wrote:
>>>> Bruce Momjian writes:
>>>>> On Thu, May 31, 2012 at 09:20:43AM +09
r side instead.
But only for 9.2, right? So people running back branches are still screwed.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
ats required here for Metadata for improving performance ?
> (Table structures remain same)
>
> Any more on this which can help to reduce IO without affecting major
> performance
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
p file.
You might want to look at pg_stats.correlation for the clustered
column - that's often a good way to know whether things are ordered
the way you expect, and it's updated every time the table is analyzed.
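For example, something along these lines (table and column names are placeholders, not from this thread):

    select schemaname, tablename, attname, correlation
    from pg_stats
    where tablename = 'my_table'
      and attname = 'my_clustered_column';

A correlation close to 1.0 (or -1.0) means the physical row order still tracks the logical column order; values near zero suggest the table has drifted since the last CLUSTER.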
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
ility to generate that
kind of plan, so it would be interesting to see what happens if you
try this on 9.2beta.
Older releases should be able to consider a nested loop join with
l_userqueue as the inner rel, driving an index scan over a_activity,
and then performing the join to e_usersessions a
column "SomeTextColumn" my new string 'New string to add' in the same manner
> if I would been used the following:
> Insert into "SomeTable"("SomeTextColumn") values
> (CombineString(ARRAY['abba', 'queen', 'New string
On Thu, May 31, 2012 at 10:50 AM, Tom Lane wrote:
> Robert Haas writes:
>> On Thu, May 31, 2012 at 10:31 AM, Tom Lane wrote:
>>> I'm not; Jeff Janes is. But you shouldn't be holding your breath
>>> anyway, since it's 9.3 material at this point.
>
tement or so? I
guess we'd need to see how much that erodes the benefit, but we've
certainly done back-branch rearrangements in pg_dump in the past to
fix various kinds of issues, and this is pretty non-invasive.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
't quite the fastest one I've seen, but it's close.
What hardware is Oracle running on?
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
d out what's causing PostgreSQL
to context-switch out - i.e. block - and therefore find out what lock
and call path is contended. LWLocks don't show up in pg_locks, so you
can't troubleshoot this sort of contention that way.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
p a UNION ALL member subquery if it contains a join. While that could
* be fixed with a more complex data structure, at present there's not much
* point because no improvement in the plan could result.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
.a_id = a_id and X.B_id = b_id;
You should probably test this in your environment, but I'd expect the
view to be better. Wrapping logic inside PL/pgsql functions
needlessly rarely turns out to be a win.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
' and fewer with
> 'AccessShareLock'. I have no idea what I should be looking for here.
If you have lock contention, you'll see locks with granted='f', at
least from time to time. Those are the ones you want to worry about.
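A quick way to look for those, as a rough sketch:

    select locktype, relation::regclass as relation, mode, pid, granted
    from pg_locks
    where not granted;
    -- the pid column can be matched against pg_stat_activity
    -- to see which query is waiting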
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
since it was
put in.
Also:
> And one more thing that seems a bit strange - after a 1-minute run, we would
> expect to see 1700 Tuples Updated (100*17), but instead we see 1700 Tuples
> Inserted (and no deletes).
I don't think TOAST ever updates chunks in place. It just insert
p if you create the index using COLLATE "C"? Assuming
you're on 9.1.x...
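If you want to try that, the syntax would look roughly like this (index, table, and column names are placeholders):

    create index my_table_col_c_idx
        on my_table (my_text_col collate "C");

A "C"-collated index sorts by byte order, which avoids the locale-aware comparison overhead.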
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
http://archives.postgresql.org/message-id/CANxtv6XiuiqEkXRJU2vk=xkafxrlep7uvhgr-xmcyjgqz29...@mail.gmail.com
The problem seems to be that the cost estimator doesn't know that
detoasting is expensive.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
g the same SQL in multiple places (causing
multiple evaluation), or even if you manage to avoid that, the system
can inline things in multiple places and produce the same effect.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
lt which is fully computed only during the
> index creation.
Look for rows where starelid is equal to the OID of the index.
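Roughly like this, assuming a hypothetical index name:

    select *
    from pg_statistic
    where starelid = 'my_expression_index'::regclass;

    -- or, in human-readable form, via the pg_stats view:
    select *
    from pg_stats
    where tablename = 'my_expression_index';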
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
regression. It
might be worth compiling with POSIX_FADV_DONTNEED undefined and see
whether that changes anything.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
typically
indicate LWLock contention, but with a stock build it's pretty well
impossible to figure out which LWLock is being contended; compiling
with LWLOCK_STATS could tell you that.
Shooting from the hip, the first thing that comes to mind is that the
index isn't fully cached in shared_buffers,
ntime: 257383.922 ms
Hmm. In the first (good) plan, the planner is using a parameterized
nestloop. So for each row it finds in dlr, it looks up
dlr.id_sms_messaggio and passes that down to the index scans, which
then pull out just the rows where sms.id takes that specific value.
In the second
that change.
You wouldn't need to bounce the production server to test that. You
could just use SET in the session you were testing from.
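The snippet doesn't show which setting is under discussion, but the pattern is the same for any of them; random_page_cost below is only a placeholder:

    set random_page_cost = 2.0;                        -- placeholder setting and value
    explain analyze select count(*) from my_table;     -- re-run the query under test
    reset random_page_cost;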
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
y willing to send all the critically
important information you need to solve the problem to the bit bucket,
as in your example. If you knew WHY it was having trouble running
pg_ctl, you would probably be able to fix it easily, but since
everything's been redirected to /dev/null, you can't
isable autovacuum for
> my application. I am planning to run vacuum command daily on that small
> table which has frequent updates.
Sounds like a bad plan.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
oops=1)
> Index Cond: (fts @@ '( ( ( ( ( ''dexter'':A |
> ''season'':A ) | ''seri'':A ) | ''декстер'':A ) | ''качество'':A ) |
> ''сезон'':A ) | '
that small
> segment, as soon as the partnum increments when walking the index, the
> buffer zeros out again for next sort group.
This has come up before and seems worthwhile, but nobody's implemented it yet.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
ault value. That
doesn't seem like a good idea. I would start with the default and
tune down.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
nly scans (which
will be in 9.2) are going to help you much - it might be faster, but
it's definitely not going to be anything like instantaneous.
On the flip side, if I *am* misreading the output and the number of
rows needed to compute the aggregate is actually some very small
number, then y
o happen asynchronously for a while, but
then when you get too much dirty data in the cache, it starts
blocking.
The only thing I'm fuzzy about is why it's locking so many rows, given
that the output says rows=1.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
125 million
rows, so that's going to take some time no matter how you slice it.
Unless I'm misreading this, it's actually taking only about 4
microseconds per row, which does not obviously suck.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On Thu, Nov 3, 2011 at 11:31 AM, Rodrigo Gonzalez
wrote:
> El 03/11/11 11:42, Robert Haas escribió:
>
> On Fri, Oct 28, 2011 at 9:39 AM, CS DBA wrote:
>
> No parameters, one of them looks like this:
>
> [ code snippet ]
>
> It's hard to believe this is the r
and status at the same
time.
It might be worth using temporary tables here - factor out sections of
the query that are referenced multiple times, like the join between
sales_order_items and invoices, and create a temporary table. ANALYZE
it, and then use it to run the main query.
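As a rough sketch (the join key and the column list are assumptions, not taken from the original query):

    create temporary table soi_inv as
    select soi.*, inv.invoice_date
    from sales_order_items soi
    join invoices inv on inv.invoice_id = soi.invoice_id;

    analyze soi_inv;
    -- then reference soi_inv from the main query instead of repeating the join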
--
Robert Haas
On Fri, Oct 28, 2011 at 9:39 AM, CS DBA wrote:
> No parameters, one of them looks like this:
>
> [ code snippet ]
It's hard to believe this is the real code, because SELECT without
INTO will bomb out inside a PL/pgsql function, won't it?
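For reference, a minimal sketch of what PL/pgsql expects (some_table is a placeholder):

    create or replace function demo_fn() returns void as $$
    declare
        n bigint;
    begin
        -- a bare SELECT has no destination and raises
        -- "query has no destination for result data"
        select count(*) into n from some_table;   -- capture the result, or
        perform count(*) from some_table;         -- run it and discard the result
    end;
    $$ language plpgsql;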
--
Robert Haas
nto one big query. But as the OP says,
that is decidedly less than ideal from a code-beauty-and-maintenance
point of view: people WANT to be able to use syntactic sugar and still
get good performance. Allowing for the insertion of optimization
fences is good and important but it needs to be user
On Mon, Oct 31, 2011 at 2:34 PM, Claudio Freire wrote:
> On Mon, Oct 31, 2011 at 3:24 PM, Robert Haas wrote:
>> Sure it does:
>>
>> rhaas=# create table baz (a bool, b int, c text, primary key (a, b));
>> NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index
On Mon, Oct 31, 2011 at 1:52 PM, Claudio Freire wrote:
> On Mon, Oct 31, 2011 at 2:08 PM, Robert Haas wrote:
>>> Multicolumn indices on (c1, c2, ..., cn) can only be used on where
>>> clauses involving c1..ck with k>
>> I don't think that's true. I
So it seems that, when the fk field was unchanged, nothing was done
that required accessing table a; otherwise, the access exclusive lock
held by the other session would have blocked it.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On Tue, Oct 11, 2011 at 8:52 PM, Claudio Freire wrote:
> On Tue, Oct 11, 2011 at 5:16 PM, Carlo Stonebanks
> wrote:
>> Question 2) Regardless of the answer to Question 1 - if another_id is not
>> guaranteed to be unique, whereas pkey_id is – is there any value to changing
>> the order of declaration
hanks!
It's written in Perl, so I would think you could get it to work. But
if not, you can always extract the big ol' query that it runs from the
script and run it some other way.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
ee, etc. to see what the system is actually doing while
this is running. I'd start the query up, let it run for 10 minutes or so,
and then see whether the machine is CPU-bound or I/O-bound, and whether the
amount of swap in use is growing.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
e past the end. So any app like this is going to have
> sucky performance, and kluging the corner case isn't going to help much.
It looks to me like it took 22.3 seconds to do the nested loop and
then 22.4 seconds to do the nested loop plus the sort. So the sort
itself only took 100 ms
On Oct 24, 2011, at 8:16 AM, Venkat Balaji wrote:
> Thanks Greg !
>
> Sorry for delayed response.
>
> We are actually waiting to change the checkpoint_segments in our production
> systems (waiting for the downtime).
That setting can be changed without downtime.
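checkpoint_segments only needs a configuration reload, so after editing postgresql.conf something like this is enough:

    select pg_reload_conf();       -- or: pg_ctl reload -D $PGDATA
    show checkpoint_segments;      -- confirm the new value took effect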
...Robert
indexable. So when you concatenate two columns from
different tables - as you say - not indexable.
In general, OR-based conditions that cross table boundaries tend to be
expensive, because they have to be applied only after performing the
join. You can't know for sure looking only at a row from one t
over
it afterwards to split it up".
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
>
> I don't think that is the case. Preferring a known value, where one exists,
> would provide a better estimate of the actual range of the data. Indeed, the
> var_eq_non_const in the same file (used by the nested loop join estimator)
> does essentially that.
I'm
ds vs. anything else. Large records just get broken up
into small records, under the hood. At any rate, your email is a
little vague about exactly what the problem is. If you provide some
more detail you might get more help.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On Tue, Jul 19, 2011 at 5:06 PM, Stefan Keller wrote:
> 2011/7/19 Robert Haas :
>> Putting the elements in order wouldn't really help, would it? I mean,
>> you'd need some kind of an index inside the hstore... which there
>> isn't.
>
> Sorry for my
On Wed, Jul 20, 2011 at 11:57 AM, Jose Ildefonso Camargo Tolosa
wrote:
> On Tue, Jul 19, 2011 at 3:57 PM, Robert Haas wrote:
>> On Sun, Jun 19, 2011 at 10:19 PM, Jose Ildefonso Camargo Tolosa
>> wrote:
>> > So, the question is, if I were to store 8TB worth of data into l
that.
>
> What sorts of details would you like? The row count for the Result
> line is approximately correct -- the stats for all tables are up to
> date (the tables never change after import). statistics is set at 100
> currently.
The query and the full EXPLAIN output (attached a
's top
priority, because it would be a lot of work for the amount of benefit
you'd get. There's an easy workaround: store the files in the
filesystem, and a path to those files in the database.
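A minimal sketch of that layout (table and column names are illustrative):

    create table stored_files (
        id        bigserial primary key,
        file_path text not null,              -- e.g. '/srv/files/ab/cd/123456.bin'
        byte_size bigint,
        added_at  timestamptz not null default now()
    );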
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
> formerly followed by the core geometric data types!
>
> Why names? Why not rather 'operators' or 'functions'?
It's referring to the operator names.
> What does this "reversed from the convention" mean concretely?
That comment could be a little more clear
On Wed, May 11, 2011 at 4:47 PM, Lucas Madar wrote:
> On 05/11/2011 09:38 AM, Robert Haas wrote:
>>>
>>> However, if I disable seqscan (set enable_seqscan=false), I get the
>>> following plan:
>>>
>>> QUERY PLAN
>>>
t.
In 9.0+ you can do ALTER TABLE .. ALTER COLUMN .. SET (n_distinct = ...);
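Concretely, something like this (table and column names are placeholders):

    alter table my_table alter column my_col set (n_distinct = -0.05);
    -- negative values mean "this fraction of the rows is distinct";
    -- positive values are an absolute count
    analyze my_table;   -- the override is used from the next ANALYZE onward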
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
gorithm for that is
a suitable topic for a PhD thesis. :-(
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
d shouldn't it?
>
> In a gross mode, when hash joins go to disk, they perform very poorly.
> Maybe the planner should take that into account.
It does.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
n I do to speed them up?
> 3. What can I do to enable Postgres to use a faster type of join?
IME, hash joins usually are much faster than any other type. There's
not enough information in your email to speculate as to what might be
going wrong in your particular case, though.
--
Robert
-> Bitmap Index Scan on "message_box_Idx"
(cost=0.00..536.94 rows=28858 width=0) (actual time=1.743..1.743
rows=20903 loops=113)
Index Cond: (box_id = b.id)
- Total runtime: 431520.186 ms
+ Total runtime: 6940.369 ms
That's pretty odd.
mbering correctly, but ISTM
that you've been uncomfortable with BOTH ext4 and XFS prior to RHEL6;
but OK with both beginning with RHEL6.
Also, any tips on mount options for XFS/ext4/ext3?
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
s
> my primary way to spread this sort of information.
Hmm. That's rather unfortunate. +1 for revisiting that topic, if you
have the energy for it.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
ntation, how that's going to shake out, seems to me to be an
exercise in unjustified optimism of the first order.
Sorry to sound grumpy and pessimistic, but I really think we're
letting our enthusiasm get way, way ahead of the evidence.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
portion to the amount of useful data in them?
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
tes, but with 8.4
> that's not possible.
What feature are you referring to here? Checkpoint spreading was
added in 8.3, IIRC.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
m no explain
> expert, so someone please correct me if I'm wrong.
You are right.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
fixed by MergeAppend in 9.1. You might want to try
9.1beta1 and see if that works better for you.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On Tue, May 17, 2011 at 11:10 AM, wrote:
> For Hstore I'm using a GIST index.
I would have thought that GIN would be a better choice for this workload.
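For comparison, the two index definitions look like this (table and column names are placeholders):

    create index my_table_attrs_gin  on my_table using gin  (attrs);
    create index my_table_attrs_gist on my_table using gist (attrs);

GIN indexes on hstore are generally slower to build and update than GiST, but faster to search.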
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
y, not just libpq.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On Thu, May 19, 2011 at 2:39 PM, Jim Nasby wrote:
> On May 19, 2011, at 9:53 AM, Robert Haas wrote:
>> On Wed, May 18, 2011 at 11:00 PM, Greg Smith wrote:
>>> Jim Nasby wrote:
>>>> I think the challenge there would be how to define the scope of the
>>>>
issing something here, but it seems like that would be
nightmarishly slow. Every time you read a tuple, you'd have to look
at every column of the tuple and determine which histogram bucket it
was in (or, presumably, which MCV it is, since those aren't included
in working out the histogr
ut
typically, the most recent data will be what is most actively
modified, and the older data will be relatively more (though not
completely) static, and less frequently accessed. Such examples are
common in many real-world applications.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
be considered also.
Your idea of dividing things by access frequency is another good
thought.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
.postgresql.org/wiki/Guide_to_reporting_problems
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
I
> have the advice earlier stating that read and write are treated same with
> Fio drives.
I would think more like 0.1 than 1.0.
> Any suggestions on configuration changes to have read-only hot standby
> faster on READs.
effective_io_concurrency?
Adjust OS readahead?
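As a starting point, and with the caveat that the value below is only a placeholder:

    set effective_io_concurrency = 8;   -- mainly benefits bitmap heap scans
    -- OS readahead is tuned outside the database, e.g. on Linux:
    --   blockdev --setra 4096 /dev/sdX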
--
Robert
sts.
>
> So I would assume that there is still a coefficient difference between
> seeks and scans in memory until proven otherwise.
Well, anything's possible. But I wonder whether the effects you are
describing might result from a reduction in the *number* of pages
acc
believed,
but...
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
even if the pages were consecutive on disk, there's no reason
to suppose they would be so in memory, and we certainly wouldn't know
one way or the other at planning time. But I agree we should add a
cached_page_cost as part of all this.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
On Fri, May 13, 2011 at 3:20 PM, Tom Lane wrote:
> Robert Haas writes:
>> On Tue, Apr 26, 2011 at 9:04 PM, Merlin Moncure wrote:
>>> The very first thing to check is effective_cache_size and to set it to
>>> a reasonable value.
>
>> Actually, effective_cache_
on how this works,
and to what extent it's a problem in practice, but I am fairly sure it
can happen.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
't be. Allow the administrator to
override the result on a per-relation basis. It's difficult to
imagine a situation where the planner should assume that a relation
with only handful of pages isn't going to be cached. Even if it
isn't, as soon as someone begins accessing it,
effective_cache_size to 7gb.
Sounds like a reasonable starting point. You could certainly fiddle
around a bit - especially with shared_buffers - to see if some other
setting works better, but that should be in the ballpark.
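In configuration terms that would be roughly:

    show shared_buffers;                -- changing this one requires a restart
    set effective_cache_size = '7GB';   -- can be tried per-session before
                                        -- making it permanent in postgresql.conf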
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
here...
I'm a bit puzzled by all of this because the logs you posted seem to
reflect a system under very light load. Each checkpoint is writing no
more than 4% of shared_buffers and the sync phases are generally
completing in less than one second. I don't see why that would be
causin
ll be more of a support.
What's going wrong for you?
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
f commit
f4e4b3274317d9ce30de7e7e5b04dece7c4e1791.
> Does the hash cost estimator bias towards smaller hashes due to hash probe
> cost increasing with hash size due to processor caching effects? Its not
> quite O(n) due to caching effects.
I don't think we account for that (and I
op
-> Seq Scan on objects
-> Append
-> Index Scan using xxx_pkey on itemXX
-> Index Scan using yyy_pkey on itemYY
-> Index Scan using zzz_pkey on itemZZ
> But it seems to think doing
> a sequential scan on the *empty* item table is excessively expensive in this
> case.
ettings for us?
If that doesn't cause the planner to use the indexes, then I'd be
suspicious that there is something wrong with those indexes that makes
the planner think it *can't* use them. It would be helpful to see the
EXPLAIN output after SET enable_seqscan=off.
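That is, something along these lines (the query itself is a placeholder):

    set enable_seqscan = off;
    explain analyze
    select count(*) from my_table where my_col = 42;
    reset enable_seqscan;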
--
Robert Haas
[ woops, accidentally replied off-list, trying again ]
On Tue, May 10, 2011 at 1:47 PM, Maria L. Wilson
wrote:
> thanks for taking a look at this and it's never too late!!
>
> I've tried bumping up work_mem and did not see any improvements -
> All the indexes do exist that you asked see
f has 32G RAM with the following set in the postgres conf
> shared_buffers = 3GB
> work_mem = 64MB
> maintenance_work_mem = 512MB
> wal_buffers = 6MB
>
> let me know if I've forgotten anything! thanks a bunch!!
Late response here, but...
Is there an index on invsensor (se
sses the filesystem cache, you're
going to have only 8GB of cache, instead of some much larger amount.
More cache = better performance.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On Apr 27, 2011, at 11:01 PM, Claudio Freire wrote:
> The patch may be simple, the testing not so much. I know that.
>
> What tools do we have to do that testing? There are lots, and all
> imply a lot of work. Is that work worth the trouble? Because if it
> is... why not work?
>
> I would propos
On Apr 29, 2011, at 10:25 AM, James Mansion
wrote:
> Robert Haas wrote:
>> The server can and does measure hit rates for the PG buffer pool, but to my
>> knowledge there is no clear-cut way for PG to know whether read() is
>> satisfied from the OS cache or a drive