On Wed, May 29, 2013 at 6:47 PM, Alan Hodgson wrote:
> On Wednesday, May 29, 2013 06:25:21 PM Daniele Varrazzo wrote:
>> My solution has been to become pg_repack maintainer. YMMV. Just don't
>> expect vacuum to reduce the indexes' size: it doesn't.
>
> It's not …
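VACUUM makes dead index entries reusable but never shrinks the index file itself; only a rebuild returns space to the OS. A minimal sketch of the manual rebuild under discussion, with hypothetical index and table names (CREATE INDEX CONCURRENTLY is available in 9.1):

    -- build a compact replacement without blocking writes
    create index concurrently foo_account_id_idx_new on foo (account_id);
    -- swap it in; drop index takes a brief exclusive lock on the table
    drop index foo_account_id_idx;
    alter index foo_account_id_idx_new rename to foo_account_id_idx;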
On Wed, 2013-05-29 at 19:12 +0200, Jesper Krogh wrote:
> Second, if the new index is significantly smaller than the old one, I
> suggest that you try to crank up the autovacuum daemon instead of
> blindly dropping and creating indexes; this will help to mitigate the
> bloat you're seeing accumulating …
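A hedged illustration of "cranking up" autovacuum for the one hot table rather than globally; the table name and thresholds are made up:

    -- vacuum/analyze after ~10000 dead rows instead of 20% of the table,
    -- so bloat in the partial indexes is trimmed before it hurts
    alter table big set (
        autovacuum_vacuum_scale_factor = 0.0,
        autovacuum_vacuum_threshold = 10000,
        autovacuum_analyze_scale_factor = 0.0,
        autovacuum_analyze_threshold = 10000
    );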
On Mon, Jul 23, 2012 at 4:07 PM, Tom Lane wrote:
> Daniele Varrazzo writes:
>> Since "big" was sharded, the query plan results in something like:
>> [ugly]
>
> [ squint... ] 9.1 certainly ought to be able to find a smarter plan for
> such a case. For instance, …
On Mon, Jul 23, 2012 at 11:03 AM, Daniele Varrazzo wrote:
> 1. Can we fix the queries to work around this problem?
As a stop-gap measure I've defined a get_big(id) function and am using it
to pull in the details we're interested in from the "big" table:
create function …
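A plausible sketch of such a stop-gap function, assuming big.id is the primary key; the exact signature is guessed:

    -- one primary-key lookup per id, sidestepping the planner's poor
    -- estimate for the join against the sharded "big" table
    create function get_big(p_id bigint) returns big as $$
        select * from big where id = p_id;
    $$ language sql stable;

Used as: select small.something, (get_big(small.big_id)).anything from small;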
Hello,
We are using Postgres 9.1.4. We are struggling with a class of queries
that became impossible to run after sharding a large table. Everything
like:
select small.something, big.anything
from small join big on small.big_id = big.id;
and variations such as "select * from big where id in …"
Hello,
We have a table with about 60M records, almost all of which are in one of
two statuses ('done', 'failed'), and a few of them, usually < 1000, in
various transient statuses. We also have a partial index covering the
transient items: where status not in ('done', 'failed'). Stats are
about right …
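For reference, such a partial index would look roughly like this; table and column names are assumed:

    -- only the ~1000 in-flight rows enter the index; the ~60M
    -- 'done'/'failed' rows are excluded by the predicate
    create index foo_transient_idx on foo (status)
        where status not in ('done', 'failed');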
Hello,
we are experiencing some performance degradation on a database whose
main table is approaching 100M records. Together with the slowness of
the queries I notice these symptoms:
- size bloat of partial indexes
- very bad planning estimates
I'd appreciate any hint to get a better …
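One way to put numbers on both symptoms with stock functions; relation names are hypothetical:

    -- track the on-disk size of a suspect partial index over time
    select pg_size_pretty(pg_relation_size('foo_transient_idx'));
    -- compare estimated vs. actual row counts for a suspect query
    explain analyze
    select * from foo where status not in ('done', 'failed');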
Tom Lane wrote:
Daniele Varrazzo <[EMAIL PROTECTED]> writes:
In my problem I had 2 tables: a small one (accounts) and a large one (foo).
The way the query is written doesn't allow the stats from the large table to
be used at all, unless the records from the small table are fetched …
Francisco Reyes writes:
Daniele Varrazzo writes:
I suspect the foo.account_id statistical data are not used at all in the
query: the query planner can only estimate the number of accounts to
look for, not …
You mentioned you bumped your default_statistics_target.
What did you increase it to?
My …
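For context, the statistics target can also be raised for just the skewed column, followed by a re-ANALYZE; the value 1000 is illustrative:

    alter table foo alter column account_id set statistics 1000;
    analyze foo;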
>>>> Daniele Varrazzo <[EMAIL PROTECTED]> wrote:
>
>> select count(*) from foo
>> where foo.account_id in (
>> select id from accounts where system = 'abc');
>
>> Total runtime: 13412.226 ms
>
> Out of curiosity, …
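A standard experiment with this class of query (not necessarily what the reply went on to suggest) is to rewrite the in() as a join or exists and compare the plans:

    -- equivalent formulations the planner may estimate differently
    select count(*)
    from foo join accounts a on a.id = foo.account_id
    where a.system = 'abc';

    select count(*) from foo
    where exists (select 1 from accounts a
                  where a.id = foo.account_id and a.system = 'abc');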
Tom Lane wrote:
Daniele Varrazzo <[EMAIL PROTECTED]> writes:
There is an index on the field "foo.account_id" but it is not used. The
resulting query plan is:
Aggregate  (cost=300940.70..300940.71 rows=1 width=0) (actual time=13412.088..13412.089 rows=1 loops=1)
…, who tested it on Mac OS, also noticed that PG
behaved correctly with the freshly imported data: as soon as he VACUUMed the
database he started experiencing the described issue.
Thank you very much.
--
Daniele Varrazzo - Develer S.r.l.
http://www.develer.com