On Fri, 2 Oct 2009, Scott Marlowe wrote:
On Fri, Oct 2, 2009 at 12:47 AM, alpesh gajbe wrote:
We have a ProLiant DL585 G5 with 16 cores and 32 GB RAM. In terms of
processors, we found that buying AMD makes much more sense, because for
the same price we could put more processors in the machine.
On Thu, Oct 22, 2009 at 6:22 AM, Kenneth Marshall wrote:
> On Wed, Oct 21, 2009 at 03:51:25PM -0700, Doug Cole wrote:
>> I have a reporting query that is taking nearly all of its time in aggregate
>> functions and I'm trying to figure out how to optimize it. The query takes
>> approximately 170m
Andreas Hartmann wrote on 29.10.2009 21:52:
Hi everyone,
I want to model the following scenario for an online marketing application:
Users can create mailings. The list of recipients can be uploaded as
spreadsheets with arbitrary columns (each row is a recipient). I expect
the following maxim
Hi everyone,
I want to model the following scenario for an online marketing application:
Users can create mailings. The list of recipients can be uploaded as
spreadsheets with arbitrary columns (each row is a recipient). I expect
the following maximum quantities the DB will contain:
* up to
On Thu, Oct 29, 2009 at 8:44 AM, Peter Meszaros wrote:
> Hi All,
>
> I use postgresql 8.3.7 as a huge queue. There is a very simple table
> with six columns and two indices, and about 6 million records are
> written into it every day, continuously committed every 10 seconds from
> 8 clients. The t
On 10/23/09 8:38 AM, "Michal J. Kubski" wrote:
> Hi,
>
> Is there any way to get the query plan of the query run in the stored
> procedure?
>
> I am running the following one and it takes 10 minutes in the procedure
> when it is pretty fast standalone.
>
> Any idea
Peter Meszaros wrote:
Hi All,
I use postgresql 8.3.7 as a huge queue. There is a very simple table
with six columns and two indices, and about 6 million records are
written into it every day, continuously committed every 10 seconds from
8 clients. The table stores approximately 120 million recor
On Thu, 2009-10-29 at 17:00 +0100, Ludwik Dylag wrote:
> 2009/10/29 Peter Meszaros
> Hi All,
>
> I use postgresql 8.3.7 as a huge queue. There is a very simple
> table
> with six columns and two indices, and about 6 million records
> are
> w
Hi Peter,
Sounds like you're experiencing index bloat, and vacuums do nothing to
help that. You can do one of two things to remedy this:
1) The fastest and simplest (but most disruptive) way is to use REINDEX.
But this will exclusively lock the table while rebuilding the indexes:
REINDEX TABLE p
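A minimal sketch of this approach (the table name here is a placeholder,
not the one from the original post):

```sql
-- Rebuilds every index on the table from scratch, discarding bloat.
-- Holds an exclusive lock on the indexes for the duration, so writes
-- against the table will block until it finishes.
REINDEX TABLE my_queue_table;
```

On 8.3, a less disruptive alternative is to build a replacement index
with CREATE INDEX CONCURRENTLY and drop the bloated one afterwards, at
the cost of a slower build.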
2009/10/29 Peter Meszaros
> Hi All,
>
> I use postgresql 8.3.7 as a huge queue. There is a very simple table
> with six columns and two indices, and about 6 million records are
> written into it every day, continuously committed every 10 seconds from
> 8 clients. The table stores approximately 12
On Thu, 29 Oct 2009, Josh Rovero wrote:
Do you ever "vacuum full" to reclaim empty record space?
Unless you expect the size of the database to permanently decrease by a
significant amount, that is a waste of time, and may cause bloat in
indexes. In this case, since the space will be used agai
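The distinction can be sketched as follows (table name is a placeholder):

```sql
-- Plain VACUUM: marks dead tuples as reusable free space without an
-- exclusive lock; safe to run routinely on a busy queue table.
VACUUM ANALYZE my_queue_table;

-- VACUUM FULL (pre-9.0 behaviour): compacts the table in place under
-- an exclusive lock, moving rows between pages, which tends to bloat
-- the indexes; only worth it if the table will stay permanently smaller.
VACUUM FULL my_queue_table;
```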
On Thu, 2009-10-29 at 15:44 +0100, Peter Meszaros wrote:
> Hi All,
>
> I use postgresql 8.3.7 as a huge queue. There is a very simple table
> with six columns and two indices, and about 6 million records are
> written into it every day, continuously committed every 10 seconds from
> 8 clients. The
I had posted this on another thread..but did not get a response..here
it is again
explain analyze select thedate,sent.watch as wat, nod.point as fwl,
act.acttype, intf.pointofcontact, func.getNum(snum) as sss,
func.getNum(dnum) as ddd, dddport,
aaa.aaacol,szone.ssszn as ssszone, dzone.dddzn as
dd
On Thu, Oct 29, 2009 at 10:10 AM, Anj Adu wrote:
> Join did not help. A sequential scan is still being done. The
> hardcoded value in the IN clause performs the best. The time
> difference is more than an order of magnitude.
If you want help debugging a performance problem, you need to post
your
I would recommend increasing max_fsm_pages and shared_buffers.
These changes did speed up vacuum full on my database.
With shared_buffers, remember to increase the maximum shared memory
(shmmax) in your OS.
Ludwik
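A sketch of the postgresql.conf changes being suggested (the values
below are illustrative only, not taken from the post; they must be sized
to the actual workload):

```
# postgresql.conf -- example values only
max_fsm_pages = 1000000     # 8.3-era free-space map; must be large enough
                            # to track dead pages between vacuums
shared_buffers = 2GB        # raising this usually requires raising the
                            # OS shared-memory limit (e.g. kernel.shmmax)
```

On Linux, the shared-memory limit can be raised with sysctl, e.g.
`sysctl -w kernel.shmmax=...`, before restarting the server.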
2009/10/29 Peter Meszaros
> Hi All,
>
> I use postgresql 8.3.7 as a huge queue. There is a very simple table
Try replacing the 'current_timestamp - interval 8 days' portion with explicit
values (e.g. partitioned_column < '2009-10-21'::date ) and see if that works.
I think the query planner can only use explicit values to determine if it
should go straight to partitioned tables.
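The suggestion is roughly of this shape (table and column names are
placeholders; note that the interval syntax in the original query also
needs quotes, i.e. interval '8 days'):

```sql
-- Constraint exclusion cannot be proven at plan time from a
-- non-constant expression, so every child partition is scanned:
SELECT * FROM partitioned_table
WHERE partitioned_column > current_timestamp - interval '8 days';

-- With an explicit literal, the planner can compare against each
-- partition's CHECK constraint and skip the non-matching children:
SELECT * FROM partitioned_table
WHERE partitioned_column > '2009-10-21'::date;
```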
Bob
--- On Thu, 10/29
Jesper Krogh writes:
> I'm currently trying to figure out why the tsearch performance seems to
> vary a lot between different queryplans. I have created a sample dataset
> that sort of resembles the data I have to work on.
> The script that builds the dataset is at:
> http://krogh.cc/~jesper/buil
Hi All,
I use postgresql 8.3.7 as a huge queue. There is a very simple table
with six columns and two indices, and about 6 million records are
written into it every day, continuously committed every 10 seconds from
8 clients. The table stores approximately 120 million records, because a
cron job d
On Mon, 26 Oct 2009 14:09:49 -0400, Tom Lane wrote:
> "Michal J. Kubski" writes:
>> [ function that creates a bunch of temporary tables and immediately
>> joins them ]
>
> It'd probably be a good idea to insert an ANALYZE on the temp tables
> after you fill them. The way you've got this set up
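The advice can be sketched as follows (the table and column names here
are hypothetical):

```sql
CREATE TEMP TABLE tmp_domains AS
    SELECT domain_id FROM some_source WHERE active;

-- Without this, the planner has no statistics for the freshly filled
-- temp table and may choose a very bad join plan:
ANALYZE tmp_domains;

SELECT s.*
FROM tmp_domains t
JOIN last_snapshot s USING (domain_id);
```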
Hello Greg,
Greg Smith wrote:
> On Wed, 28 Oct 2009, Denis BUCHER wrote:
>
>> For now, we have only planned a VACUUM ANALYSE each night.
>
> You really want to be on a later release than 8.1 for an app that is
> heavily deleting things every day. The answer to most VACUUM problems
> is "VACUUM m
On Mon, 26 Oct 2009 11:52:22 -0400, Merlin Moncure
wrote:
>> Do you not have an index on last_snapshot.domain_id?
>>
> that, and also try rewriting a query as JOIN. There might be
> difference in performance/plan.
>
Thanks, it runs better (average 240s, not 700s) with t
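The rewrite being suggested is roughly of this shape (all names are
placeholders, not from the thread):

```sql
-- IN-subquery form:
SELECT *
FROM big_table
WHERE x IN (SELECT yy FROM z WHERE colname LIKE 'aaa%');

-- Equivalent JOIN form, which may get a different (sometimes better)
-- plan; DISTINCT keeps the semantics identical when z.yy has duplicates:
SELECT b.*
FROM big_table b
JOIN (SELECT DISTINCT yy FROM z WHERE colname LIKE 'aaa%') q
  ON b.x = q.yy;
```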
Join did not help. A sequential scan is still being done. The
hardcoded value in the IN clause performs the best. The time
difference is more than an order of magnitude.
2009/10/29 Angayarkanni :
>
> 2009/10/29 Grzegorz Jaśkiewicz
>>
>>
>> On Wed, Oct 28, 2009 at 6:13 PM, Anj Adu wrote:
>>>
>>>
2009/10/29 Grzegorz Jaśkiewicz
>
>
> On Wed, Oct 28, 2009 at 6:13 PM, Anj Adu wrote:
>
>> Postgres consistently does a sequential scan on the child partitions
>> for this query
>>
>> select * from partitioned_table
>> where partitioned_column > current_timestamp - interval 8 days
>> where x in
On Wed, Oct 28, 2009 at 6:13 PM, Anj Adu wrote:
> Postgres consistently does a sequential scan on the child partitions
> for this query
>
> select * from partitioned_table
> where partitioned_column > current_timestamp - interval 8 days
> where x in (select yy from z where colname like 'aaa%')
>
25 matches