Bryce Nesbitt writes:
> The query plans are now attached (sorry I did not start there: many
> lists reject attachments). Or you can click on "text" at the query
> planner analysis site http://explain.depesz.com/s/qYq
At least some of the problem is the terrible quality of the rowcount
estimates ...
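A minimal, hedged sketch of the usual first response to bad rowcount estimates: raise the statistics target on the suspect column and re-ANALYZE. The table name article_words comes up later in this thread; the column word_id and the value 500 are assumptions for illustration only.

  -- Give the planner a more detailed histogram for the skewed column,
  -- then refresh statistics and re-check the estimate.
  ALTER TABLE article_words ALTER COLUMN word_id SET STATISTICS 500;
  ANALYZE article_words;
  EXPLAIN ANALYZE SELECT count(*) FROM article_words WHERE word_id = 12345;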
On Wed, Feb 10, 2010 at 9:43 AM, Justin Graf wrote:
> Keep in mind that each connection/client that connects to the server
> creates a new process on the server. Each one of the settings you list
> above is the max amount of memory each one of those sessions is allowed
> to consume.
It's even worse for ...
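To make the multiplication concrete, here is a small sketch; the figures are placeholders for illustration, not recommendations.

  -- work_mem is a per-sort/per-hash limit, not a server-wide one: a single
  -- query can use it several times, and every backend can do so at once.
  SET work_mem = '64MB';     -- per sort/hash operation, per backend
  SHOW max_connections;      -- say 100 backends
  -- Worst case is roughly max_connections * sorts-per-query * work_mem,
  -- e.g. 100 * 2 * 64MB = 12.8GB, far beyond what the single setting suggests.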
On Monday 08 February 2010 05:53:23 Robert Haas wrote:
> On Sun, Feb 7, 2010 at 10:09 PM, Alvaro Herrera wrote:
> > Andres Freund wrote:
> >> I personally think the fsync on the directory should be added to the
> >> stable branches - other opinions?
> >> If wanted I can prepare patches for
da...@lang.hm wrote:
most of my new hardware has no problems with the old kernels as well,
but once in a while I run into something that doesn't work.
Quick survey just of what's within 20 feet of me:
-Primary desktop: 2 years old, requires 2.6.23 or later for SATA to work
-Server: 3 years old ...
If you guys succeed in making this class of query perform, you'll have
beaten out the professional consulting firm we hired, which was all but
useless! The query is usually slow, but particular combinations of
words seem to make it obscenely slow.
The query plans are now attached (sorry I did not start there: many
lists reject attachments). Or you can click on "text" at the query
planner analysis site http://explain.depesz.com/s/qYq
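For anyone who wants to reproduce a plain-text plan like the one posted at explain.depesz.com, a minimal psql sketch; the table article_words appears later in this thread, but the columns and the WHERE clause here are placeholders, not the actual query.

  -- Write the plan, with real execution timings, to a text file that can be
  -- attached to a mail or pasted into explain.depesz.com.
  \o plan.txt
  EXPLAIN ANALYZE
  SELECT a.article_id
  FROM article_words a
  WHERE a.word_id IN (1, 2, 3);
  \o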
On Wed, 10 Feb 2010, Greg Smith wrote:
Scott Marlowe wrote:
> I'd love to see someone do a comparison of early to mid 2.6 kernels (2.6.18
> like RHEL5) to very up to date 2.6 kernels. On fast hardware.
I'd be happy just to find fast hardware that works on every kernel from the
RHEL5 2.6.18 up to the latest one without issues.
Scott Marlowe wrote:
> I'd love to see someone do a comparison of early to mid 2.6 kernels (2.6.18
> like RHEL5) to very up to date 2.6 kernels. On fast hardware.
I'd be happy just to find fast hardware that works on every kernel from
the RHEL5 2.6.18 up to the latest one without issues.
rama wrote:
in that way, when I need to do a query for a long range (i.e. 1 year) I just
take the rows that are contained in contab_y;
if I need a query for a couple of days, I can go to ymd, and if I need some
data for another timeframe, I can do some cool intersection between the ...
Actually, in a way it does "No space left on device" or similar ;-)
Cheers
Dave
P.S. for those not familiar with Oracle, ORA-01555 translates to "your query
/ transaction is kinda old and I've forgotten the data, so I'm just going to
throw an error at you now". If you're reading, your SELECT ...
On Wed, Feb 10, 2010 at 4:48 PM, Dave Crooke wrote:
> On Wed, Feb 10, 2010 at 5:30 PM, Jon Lewison wrote:
>> Just a nit, but Oracle implements MVCC. 90% of the databases out there
>> do.
> Sorry, I spoke imprecisely. What I meant was the difference in how the rows
> are stored internally in Oracle, where the main tablespace contains only
> the newest version ...
On Wed, Feb 10, 2010 at 5:30 PM, Jon Lewison wrote:
> Just a nit, but Oracle implements MVCC. 90% of the databases out there do.
Sorry, I spoke imprecisely. What I meant was the difference in how the rows
are stored internally in Oracle, where the main tablespace contains only the
newest version ...
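A hedged aside on how the PostgreSQL side of that difference shows up in practice: superseded row versions stay in the table heap until VACUUM reclaims them, so dead-tuple counts are worth watching. The table name article_words is borrowed from elsewhere in this thread purely as an example.

  -- Which tables carry the most dead row versions, and when autovacuum
  -- last cleaned them.
  SELECT relname, n_live_tup, n_dead_tup, last_autovacuum
  FROM pg_stat_user_tables
  ORDER BY n_dead_tup DESC
  LIMIT 10;

  VACUUM VERBOSE article_words;  -- reclaim dead versions in one table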
On Wed, Feb 10, 2010 at 4:16 PM, Dave Crooke wrote:
> Hi Rama
>
> I'm actually looking at going in the other direction
>
> I have an app using PG where we have a single table where we just added a
> lot of data, and I'm ending up with many millions of rows, and I'm finding
> that the single table schema simply doesn't scale.
That sure looks like the source of the problem to me too. I've seen similar
behavior in queries not very different from that. It's hard to guess what the
problem is exactly without having more knowledge of the data distribution in
article_words though.
Given the results of analyze, I'd try to ...
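For anyone who wants to look at that distribution themselves, a small sketch against the planner's statistics view; the column name word_id is an assumption for illustration.

  -- What the planner believes about the skew in article_words; n_distinct
  -- and the most-common-value frequencies drive the rowcount estimates.
  SELECT attname, n_distinct, most_common_vals, most_common_freqs
  FROM pg_stats
  WHERE tablename = 'article_words'
    AND attname = 'word_id';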
Hi Rama
I'm actually looking at going in the other direction
I have an app using PG where we have a single table where we just added a
lot of data, and I'm ending up with many millions of rows, and I'm finding
that the single table schema simply doesn't scale.
In PG, the table partitioning ...
On 2/10/2010 5:13 PM, rama wrote:
> in that way, when I need to do a query for a long range (i.e. 1 year) I just
> take the rows that are contained in contab_y;
> if I need a query for a couple of days, I can go to ymd, and if I need some
> data for another timeframe, I can do some cool intersection between the ...
Hi all,
I am trying to move my app from M$sql to PGsql, but I need a bit of help :)
On M$sql, I had certain tables that were made as follows (sorry, pseudo code):
contab_y
    date
    amt
    uid
contab_yd
    date
    amt
    uid
contab_ymd
    date
    amt
    uid
and so on..
this was used to "solidify" ...
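In PostgreSQL (8.x era) the usual way to get the same effect is inheritance-based partitioning with CHECK constraints, so one logical table replaces the per-granularity copies. A minimal sketch, with table and column names loosely modeled on the contab_* layout above; the exact columns and date ranges are assumptions.

  -- Parent table plus one child per month; with constraint_exclusion the
  -- planner skips children whose CHECK range cannot match the WHERE clause.
  CREATE TABLE contab (
      entry_date  date    NOT NULL,
      amt         numeric NOT NULL,
      uid         integer NOT NULL
  );

  CREATE TABLE contab_2010_01 (
      CHECK (entry_date >= DATE '2010-01-01' AND entry_date < DATE '2010-02-01')
  ) INHERITS (contab);

  CREATE TABLE contab_2010_02 (
      CHECK (entry_date >= DATE '2010-02-01' AND entry_date < DATE '2010-03-01')
  ) INHERITS (contab);

  SET constraint_exclusion = on;
  SELECT sum(amt) FROM contab
  WHERE entry_date >= DATE '2010-01-01' AND entry_date < DATE '2011-01-01';

Rows still have to be routed into the right child, typically with a BEFORE INSERT trigger on the parent.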
On Wed, Feb 10, 2010 at 3:29 AM, Bryce Nesbitt wrote:
> Or, if you want to actually read that query plan, try:
> http://explain.depesz.com/s/qYq
Much better, though I prefer a text attachment... anyhow, I think the
root of the problem may be that both of the subquery scans under the
append node ...
On Feb 9, 2010, at 10:37 PM, Greg Smith wrote:
> Jeff wrote:
>> I'd done some testing a while ago on the schedulers and at the time
>> deadline or noop smashed cfq. Now, it is 100% possible since then
>> that they've made vast improvements to cfq and/or the VM to get better
>> or similar performance.
On Wednesday 10 February 2010 15:56:40 Tom Lane wrote:
> Franck Routier writes:
> > I am wondering if deferring foreign key constraints (instead of
> > disabling them) would increase performance, compared to non-deferred
> > constraints
>
> No, it wouldn't make any noticeable difference AFAICS.
On Feb 10, 2010, at 1:37 AM, Greg Smith wrote:
Jeff wrote:
I'd done some testing a while ago on the schedulers and at the time
deadline or noop smashed cfq. Now, it is 100% possible since then
that they've made vast improvements to cfq and/or the VM to get
better or similar performance.
On 2/10/2010 12:10 AM, Anton Maksimenkov wrote:
> Can anybody briefly explain to me how each postgres process allocates
> memory for its needs?
> I mean, what is the biggest size of malloc() it may want? How many
> such chunks? What is the average size of allocations?
>
> I think that at first it allocates ...
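A short, hedged sketch of the settings that bound a backend's memory use, since they answer most of the practical side of the question; the values shown are placeholders, not recommendations.

  -- shared_buffers is allocated once in shared memory; the rest apply per
  -- backend, and work_mem applies per sort/hash node within a query.
  SHOW shared_buffers;                  -- shared across all backends
  SET work_mem = '32MB';                -- per sort/hash operation, per backend
  SET maintenance_work_mem = '256MB';   -- VACUUM, CREATE INDEX, ADD FOREIGN KEY
  SET temp_buffers = '16MB';            -- per-backend cache for temp tables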
* Kevin Grittner (kevin.gritt...@wicourts.gov) wrote:
> > Could some of you please share some info on such scenarios- where
> > you are supporting/designing/developing databases that run into at
> > least a few hundred GBs of data (I know, that is small by today's
> > standards)?
Just saw this, so ...
Jayadevan M wrote:
> Could some of you please share some info on such scenarios- where
> you are supporting/designing/developing databases that run into at
> least a few hundred GBs of data (I know, that is small by today's
> standards)?
I'm a database administrator for the Wisconsin Courts.
On 10/02/2010 6:49, Scott Marlowe wrote:
Quick note, please stick to text formatted email for the mailing list,
it's the preferred format.
On Tue, Feb 9, 2010 at 9:09 PM, Jayadevan M wrote:
Hello all,
Apologies for the long mail.
I work for a company that provides solutions mostly ...
Franck Routier writes:
> I am wondering if deferring foreign key constraints (instead of
> disabling them) would increase performance, compared to non-deferred
> constraints
No, it wouldn't make any noticeable difference AFAICS. It would
postpone the work from end-of-statement to end-of-transaction ...
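For readers following along, a minimal sketch of deferring (rather than disabling) a foreign key; the table and constraint names are assumptions for illustration.

  -- The constraint must be declared DEFERRABLE before it can be deferred.
  ALTER TABLE child_table
      ADD CONSTRAINT child_parent_fk
      FOREIGN KEY (parent_id) REFERENCES parent_table (id)
      DEFERRABLE INITIALLY IMMEDIATE;

  BEGIN;
  SET CONSTRAINTS child_parent_fk DEFERRED;   -- checks move to COMMIT time
  DELETE FROM parent_table WHERE id = 42;
  DELETE FROM child_table  WHERE parent_id = 42;
  COMMIT;  -- the FK is verified here, once, for the whole transaction

As Tom notes, this only moves when the per-row checks fire; it does not remove the work.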
Hi,
I am trying to improve delete performance on a database with several
foreign keys between relations that have 100M or so rows.
Until now, I have manually disabled the triggers, done the delete, and
re-enabled the triggers.
This works, but I have to do that when I am sure no other user will
...
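A minimal sketch of the manual approach described above (disable the FK triggers, delete, re-enable); the table and column names are assumptions, DISABLE TRIGGER ALL needs superuser rights to touch the internal RI triggers, and nothing re-validates the skipped checks afterwards, so it is only safe while no other sessions write to the tables.

  BEGIN;
  ALTER TABLE parent_table DISABLE TRIGGER ALL;
  DELETE FROM parent_table WHERE created < DATE '2009-01-01';
  ALTER TABLE parent_table ENABLE TRIGGER ALL;
  COMMIT;

Often the bigger win is simply an index on each referencing column, since every parent-row delete otherwise scans the child table.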
2010/2/10 Bryce Nesbitt:
> Or, if you want to actually read that query plan, try:
> http://explain.depesz.com/s/qYq
>
Hello,
check your work_mem settings. The hash join is very slow in your case.
Pavel
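A hedged illustration of that suggestion: raise work_mem for the session and re-run the plan to see whether the hash join (or a sort) stops spilling to disk. The 256MB figure is an arbitrary trial value, and the query is a placeholder built on assumed table and column names.

  SET work_mem = '256MB';   -- session-only trial value
  EXPLAIN ANALYZE
  SELECT a.article_id
  FROM article_words a
  JOIN words w ON w.word_id = a.word_id
  WHERE w.word IN ('foo', 'bar');
  RESET work_mem;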
Or, if you want to actually read that query plan, try:
http://explain.depesz.com/s/qYq