Re: [PERFORM] Many-to-many performance problem

2016-06-16 Thread Rowan Seymour
When you create a Postgres RDS instance, it comes with a "default.postgres9.3" parameter group which contains substitutions based on the server size. The defaults for the memory-related settings are: effective_cache_size = {DBInstanceClassMemory/16384} maintenance_work_mem = GREATEST({DBInstanc
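
For a sense of what that substitution works out to: DBInstanceClassMemory is expressed in bytes, so dividing by 16384 yields a value in 8kB pages that comes to roughly half the instance's RAM. A rough illustration (the 16 GiB figure is just an example, not a particular instance class):

    -- effective_cache_size = {DBInstanceClassMemory/16384}
    -- e.g. a 16 GiB instance: 17179869184 / 16384 = 1048576 8kB pages, i.e. about 8 GiB
    SHOW effective_cache_size;   -- reports the value RDS actually applied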

Re: [PERFORM] Many-to-many performance problem

2016-06-10 Thread Alex Ignatov
On 10.06.2016 16:04, Rowan Seymour wrote: In our Django app we have messages (currently about 7 million in table msgs_message) and labels (about 300), and a join table to associate messages with labels (about 500,000 in msgs_message_labels). Not sure you'll need them, but here are the relevant

Re: [PERFORM] Many-to-many performance problem

2016-06-10 Thread Tom Lane
Yves Dorfsman writes: > On 2016-06-10 08:13, Tom Lane wrote: >> It looks like everything is fine as long as all the data the query needs >> is already in PG's shared buffers. As soon as it has to go to disk, >> you're hurting, because disk reads seem to be taking ~10ms on average. >
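
A quick way to see which of those two cases a given run hit is to ask EXPLAIN for buffer statistics: "shared hit" counts are pages served from PG's shared buffers, while "read" counts are fetches that had to go to the OS cache or disk. The query below is only a sketch; the join and filter columns are guesses based on Django's naming conventions, not the poster's exact schema:

    EXPLAIN (ANALYZE, BUFFERS)
    SELECT m.id
    FROM msgs_message m
    JOIN msgs_message_labels ml ON ml.message_id = m.id
    WHERE ml.label_id = 42          -- hypothetical label id
    ORDER BY m.id DESC
    LIMIT 50;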

Re: [PERFORM] Many-to-many performance problem

2016-06-10 Thread Yves Dorfsman
I thought this was a really interesting case, and would love to learn from it, please bear with me if my questions are naive. On 2016-06-10 08:13, Tom Lane wrote: > Rowan Seymour writes: >> Most of the time, this query performs like https://explain.depesz.com/s/ksOC >> (~15ms). It's no longer using

Re: [PERFORM] Many-to-many performance problem

2016-06-10 Thread Tom Lane
Rowan Seymour writes: > Most of the time, this query performs like https://explain.depesz.com/s/ksOC > (~15ms). It's no longer using the msgs_inbox index, but it's > plenty fast. However, sometimes it performs like > https://explain.depesz.com/s/81c (67000ms) > And if you run it again, it'll

[PERFORM] Many-to-many performance problem

2016-06-10 Thread Rowan Seymour
In our Django app we have messages (currently about 7 million in table msgs_message) and labels (about 300), and a join table to associate messages with labels (about 500,000 in msgs_message_labels). Not sure you'll need them, but here are the relevant table schemas: CREATE TABLE msgs_message (
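
The preview cuts off before the DDL, but the shape being described is an ordinary Django many-to-many through table. A minimal sketch, assuming Django's default column naming (message_id / label_id) and a msgs_label table; these are not the poster's exact definitions:

    CREATE TABLE msgs_message_labels (
        id         serial PRIMARY KEY,
        message_id integer NOT NULL REFERENCES msgs_message (id),
        label_id   integer NOT NULL REFERENCES msgs_label (id)
    );
    CREATE INDEX ON msgs_message_labels (message_id);
    CREATE INDEX ON msgs_message_labels (label_id);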

Re: [PERFORM] many to many performance

2008-11-27 Thread Chavdar Kopoev
tion, and is there any way to achieve closer to optimal performance using postgresql functionality and extensibility? Chavdar Kopoev -Original Message- From: Craig Ringer [mailto:[EMAIL PROTECTED] Sent: 2008-11-26, 19:40:47 To: Chavdar Kopoev [mailto:[EMAIL PROTECTED] Subject: Re: [PERFORM]

[PERFORM] many to many performance

2008-11-26 Thread Chavdar Kopoev
Hello, I have the following common situation: Category IDs: about 50 000 Document IDs: about 3 000 000 Many-to-many relationship. A document id has a relation with 10 up to 1000 category ids. One query, with an input set of document ids, resulting in a set of category ids having a relation with the input ids. (v
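
The query shape being described, as a minimal sketch (table and column names are invented here; the poster's actual schema is not shown in the preview):

    -- Given a set of document ids, return the distinct category ids they relate to
    SELECT DISTINCT dc.category_id
    FROM doc_category dc
    WHERE dc.doc_id = ANY (ARRAY[101, 202, 303]);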

Re: [PERFORM] many to many performance

2008-11-26 Thread Craig Ringer
Chavdar Kopoev wrote: > I want to use postgresql as a data storage. Tried several data structures, > testing btree, gin, gist indices over them, but the best achieved performance > for a 10 times smaller dataset (10k cat ids, 100k doc ids, 1m relations) is > more than 5 times slower. Can you post
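
One layout such a comparison typically includes is the categories-as-array variant, where the relation is collapsed into an integer array per document and indexed with GIN. Purely illustrative, since the poster's tested structures are not shown in the preview:

    CREATE TABLE doc_categories (
        doc_id       integer PRIMARY KEY,
        category_ids integer[] NOT NULL
    );
    CREATE INDEX doc_categories_gin ON doc_categories USING gin (category_ids);

    -- documents tagged with category 42
    SELECT doc_id FROM doc_categories WHERE category_ids @> ARRAY[42];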
