Hi,
Thanks again very much for these clear-cut answers.
I think I'll try to implement the partitioning despite all the
difficulties you raise about it in this thread,
because I can't find any viable solution right now for this situation.
It will constrain me to change the data model to work around
On 03/15/2011 05:10 AM, Samba GUEYE wrote:
1. Why "... partitioning is not a good idea ...", as you said, Robert, and,
Conor, "... I grant that it would be better to never need
to do that"?
There are a number of difficulties the planner has with partitioned
tables. Until very recently, MA
Alright, thanks to all of you for your answers, but I've got 3 more questions:
1. Why "... partitioning is not a good idea ...", as you said, Robert, and,
Conor, "... I grant that it would be better to never
need to do that"?
2. Is there another way or strategy to deal with very large t
On Mon, Mar 14, 2011 at 12:40 PM, Robert Haas wrote:
> Generally, table partitioning is not a good idea unless you are
> dealing with really large tables, and nearly all of your queries apply
> only to a single partition. Most likely you are better off not using
> table inheritance in the first p
On Mon, Mar 14, 2011 at 12:42 PM, Samba GUEYE wrote:
> Yeah, but is there a workaround to force the root table to propagate the
> foreign key to the partitioned table?
> Because right now all foreign keys to the partitioned table throw constraint
> violations, and it's a big problem for me.
No. Gener
Yeah, but is there a workaround to force the root table to propagate the
foreign key to the partitioned table?
Because right now all foreign keys to the partitioned table throw
constraint violations, and it's a big problem for me.
On 11/03/2011 19:31, Robert Haas wrote:
On Thu, Mar 10, 2011 at 3:
On Thu, Mar 10, 2011 at 3:25 AM, Samba GUEYE wrote:
> The measure insertion is successful but problems raise up when inserting in
> the simpleMeasure table because it can't find the foreign key inserted the
> measure table and do not look at the partitionned tables
Yes, that's how it works.
--
Hi Jim, thanks for your answer.
The database model is something like this:
Measure(Id, numbering, Date, crcCorrect, sensorId), SimpleMeasure
(Id, doubleValue) and GenericMeasure (Id, BlobValue, numberOfElements),
and in the UML model SimpleMeasure and GenericMeasure inherit from the
Measure cla
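For readers following along, here is a minimal sketch of how such a model is often mapped to SQL with pre-9.1 inheritance-based partitioning. The column types, the partitioning key, and the partition bounds below are assumptions for illustration, not Samba's actual DDL:

    -- Parent table; its primary key only enforces uniqueness for rows stored here.
    CREATE TABLE measure (
        id         bigserial PRIMARY KEY,
        numbering  integer,
        date       timestamp,
        crccorrect boolean,
        sensorid   integer
    );

    -- One child per month (partitioning key assumed to be the date column).
    CREATE TABLE measure_2011_03 (
        CHECK (date >= '2011-03-01' AND date < '2011-04-01')
    ) INHERITS (measure);

    -- Subtypes from the UML model, each referencing the parent table.
    CREATE TABLE simplemeasure (
        id          bigint PRIMARY KEY REFERENCES measure (id),
        doublevalue double precision
    );

    CREATE TABLE genericmeasure (
        id               bigint PRIMARY KEY REFERENCES measure (id),
        blobvalue        bytea,
        numberofelements integer
    );

The REFERENCES measure (id) constraints are exactly the ones that run into the caveat discussed further down in the thread: the foreign key check only looks at the parent table, never at its children.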
On Mar 8, 2011, at 9:45 AM, Samba GUEYE wrote:
> I have a problem with table partitioning because I have a foreign key applied
> on the partitioned table and it throws a constraint violation error during
> inserts.
> I saw in the manual
> (http://www.postgresql.org/docs/8.4/interactive/ddl-inherit
Hello,
I have a problem with table partitioning because I have a foreign key
applied on the partitioned table and it throws a constraint violation
error during inserts.
I saw in the manual
(http://www.postgresql.org/docs/8.4/interactive/ddl-inherit.html caveats
section) that it's a limitation du
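To make the limitation concrete, here is roughly what it looks like with a schema along the lines sketched earlier in the thread (the table and column names are assumed for illustration):

    -- The row goes into a child partition, as the partitioning scheme intends.
    INSERT INTO measure_2011_03 (id, numbering, date, crccorrect, sensorid)
    VALUES (1, 1, '2011-03-10', true, 42);

    -- This insert fails with a foreign key violation: the constraint on
    -- simplemeasure checks only the parent "measure" table, and id = 1
    -- lives in measure_2011_03, which the check never looks at.
    INSERT INTO simplemeasure (id, doublevalue) VALUES (1, 3.14);

That is the documented caveat: foreign key constraints apply to single tables only, not to their inheritance children, on both the referencing and the referenced side.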
On 05/03/2011 09:37, Tobias Brox wrote:
Sorry for not responding directly to your question and for changing
the subject ... ;-)
On 4 March 2011 18:18, Landreville wrote:
That is partitioned into about 3000 tables by the switchport_id (FK to
a lookup table), each table has about 30 000 rows cur
On 5 March 2011 12:59, Mark Thornton wrote:
> If your partitions are loosely time based and you don't want to discard old
> data, then surely the number of partitions will grow without limit.
True, but is it relevant? With monthly table partitioning it takes
hundreds of years before having "thousa
Sorry for not responding directly to your question and for changing
the subject ... ;-)
On 4 March 2011 18:18, Landreville wrote:
> That is partitioned into about 3000 tables by the switchport_id (FK to
> a lookup table), each table has about 30 000 rows currently (a row is
> inserted every 5 min
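For anyone unfamiliar with that kind of layout, here is a minimal sketch of per-key partitioning with inheritance and CHECK constraints; the table and column names are invented for illustration and are not Landreville's actual schema:

    -- Parent plus one child per switchport; the CHECK constraint lets
    -- constraint_exclusion prune all but one child at plan time.
    CREATE TABLE port_stats (
        switchport_id integer NOT NULL,
        sampled_at    timestamp NOT NULL,
        octets_in     bigint,
        octets_out    bigint
    );

    CREATE TABLE port_stats_17 (
        CHECK (switchport_id = 17)
    ) INHERITS (port_stats);

    SET constraint_exclusion = partition;  -- the default since 8.4
    SELECT * FROM port_stats
     WHERE switchport_id = 17
       AND sampled_at > now() - interval '1 day';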
Tobias Brox wrote:
I did test "select id from table order by id desc limit 1" on my parent table
yesterday; it still did a seq scan. Even adding a
WHERE restriction to make sure only one partition was queried, I still
got the seq scan.
Right; you actually have to direct the query towa
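Concretely, that means something along these lines, with a made-up child table name:

    -- Pre-9.1, the parent-level query falls back to a seq scan, but the
    -- child's own index is used if you name the child table directly:
    SELECT id FROM my_table_2011_02 ORDER BY id DESC LIMIT 1;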
[Greg Smith]
> Here's the comment from that describing the main technique used to fix it:
>
> "This module tries to replace MIN/MAX aggregate functions by subqueries of
> the form
>
> (SELECT col FROM tab WHERE ... ORDER BY col ASC/DESC LIMIT 1)
Huh ... that sounds a bit like pg 8.0 to me ;-) I r
Tobias Brox wrote:
I implemented table partitioning, and it caused havoc with a "select
max(id)" on the parent table - the query plan changed from a
lightning-fast backward index scan to a deadly seq scan.
This problem was fixed in the upcoming 9.1:
http://archives.postgresql.org/pgs
This is a known limitation of partitioning. One solution is to use a
recursive stored proc, which can use indexes. Such a solution is
discussed here:
http://archives.postgresql.org/pgsql-performance/2009-09/msg00036.php
Regards,
Ken
http://archives.postgresql.org/pgsql-performance/2009-09/msg00
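The linked post gives one version of such a procedure. Purely as an illustration of the idea (not the code from that post), here is a single-level sketch that walks the inheritance children via pg_inherits so each child can use its own index; it assumes every table involved has an indexed bigint id column:

    CREATE OR REPLACE FUNCTION max_id_over_partitions(parent regclass)
    RETURNS bigint AS $$
    DECLARE
        child regclass;
        m     bigint;
        best  bigint;
    BEGIN
        -- The parent itself may hold rows, so check it first (ONLY skips children).
        EXECUTE 'SELECT id FROM ONLY ' || parent::text
                || ' ORDER BY id DESC LIMIT 1' INTO best;
        FOR child IN
            SELECT inhrelid::regclass FROM pg_inherits WHERE inhparent = parent
        LOOP
            EXECUTE 'SELECT id FROM ' || child::text
                    || ' ORDER BY id DESC LIMIT 1' INTO m;
            IF m IS NOT NULL AND (best IS NULL OR m > best) THEN
                best := m;
            END IF;
        END LOOP;
        RETURN best;
    END;
    $$ LANGUAGE plpgsql;

    -- Usage: SELECT max_id_over_partitions('my_partitioned_table');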
I implemented table partitioning, and it caused havoc with a "select
max(id)" on the parent table - the query plan changed from a
lightning-fast backward index scan to a deadly seq scan. Both
partitions are set up with a primary key index and draw new IDs from
the same sequence ... "select m
Tom --
Thanks for the pointers and advice. We've started by doubling
max_locks and halving shared_buffers; we'll see how it goes.
Brian
On Oct 10, 2009, at 7:56 PM, Tom Lane wrote:
Brian Karlak writes:
"out of shared memory HINT: You might need to increase
max_locks_per_transa
Brian Karlak writes:
> "out of shared memory HINT: You might need to increase
> max_locks_per_transaction"
You want to do what it says ...
> 1) We've already tuned postgres to use ~2GB of shared memory -- which
> is SHMMAX for our kernel. If I try to increase
> max_locks_per_transac
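For concreteness, the kind of adjustment being discussed is a postgresql.conf change along these lines (the numbers are illustrative only, not a recommendation for this particular system; both settings require a server restart):

    # The lock table holds roughly
    #   max_locks_per_transaction * (max_connections + max_prepared_transactions)
    # entries, and a query touching a partitioned table takes a lock per partition.
    max_locks_per_transaction = 128   # default is 64
    shared_buffers = 1GB              # reduced so total shared memory stays under the kernel's SHMMAX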
Hello All --
I have implemented table partitioning in order to increase performance
in my database-backed queuing system. My queue is partitioned by
job_id into separate tables that all inherit from a base "queue" table.
Things were working swimmingly until my system started managing
tho
Based on a suggestion on the postgis list, I partitioned my 80 million (for
now) record table into
subtables of about 230k records (the amount of data collected in five
minutes). At the moment
I have 350 subtables.
Everything seems to be great…COPY time is ok, building a
geometric in
On 4/26/05, Mohan, Ross <[EMAIL PROTECTED]> wrote:
> Maybe he needs to spend $7K on performance improvements?
>
> ;-)
>
AAARRRGGG!
I will forever hate the number 7,000 from this day forth!
Seriously, though, I've never seen a thread on any list wander on so
aiml
[PERFORM] Table Partitioning: Will it be supported in Future?
Richard,
> I believe these are being worked on at the moment. You might want to
> search the archives of the hackers mailing list to see if the plans
> will suit your needs.
Actually, this is being discussed through the Bizgre
Richard,
> I believe these are being worked on at the moment. You might want to
> search the archives of the hackers mailing list to see if the plans will
> suit your needs.
Actually, this is being discussed through the Bizgres project:
www.bizgres.org.
However, I agree that a 1GB table is not
[EMAIL PROTECTED] wrote:
Hmm,
I have asked some people on the list, and someone posted this link:
http://archives.postgresql.org/pgsql-performance/2004-12/msg00101.php
It is quite useful to read, but I am not sure that this trick is very
helpful.
I want to split my 1 GB table into some
Hmm,
I have asked some people on the list, and someone posted this link:
http://archives.postgresql.org/pgsql-performance/2004-12/msg00101.php
It is quite useful to read, but I am not sure that this trick is very
helpful.
I want to split my 1 GB table into some little partitions, but ho
[EMAIL PROTECTED] wrote:
Hi all,
I am a guy from Germany and a strong Postgres believer!
It is the best open-source database I have ever tasted, and I try to use
it in any database environment.
It is exciting to see that version 8.0 has tablespaces like ORACLE and
DB/2,
but I need partit
Hi all,
I am a guy from Germany and a strong Postgres believer!
It is the best open-source database I have ever tasted, and I try to use
it in any database environment.
It is exciting to see that version 8.0 has tablespaces like ORACLE and DB/2,
but I need partitioning on a few very large