Hi,
I intend to create 2 partitions of a table, called New and Old. An automatic
job will check whether a record is older than 90 days from the current time
(based on a field) and, if so, move the record from New to Old.
The problem is that I cannot control which partition a select statement
with range criteria ends up scanning.
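For context, the mover job described above is often written as a single atomic statement. This is only a sketch: the table names `events_new`/`events_old` and the timestamp column `ts` are invented here, and it assumes the two partitions are plain tables (rows are not routed automatically).

```sql
-- Move rows older than 90 days from the "New" to the "Old" partition
-- atomically: the DELETE's RETURNING clause feeds the INSERT, so no
-- row is ever visible in both tables or in neither.
WITH moved AS (
    DELETE FROM events_new
    WHERE ts < now() - interval '90 days'
    RETURNING *
)
INSERT INTO events_old
SELECT * FROM moved;
```

Run from a scheduled job, this keeps the 90-day boundary current without any window where a row exists twice.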
Luiz:
First thing: please do not top-quote. It's hard to read and I, personally,
consider it insulting (it's not the first time it's been done, and for
obvious reasons).
On Fri, Sep 15, 2017 at 4:24 PM, Luiz Hugo Ronqui wrote:
> Our usage allows us to insert all rows into the hot partition, since it's a
> rare event
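The "parent table as hot partition" idea under discussion (per the thread's subject) can be sketched with pre-10 inheritance partitioning. Table and column names here are hypothetical, not from the thread:

```sql
-- Inheritance layout: the parent itself holds the hot (current) rows,
-- while cold years live in child tables whose CHECK constraints let
-- constraint_exclusion skip them at plan time.
CREATE TABLE readings (
    id  bigint      NOT NULL,
    ts  timestamptz NOT NULL
);

CREATE TABLE readings_2015 (
    CHECK (ts >= '2015-01-01' AND ts < '2016-01-01')
) INHERITS (readings);

-- All inserts go straight to the parent; a yearly job later moves a
-- whole year out into a new child with INSERT ... SELECT / DELETE.
INSERT INTO readings (id, ts) VALUES (1, now());

-- Queries against the parent see parent plus children; with
-- constraint_exclusion = partition, out-of-range children are pruned.
SELECT * FROM readings WHERE ts >= '2017-01-01';
```

The appeal is that inserts never need routing logic; the cost is that "hot" queries always touch the parent heap.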
@postgresql.org
[mailto:pgsql-general-owner+m233282=lronqui=tce.sp.gov...@postgresql.org] On
behalf of Francisco Olarte
Sent: Friday, September 15, 2017 08:37
To: Luiz Hugo Ronqui
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Table partition - parent table use
Hi Luiz:
On Thu, Sep 14, 2017 at 11:06 PM, Luiz Hugo Ronqui wrote:
...
> We have a database with data being inserted for almost 10 years and no
> policy defined to get rid of old records, even though we mostly use only the
> current and last year's data. Some etl processes run on older data from
Hello everybody!
I have searched quite a bit, but haven't found a setup like ours, so I
decided to ask if I'm missing something:
We have a database with data being inserted for almost 10 years and no
policy defined to get rid of old records, even though we mostly use only the
current and last year's data.
On Sun, Jun 13, 2010 at 9:14 AM, AI Rumman wrote:
> For how many records I should go for a table partition instead of using just
> index?
> Any idea please.
I concur with Stephen. We tend to split our tables when they exceed
100 million rows *if* they are experiencing performance issues.
* AI Rumman (rumman...@gmail.com) wrote:
> For how many records I should go for a table partition instead of using just
> index?
> Any idea please.
General rule of thumb is that you don't need partitioning until you're
into the 100's of millions of records.
Stephen
On Fri, 2005-04-22 at 05:51, Dinesh Pandey wrote:
> How to create this table (with partition) in Postgres.
>
> --ORACLE
>
> CREATE TABLE A (col1 NUMBER NOT NULL,
>                 col2 DATE NOT NULL,
>                 col3
How to create this table (with partition) in Postgres.

--ORACLE
CREATE TABLE A (
    col1 NUMBER NOT NULL,
    col2 DATE NOT NULL,
    col3 VARCHAR2(500)
)
PARTITION BY RANGE (col2)
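At the time this was asked, Postgres had no declarative partitioning, so the usual translation was inheritance plus CHECK constraints. A rough sketch follows; the partition bounds are invented, since the Oracle PARTITION clauses were cut off in the message:

```sql
-- Parent table. Type mapping: Oracle NUMBER -> numeric, DATE -> date
-- (or timestamp), VARCHAR2(500) -> varchar(500).
CREATE TABLE a (
    col1 numeric      NOT NULL,
    col2 date         NOT NULL,
    col3 varchar(500)
);

-- One child per range of col2; the CHECK constraint lets the planner
-- (with constraint_exclusion enabled) skip irrelevant children.
CREATE TABLE a_2005 (
    CHECK (col2 >= DATE '2005-01-01' AND col2 < DATE '2006-01-01')
) INHERITS (a);
```

On PostgreSQL 10 and later this maps almost directly to the Oracle DDL: `CREATE TABLE a (...) PARTITION BY RANGE (col2);` followed by one `CREATE TABLE ... PARTITION OF a FOR VALUES FROM ... TO ...` per range.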
> > create index prdt_new_url_dx on prdt_new (url)
> > create index prdt_new_sku_dx on prdt_new (sku)
> > create index prdt_old_sku_dx on prdt_old (sku)
> > create index prdt_new_url_null_dx on prdt_new (url) where prdt_new.url
> > IS NULL
I added the indexes and redid the analyze - the query plan looks better
On Mon, 2005-03-28 at 16:02, Scott Marlowe wrote:
> On Mon, 2005-03-28 at 15:38, Yudie Pg wrote:
> > > Also, this is important, have you analyzed the table? I'm guessing no,
> > > since the estimates are 1,000 rows, but the hash join is getting a little
> > > bit more than that. :)
> > >
> > > Analyze your database and then run the query again.
On Mon, 2005-03-28 at 15:38, Yudie Pg wrote:
> > Also, this is important, have you analyzed the table? I'm guessing no,
> > since the estimates are 1,000 rows, but the hash join is getting a little
> > bit more than that. :)
> >
> > Analyze your database and then run the query again.
>
> I analyzed the table and it decreased the number of rows in the nested loop
> Also, this is important, have you analyzed the table? I'm guessing no,
> since the estimates are 1,000 rows, but the hash join is getting a little
> bit more than that. :)
>
> Analyze your database and then run the query again.
I analyzed the table and it decreased the number of rows in the nested loop
Looks like you need to create some indexes, probably on (groupnum) and
possibly on (groupnum,sku) on both tables.
Hope this helps,
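Concretely, the suggested indexes would look something like this (a sketch only; the table and column names come from the thread, the index names are made up):

```sql
-- Single-column indexes on the join key, plus composite indexes
-- covering the two-column join condition on both sides.
CREATE INDEX prdt_old_groupnum_dx     ON prdt_old (groupnum);
CREATE INDEX prdt_old_groupnum_sku_dx ON prdt_old (groupnum, sku);
CREATE INDEX prdt_new_groupnum_dx     ON prdt_new (groupnum);
CREATE INDEX prdt_new_groupnum_sku_dx ON prdt_new (groupnum, sku);
```

Whether the planner actually uses them depends on selectivity and on the statistics being current, hence the repeated advice to ANALYZE first.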
On Mon, Mar 28, 2005 at 01:50:06PM -0600, Yudie Gunawan wrote:
> > Hold on, let's diagnose the real problem before we look for solutions.
> > What does explain tell you? Have you analyzed the database?
On Mon, 2005-03-28 at 13:50, Yudie Gunawan wrote:
> > Hold on, let's diagnose the real problem before we look for solutions.
> > What does explain tell you? Have you analyzed the database?
>
>
> This is the QUERY PLAN
> Hash Left Join (cost=25.00..412868.31 rows=4979686 width=17)
> Hash Cond
> Hold on, let's diagnose the real problem before we look for solutions.
> What does explain tell you? Have you analyzed the database?
This is the QUERY PLAN
Hash Left Join  (cost=25.00..412868.31 rows=4979686 width=17)
  Hash Cond: (("outer".groupnum = "inner".groupnum) AND
              (("outer".sku)::text = ("inner".sku)::text))
On Mon, 2005-03-28 at 13:02, Yudie Gunawan wrote:
> I actually need to join 2 tables. Both of them are similar and have
> more than 4 million records.
>
> CREATE TABLE prdt_old (
>     groupnum int4 NOT NULL,
>     sku varchar(30) NOT NULL,
>     url varchar(150)
> );
>
> CREATE TABLE prdt_new (
>     groupnum int4 NOT NULL,
>     sku varchar(30) NOT NULL,
>     url varchar(150) NOT NULL
> );
I actually need to join 2 tables. Both of them are similar and have
more than 4 million records.

CREATE TABLE prdt_old (
    groupnum int4 NOT NULL,
    sku varchar(30) NOT NULL,
    url varchar(150)
);

CREATE TABLE prdt_new (
    groupnum int4 NOT NULL,
    sku varchar(30) NOT NULL,
    url varchar(150) NOT NULL
);
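The query itself never appears in these previews; judging from the plan's Hash Cond on `groupnum` and `sku`, it presumably looks something like this reconstruction (a guess, not the poster's actual statement):

```sql
-- Reconstructed from the Hash Left Join condition in the posted plan:
-- every new product row, with the old row's url where a match exists.
SELECT n.groupnum, n.sku, o.url
FROM prdt_new n
LEFT JOIN prdt_old o
  ON n.groupnum = o.groupnum
 AND n.sku = o.sku;
```

With ~4 million rows on each side, such a join returns millions of rows, which is why the client-side memory question below matters as much as the plan.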
On Mon, 2005-03-28 at 11:32 -0600, Yudie Gunawan wrote:
> I have a table with more than 4 million records and when I do a select
> query it gives me an "out of memory" error.
> Does Postgres have a feature like table partitioning to handle tables
> with a very large number of records?
> Just wondering what you guys do to deal with very large tables?
On Mon, Mar 28, 2005 at 11:32:04AM -0600, Yudie Gunawan wrote:
> I have a table with more than 4 million records and when I do a select
> query it gives me an "out of memory" error.
What's the query and how are you issuing it? Where are you seeing
the error? This could be a client problem: the client may be pulling the
entire result set into memory at once.
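If the client really is buffering the whole result, the standard workaround is to read it in chunks through a cursor rather than one giant SELECT. A sketch, with the table name taken from the thread and the cursor name invented:

```sql
-- Fetch a multi-million-row result in manageable batches instead of
-- materializing it all in the client at once.
BEGIN;
DECLARE big_cur CURSOR FOR SELECT * FROM prdt_new;
FETCH 10000 FROM big_cur;   -- repeat until no rows come back
CLOSE big_cur;
COMMIT;
```

Many client libraries expose the same idea directly (e.g. setting a fetch size) without writing DECLARE/FETCH by hand.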
On Mon, 2005-03-28 at 11:32, Yudie Gunawan wrote:
> I have a table with more than 4 million records and when I do a select
> query it gives me an "out of memory" error.
> Does Postgres have a feature like table partitioning to handle tables
> with a very large number of records?
> Just wondering what you guys do to deal with very large tables?
I have a table with more than 4 million records and when I do a select
query it gives me an "out of memory" error.
Does Postgres have a feature like table partitioning to handle tables
with a very large number of records?
Just wondering what you guys do to deal with very large tables?
Thanks!