On Wed, May 6, 2015 at 1:56 AM, Mitu Verma wrote:
> Thank you so much all of you.
>
> Table audittraillogentry has a PRIMARY KEY and a FOREIGN KEY defined; below
> are the details of the existing table audittraillogentry.
>
> As you can see, it is referenced by 2 tables, "cdrdetails" and
> "cdrlogentry":
>     "cdrlogentry" CONSTRAINT "cdrlogentry_audittableindex_fkey" FOREIGN
>         KEY (audittableindex) REFERENCES audittraillogentry(tableindex)
> Has OIDs: no
> Tablespace: "mmdata"
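
Given the constraint above, the likely fix (per Tom's suggestion downthread)
is to index the referencing columns. A minimal sketch; the index name here is
made up, and "cdrdetails" would need a similar index on whatever column of its
own references audittraillogentry:

    -- lets the foreign-key check use an index scan instead of a seqscan
    CREATE INDEX CONCURRENTLY cdrlogentry_audittableindex_idx
        ON cdrlogentry (audittableindex);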
-----Original Message-----
From: Tom Lane [mailto:t...@sss.pgh.pa.us]
Sent: May 03, 2015 9:43
> Now the issue is that when this script for the deletion of data is
> launched, it is taking more than 7 days and doing nothing, i.e. not a
> single row has been deleted.
Deleting a large number of rows can take a long time. Often it's
quicker to delete smaller chunks. The LIMIT clause is not supported by
DELETE, so you need a subquery to pick out each chunk of rows to remove.
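
A minimal sketch of the chunked approach. The cutoff column name "logtime" is
an assumption (the original post does not show the column list); tableindex is
the primary key shown elsewhere in the thread:

    -- delete one batch; rerun until it reports DELETE 0
    DELETE FROM audittraillogentry
    WHERE tableindex IN (
        SELECT tableindex
        FROM audittraillogentry
        WHERE logtime < now() - interval '3 months'  -- "logtime" is assumed
        LIMIT 10000
    );

Each run is its own transaction, so progress is kept batch by batch and an
interruption only loses the current batch.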
This delete runs in a single transaction. That means the entire transaction
has to complete before you will see anything deleted. Interrupting the
transaction simply rolls it back, so nothing is deleted.
Tom already pointed out the potential foreign key slowdown; another slowdown
may simply be the sheer size of the single transaction the delete runs in.
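
To see whether the long-running DELETE is actually doing work or is stuck
waiting on a lock, you can watch it from another session. A sketch, assuming
a 9.2-or-later pg_stat_activity where these columns exist:

    SELECT pid, state, waiting, query
    FROM pg_stat_activity
    WHERE query ILIKE 'delete from audittraillogentry%';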
On Saturday, May 2, 2015, Mitu Verma wrote:
>
> still this delete operation is not working and not a single row has been
> deleted from the table.
>
>
Because of MVCC, other sessions are not able to see partial deletions... and,
as you alluded to, the data itself is not actually removed by a delete: the
rows are only marked dead, and the space is reclaimed later by VACUUM.
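
So even after the delete finally commits, the table files will not shrink on
their own. Plain VACUUM frees the dead-row space for reuse; only VACUUM FULL
returns it to the OS, at the cost of an exclusive lock. For example:

    VACUUM VERBOSE audittraillogentry;  -- reports removed dead row versions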
Mitu Verma writes:
> 1. Does PostgreSQL have some limitation on the deletion of large amounts of data?
Not as such, but you've not given us any details that would permit
comment.
A reasonably likely bet is that this table is referenced by a foreign key
in some other table, and that other table has no index on the referencing
column, so each deleted row forces a sequential scan of the referencing
table to check that no rows still point at it.
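
You can list every table that references audittraillogentry, and on which
column, straight from the system catalogs, then check that each referencing
column has an index. A sketch, assuming single-column foreign keys:

    SELECT c.conname,
           c.conrelid::regclass AS referencing_table,
           a.attname            AS referencing_column
    FROM pg_constraint c
    JOIN pg_attribute a
      ON a.attrelid = c.conrelid
     AND a.attnum   = ANY (c.conkey)
    WHERE c.contype = 'f'
      AND c.confrelid = 'audittraillogentry'::regclass;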
Hi,
I am facing an issue with the deletion of a huge amount of data.
We have a cron script which is used to delete the last 3 months of data from
one of the tables.
The table is large (8872597 rows, as you can see from the count below) since
it holds the last 3 months of data.
fm_db_Server3=# select count(*) from audittraillogentry;
  count
---------
 8872597
(1 row)