Hi,
I have a big function that includes many truncates on different tables. In
the documentation it is written that truncate creates a new file, assigns
the new filenode to the table, and the old file (the old data of the
table) is deleted at commit.
In order to execute my function I run psql
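A quick way to see this behaviour (a minimal sketch; the table name is
illustrative) is to watch the filenode change around the TRUNCATE:

-- pg_relation_filenode() reports the relation's current on-disk filenode.
CREATE TABLE demo (id int);
SELECT pg_relation_filenode('demo');   -- old filenode

BEGIN;
TRUNCATE demo;
SELECT pg_relation_filenode('demo');   -- new filenode, assigned immediately
COMMIT;                                -- old file is removed at commit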
Mariel Cherkassky wrote:
> Hi,
> I have a big function that includes many truncates on different tables.
> In the documentation it is written that truncate creates a new file
> and assigns the new filenode to the table, and the old file
> (the old data of the table) is deleted at commit.
>
> In
Hi, I think something changed recently in my development environment as
I don't recall deletes being so slow before.
I've created a new dump and restored to a new database, ran VACUUM FULL
ANALYSE and a simple delete takes forever as you can see here:
explain analyze delete from field_values
Rodrigo Rosenfeld Rosas writes:
> Hi, I think something changed recently in my development environment as
> I don't recall deletes being so slow before.
> I've created a new dump and restored to a new database, ran VACUUM FULL
> ANALYSE and a simple delete takes forever as you can see here:
The
On 05-12-2017 14:27, Tom Lane wrote:
Rodrigo Rosenfeld Rosas writes:
Hi, I think something changed recently in my development environment as
I don't recall deletes being so slow before.
I've created a new dump and restored to a new database, ran VACUUM FULL
ANALYSE and a simple delete takes
Rodrigo Rosenfeld Rosas wrote:
> explain analyze delete from field_values where transaction_id=226;
> QUERY PLAN
> ---
> Delete on field_valu
On 05-12-2017 14:43, Alvaro Herrera wrote:
Rodrigo Rosenfeld Rosas wrote:
explain analyze delete from field_values where transaction_id=226;
QUERY PLAN
Alvaro Herrera writes:
> Rodrigo Rosenfeld Rosas wrote:
>> explain analyze delete from field_values where transaction_id=226;
>> QUERY PLAN
>> ---
On 05-12-2017 15:25, Tom Lane wrote:
Alvaro Herrera writes:
Rodrigo Rosenfeld Rosas wrote:
explain analyze delete from field_values where transaction_id=226;
QUERY PLAN
--
Rodrigo Rosenfeld Rosas wrote:
> On 05-12-2017 15:25, Tom Lane wrote:
> > > Normally this is because you lack indexes on the referencing columns, so
> > > the query that scans the table to find the referencing rows is a
> > > seqscan.
> > Actually though ... the weird thing about this is that I
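The usual fix for the case Tom describes is an index on the referencing
(foreign-key) column of the child table, so the per-row FK trigger fired by
the DELETE can use an index scan instead of a seqscan. A minimal sketch,
assuming a hypothetical child table that references field_values:

-- field_values and transaction_id come from the thread; the child table
-- and its column are made up for the example.
CREATE INDEX ON field_value_notes (field_value_id);

-- Then re-run the delete; the time spent in the FK trigger should drop.
EXPLAIN ANALYZE DELETE FROM field_values WHERE transaction_id = 226;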
I get a very different plan chosen when my query is in a lateral subquery vs
standalone -- it doesn't use a key when joining on a table, instead opting
to do a hash join. Here is the query:
select distinct on (sub.entity_id, sub.note_id, sub.series_id)
entity_id, note_id, series_id
from
(
sel
Weird, when I deleted an erroneous index it started picking a reasonable
plan. This now works as expected, for posterity here is the bad plan:
Nested Loop (cost=21281.50..21323812.82 rows=5621000 width=47) (actual
time=171.648..7233.298 rows=85615 loops=1)
-> Function Scan on generate_serie
On 05-12-2017 15:49, Alvaro Herrera wrote:
Rodrigo Rosenfeld Rosas wrote:
On 05-12-2017 15:25, Tom Lane wrote:
Normally this is because you lack indexes on the referencing columns, so
the query that scans the table to find the referencing rows is a
seqscan.
Actually though ... the weird
Argh, so sorry for repeated posts; I'll be very careful to review them
before posting. The "good plan" was the result of me hard coding '2017-03-14
20:59:59.999+00'::timestamp instead of using dates.date inside the lateral
subquery. When I correctly use dates.date, it takes 7000ms instead of
0.3ms. My ques
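In miniature, the comparison looks something like this (a sketch only: apart
from entity_id, note_id, series_id and dates.date, the names are
illustrative, not the original query):

-- Standalone, with the literal bound: the planner sees the constant and
-- can pick an index scan on the join key.
EXPLAIN ANALYZE
SELECT DISTINCT ON (entity_id, note_id, series_id)
       entity_id, note_id, series_id
FROM   notes
WHERE  created_at <= '2017-03-14 20:59:59.999+00'::timestamptz;

-- Correlated through LATERAL on dates.date: the bound is no longer a
-- constant, the row estimates change, and the planner may switch to a
-- hash join instead of using the key.
EXPLAIN ANALYZE
SELECT d.date, sub.*
FROM   dates d
CROSS JOIN LATERAL (
    SELECT DISTINCT ON (entity_id, note_id, series_id)
           entity_id, note_id, series_id
    FROM   notes
    WHERE  created_at <= d.date
) sub;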
On 05-12-2017 16:15, Rodrigo Rosenfeld Rosas wrote:
On 05-12-2017 15:49, Alvaro Herrera wrote:
Rodrigo Rosenfeld Rosas wrote:
On 05-12-2017 15:25, Tom Lane wrote:
Normally this is because you lack indexes on the referencing
columns, so
the query that scans the table to find the refer
Jeff Janes writes:
> On Dec 3, 2017 15:31, "Tom Lane" wrote:
>> Jeff Janes writes:
>>> But I do see that ties within the logical order of the column values are
>>> broken to agree with the physical order. That is wrong, right? Is there
>>> any argument that this is desirable?
>> Uh ... what d
Why not store metadata in pg and the payload in S3?
On Mon, Nov 27, 2017 at 11:58 AM Jean Baro wrote:
> Hi there,
>
> We are creating a new DB which will behave mostly like a file system; I
> mean, there will be no complex queries or joins running in the DB. The idea
> is to grab the WHOLE set of
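In concrete terms the suggestion amounts to something like the following
(all names are illustrative): keep the queryable metadata in PostgreSQL and
only a pointer to the payload, which lives in S3.

CREATE TABLE file_metadata (
    id          bigserial   PRIMARY KEY,
    file_name   text        NOT NULL,
    size_bytes  bigint      NOT NULL,
    created_at  timestamptz NOT NULL DEFAULT now(),
    s3_key      text        NOT NULL UNIQUE  -- e.g. bucket/prefix/object
);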
On Nov 21, 2017, at 12:05 AM, Matthew Hall wrote:
>> Do you really need the artificial primary key, when you already have another
>> column that would be used as the primary key? If you need to use this as a
>> foreign key in another table, then you very well might. But maintaining two
>> unique i
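The trade-off being described, sketched with illustrative names:

-- Natural key only: a single unique index to maintain.
CREATE TABLE users_natural (
    email text PRIMARY KEY,
    name  text NOT NULL
);

-- Surrogate key plus the natural key: convenient for foreign keys in other
-- tables, but every write now maintains two unique indexes.
CREATE TABLE users_surrogate (
    id    bigserial PRIMARY KEY,
    email text NOT NULL UNIQUE,
    name  text NOT NULL
);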
>Buffers: shared hit=544 read=6760 dirtied=4034
>I/O Timings: read=69709.611
You have very slow (or busy) disks; this is not a PostgreSQL issue. Reading
6760 * 8KB in 70 seconds is a very bad result.
For better performance you need better disks, at least RAID10 (not RAID5). Much
more memory in shared
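For scale: 6760 buffers * 8 KB is roughly 53 MB, and reading that in about 70
seconds is well under 1 MB/s, which is what random reads on slow or saturated
storage look like.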