On Tue, 21 May 2019 at 14:04, Walter Smith wrote:
> I'm so sorry -- I meant to give the version, of course. It's 9.6.13.
https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=3ca930fc3
has been applied since then.
It would be good if you could confirm the problem is resolved after a
vac
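The maintenance step being suggested can be sketched as follows (a minimal sketch; "notifications" is the table name taken from the query quoted later in this thread, and the exact VACUUM options used are an assumption):

```sql
-- After a bulk DELETE, dead tuples linger in the heap and indexes until
-- vacuumed; a manual VACUUM (with ANALYZE to refresh planner statistics)
-- cleans them up without waiting for autovacuum to catch up.
VACUUM (VERBOSE, ANALYZE) notifications;
```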
Tom Lane writes:
>I'm assuming your problem
>query involved a join on the indexed column --- whether or not the
>final plan did a mergejoin, the planner would consider this
There's no join -- the query is
SELECT "notifications".*
FROM "notifications"
WHERE "notifications"."person_id" = ? AND
I'm so sorry -- I meant to give the version, of course. It's 9.6.13.
Thanks,
Walter
On Mon, May 20, 2019 at 6:05 PM Tom Lane wrote:
> Walter Smith writes:
> > Today we deleted about 15 million rows in one transaction from this
> table.
> > Immediately afterwards, a particular SELECT started r
On Tue, 21 May 2019 at 08:23, Jeremy Altavilla wrote:
>
> Thanks for the suggestion. I created extended statistics objects for the two
> tables in question. Unfortunately the resulting plan was the same (and had
> the same estimates). It looks like the extended stats discovered a
> potentially
Walter Smith writes:
> Today we deleted about 15 million rows in one transaction from this table.
> Immediately afterwards, a particular SELECT started running very slowly --
> 500 to 3000 ms rather than the usual <1ms.
> We did an EXPLAIN ANALYZE on this select and it was still doing an index
>
On Tue, 21 May 2019 at 12:44, Walter Smith wrote:
>
> We had a mysterious (to us) slowdown today that I'm hoping someone can
> explain just based on PG's principles of operation. It got better by itself
> so it seems like it was "normal" behavior -- I just don't know what behavior
> it was exhi
We had a mysterious (to us) slowdown today that I'm hoping someone can
explain just based on PG's principles of operation. It got better by itself
so it seems like it was "normal" behavior -- I just don't know what
behavior it was exhibiting.
We have a table of user notifications containing about
On Mon, May 20, 2019 at 09:37:34PM +, Deepak Somaiya wrote:
wow this is interesting!
@Tom, Bruce, David - Experts
Any idea why changing the datatype would cause so much degradation - this
is even when the plan remains the same and the data is the same.
Deepak
On Friday, May 17, 2019, 2:36:05 AM P
wow this is interesting!
@Tom, Bruce, David - Experts
Any idea why changing the datatype would cause so much degradation - this
is even when the plan remains the same and the data is the same.
Deepak
On Friday, May 17, 2019, 2:36:05 AM PDT, Naik, Sameer wrote:
Deepak,
I changed the datatype
On Mon, May 20, 2019 at 08:20:33PM +0300, Mariel Cherkassky wrote:
Hey Greg,
Basically my backup was made after the first pg_resetxlog so I was wrong.
Bummer.
However, the customer had a secondary machine that wasn't synced for a
month. I have all the WALs since the moment the second
On Mon, May 20, 2019 at 04:20:45PM +, Bimal wrote:
I had run into the same issue about a year back; luckily I had a standby to
quickly promote. But I wish there were better documentation on how to
handle WAL log fill-up and resetting them.
pg_resetxlog is not a tool to deal with "WAL fil
Thanks for the suggestion. I created extended statistics objects for the
two tables in question. Unfortunately the resulting plan was the same (and
had the same estimates). It looks like the extended stats discovered a
potentially useful correlation on bag: "2, 3 => 1" (owner_id, bag_type_id
=> id)
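The extended-statistics objects described above can be sketched roughly like this (a hedged sketch, not the poster's actual statement: the "bag" table and "bag_dep_stats" name are assumptions based on the column names mentioned, and CREATE STATISTICS requires PostgreSQL 10 or later):

```sql
-- Functional-dependency statistics let the planner learn that
-- (owner_id, bag_type_id) effectively determines id, rather than
-- multiplying the columns' selectivities as if they were independent.
CREATE STATISTICS bag_dep_stats (dependencies)
    ON owner_id, bag_type_id, id
    FROM bag;
ANALYZE bag;  -- extended statistics are only populated by ANALYZE
```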
Hey Greg,
Basically my backup was made after the first pg_resetxlog so I was wrong.
However, the customer had a secondary machine that wasn't synced for a
month. I have all the WALs since the moment the secondary went out of
sync. Once I started it I hoped that it would start recovering the WALs and
f
I had run into the same issue about a year back; luckily I had a standby to
quickly promote. But I wish there were better documentation on how to handle
WAL log fill-up and resetting them.
On Monday, May 20, 2019, 9:08:19 AM PDT, Mariel Cherkassky wrote:
A backup was made after the corru
A backup was made after the corruption appeared but before I tried using
the pg_resetxlog command. Basically I just want to start the database with
the data that is available in the files (I'm OK with losing data that was
in the cache and wasn't written to disk).
My question is how can I continue fr
Yes, I understand that. I'm trying to handle it after the backup that I
have taken.
On Mon, May 20, 2019, 5:49 PM Flo Rance wrote:
> Hi,
>
> First of all, as stated in the wiki, you'll need to do a filesystem level
> copy of the database files and put them on another drive before attempting
> t
Hi,
First of all, as stated in the wiki, you'll need to do a filesystem level
copy of the database files and put them on another drive before attempting
to do anything else !
https://wiki.postgresql.org/wiki/Corruption
regards,
Flo
On Mon, May 20, 2019 at 4:40 PM Mariel Cherkassky <
mariel.cher
Hey,
I'm trying to handle a corruption that one of our customers is facing.
His disk space was full and as a result he decided to run
pg_resetxlog a few times (bad idea...).
When I connected to the machine I saw that the db was down.
When I started the db (service postgresql start) I saw the
18 matches