I increased (and decreased) the stats target for the column and
re-analyzed. Didn't make a difference.
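(For concreteness, the adjustment was along these lines; the table name below is a placeholder and the target value is arbitrary:)

    -- raise the per-column statistics target (default is 100), then refresh stats
    ALTER TABLE events ALTER COLUMN "time" SET STATISTICS 1000;
    ANALYZE events;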
Is it possible that the row estimate is off because of a column other than
time? I looked at the # of events in that time period and 1.8 million is
actually a good estimate. What about the
((strp
On Mon, Aug 23, 2021 at 08:53:15PM -0400, Matt Dupree wrote:
> Is it possible that the row estimate is off because of a column other than
> time?
I would test this by writing the simplest query that reproduces the
mis-estimate.
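For example, something like this (table, column, and dates are placeholders),
comparing the planner's estimate with the actual row count:

    -- estimated rows vs. actual rows for the time filter alone
    EXPLAIN ANALYZE
    SELECT * FROM events
    WHERE "time" >= '2021-08-01' AND "time" < '2021-08-23';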
> I looked at the # of events in that time period and 1.8 million is
Wouldn’t it be easier if we had a to_schema option?
Absolutely, I should not alter the current schema, as it is live 24/7.
Thanks,
Rj

On Monday, August 23, 2021, 06:39:03 AM PDT, Jean-Christophe Boggio wrote:
> The only way to do that is to create a new database, import the data
> there, rename the schema and dump again.
The only way to do that is to create a new database, import the data
there, rename the schema and dump again.
Then import that dump into the target database.
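Roughly, as an untested sketch (the database, schema, and file names here are
placeholders):

    # restore the custom-format dump into a scratch database
    createdb scratch_db
    pg_restore -U postgres -d scratch_db source_schema.backup

    # rename the schema inside the scratch database
    psql -U postgres -d scratch_db -c 'ALTER SCHEMA source_schema RENAME TO new_schema;'

    # dump the renamed schema, then restore it into the real target database
    pg_dump -U postgres -d scratch_db --schema new_schema --format c --file new_schema.backup
    pg_restore -U postgres -d target_db new_schema.backup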
Or maybe (if you can afford to have source_schema unavailable for some
time):
* rename source_schema to tmp_source
* import (that wi
On Mon, 2021-08-23 at 09:44, Nagaraj Raj wrote:
> I know I can alter the schema name after restoring, but the problem is that the name
> already exists and I don't want to touch that existing schema.
> The dump type is "custom".
>
> So effectively I want something like:
> pg_dump -U postgres --schema
Hi,
I know I can alter the schema name after restoring, but the problem is that the name
already exists and I don't want to touch that existing schema. The dump type is
"custom".
So effectively I want something like: pg_dump -U postgres --schema
"source_schema" --format "c" --create --file "source_schema.b