PegoraroF10 wrote:
> For replication purposes only, is there any difference between pg_basebackup
> and a dump for copying data from the master to a slave?
> The docs say that pg_basebackup can be used both for point-in-time
> recovery and as the starting point for a log-shipping or streaming
> replication standby server.
On Fri, Apr 6, 2018 at 2:34 AM, Tom Lane wrote:
> a...@novozymes.com (Adam Sjøgren) writes:
> >> [... still waiting for the result, I will return with what it said
> >> when the server does ...]
>
> > It did eventually finish, with the same result:
>
> Huh. So what we have here,
Hi,
Some time ago, I frequently saw these errors in the logs after autovacuum
ran on some tables (pg 9.6). VACUUM FULL or CLUSTER on those tables showed
the same errors and did not complete (as shown by a table-bloat query).
Then I did a full dump/restore into a new version (10.2) and everything
On Thu, Apr 5, 2018 at 3:39 PM, hmidi slim wrote:
> I want to know the best practices for decomposing a big query that
> contains many joins. Is it recommended to use stored procedures, or is
> there another solution?
>
Views are another solution.
https://www.postgres
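As a hedged sketch (the table and column names below are invented for
illustration, not taken from the thread), one piece of a big multi-join
query can be wrapped in a named view that the outer query then joins:

    -- Hypothetical tables: orders, order_items, customers.
    -- Wrap one logical piece of the big query in a view...
    CREATE VIEW order_totals AS
    SELECT o.id AS order_id,
           o.customer_id,
           sum(oi.quantity * oi.unit_price) AS total
    FROM orders o
    JOIN order_items oi ON oi.order_id = o.id
    GROUP BY o.id, o.customer_id;

    -- ...then the outer query joins the view instead of repeating the joins.
    SELECT c.name, t.total
    FROM customers c
    JOIN order_totals t ON t.customer_id = c.id
    WHERE t.total > 1000;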
Hi,
I want to know the best practices for decomposing a big query that
contains many joins. Is it recommended to use stored procedures, or is
there another solution?
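Besides stored procedures and views, a WITH clause (common table
expression) is another common way to break such a query into readable
pieces. A minimal hedged sketch with invented table names, not the
poster's actual query:

    -- Each CTE isolates one group of joins.
    WITH recent_orders AS (
        SELECT id, customer_id
        FROM orders
        WHERE created_at > now() - interval '30 days'
    ),
    order_totals AS (
        SELECT ro.customer_id, sum(oi.quantity * oi.unit_price) AS total
        FROM recent_orders ro
        JOIN order_items oi ON oi.order_id = ro.id
        GROUP BY ro.customer_id
    )
    SELECT c.name, ot.total
    FROM customers c
    JOIN order_totals ot ON ot.customer_id = c.id;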
a...@novozymes.com (Adam Sjøgren) writes:
>> [... still waiting for the result, I will return with what it said
>> when the server does ...]
> It did eventually finish, with the same result:
Huh. So what we have here, apparently, is that regular MVCC snapshots
think there is exa
Bruce Momjian writes:
> On Wed, Apr 4, 2018 at 08:29:06PM -0400, Bruce Momjian wrote:
>
>> On Wed, Apr 4, 2018 at 07:13:36PM -0500, Jerry Sievers wrote:
>> > Bruce Momjian writes:
>> > > Is it possible that pg_upgrade used 50M xids while upgrading?
>> >
>> > Hi Bruce.
>> >
>> > Don't think s
On Wed, Apr 4, 2018 at 08:29:06PM -0400, Bruce Momjian wrote:
> On Wed, Apr 4, 2018 at 07:13:36PM -0500, Jerry Sievers wrote:
> > Bruce Momjian writes:
> > > Is it possible that pg_upgrade used 50M xids while upgrading?
> >
> > Hi Bruce.
> >
> > Don't think so, as I did just snap the safety sn
Adam writes:
> efamroot@kat efam=# explain select chunk_id, chunk_seq, ctid, xmin, xmax,
> length(chunk_data) from pg_toast.pg_toast_10919630 where chunk_id =
> 1698936148 order by 1,2;
> QUERY PLAN
Jerry Sievers writes:
> Bruce Momjian writes:
>
>> On Wed, Apr 4, 2018 at 07:13:36PM -0500, Jerry Sievers wrote:
>>
>>> Bruce Momjian writes:
>>> > Is it possible that pg_upgrade used 50M xids while upgrading?
>>>
>>> Hi Bruce.
>>>
>>> Don't think so, as I did just snap the safety snap and r
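As a hedged aside (standard catalog queries, not something quoted above),
the amount of xid space consumed around an upgrade can be gauged like this:

    -- Age of the oldest unfrozen xid per database; values climbing toward
    -- autovacuum_freeze_max_age will trigger anti-wraparound vacuums.
    SELECT datname, age(datfrozenxid) AS xid_age
    FROM pg_database
    ORDER BY xid_age DESC;

    -- Current transaction id counter; comparing its value before and
    -- after an operation shows roughly how many xids were consumed.
    SELECT txid_current();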
Jorge Daniel writes:
> I have a problem with a query that grabs a bunch of rows and then does an
> aggregate operation; at that point it gets killed by the OOM killer. I don't
> know why: the engine starts using temp files as expected, and then tries to
> work in memory and gets killed.
> SELECT
Hi Guys:
I have a problem with a query that grabs a bunch of rows and then does an
aggregate operation; at that point it gets killed by the OOM killer. I don't
know why: the engine starts using temp files as expected, and then tries to
work in memory and gets killed.
I've tested it in a small environment
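As a hedged sketch of a common first step (the table and column names are
placeholders, not the query from this mail), lowering work_mem for the
session and re-checking the plan shows whether the aggregate was trying to
build a huge in-memory hash:

    -- A smaller work_mem makes the planner prefer a sort-based
    -- GroupAggregate (which spills to temp files) over an in-memory
    -- HashAggregate that can grow until the OOM killer steps in.
    SET work_mem = '64MB';

    EXPLAIN (ANALYZE, BUFFERS)
    SELECT customer_id, count(*), sum(amount)
    FROM big_table
    GROUP BY customer_id;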
Tom writes:
>> And when I run the suggested query, I get:
>
>> efamroot@kat efam=# select chunk_id, chunk_seq, ctid, xmin, xmax,
>> length(chunk_data) from pg_toast.pg_toast_10919630 where chunk_id =
>> 1698936148 order by 1,2;
>> chunk_id | chunk_seq | ctid | xmin | xmax |
For replication purposes only, is there any difference between pg_basebackup
and a dump for copying data from the master to a slave?
The docs say that pg_basebackup can be used both for point-in-time
recovery and as the starting point for log-shipping or streaming
replication standby servers.
We are
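For what it's worth, a pg_dump restore produces a logically equivalent but
physically different cluster, so it cannot serve as the base of a physical
(log-shipping or streaming) standby; pg_basebackup copies the data
directory at the block level, which is what physical replication requires.
Whichever way the standby is built, a hedged sanity check using the
standard monitoring views (not something quoted above) is:

    -- On the primary: one row per connected standby.
    SELECT client_addr, state, sync_state
    FROM pg_stat_replication;

    -- On the standby: returns true while it is replaying WAL.
    SELECT pg_is_in_recovery();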
a...@novozymes.com (Adam Sjøgren) writes:
> Here's a statement which currently gives an unexpected chunk error:
> efamroot@kat efam=# SELECT * FROM efam.sendreference WHERE id = '189909908';
> ERROR: unexpected chunk number 0 (expected 1) for toast value 1698936148
> in pg_toast_10919630
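Elsewhere in the thread the chunks are inspected directly with the query
below; as a hedged aside, if the chunks turn out to exist but are
unreachable through the toast index, reindexing the toast table (names
taken from the error message above) is a commonly tried first remedy:

    -- Look at the raw chunks for the failing toast value.
    SELECT chunk_id, chunk_seq, ctid, xmin, xmax, length(chunk_data)
    FROM pg_toast.pg_toast_10919630
    WHERE chunk_id = 1698936148
    ORDER BY 1, 2;

    -- Rebuild the toast table's index in case the lookup path is broken.
    REINDEX TABLE pg_toast.pg_toast_10919630;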
Adam writes:
> Here's a statement which currently gives an unexpected chunk error:
>
> efamroot@kat efam=# SELECT * FROM efam.sendreference WHERE id = '189909908';
> ERROR: unexpected chunk number 0 (expected 1) for toast value 1698936148
> in pg_toast_10919630
>
> And when I run the suggested query, I get:
Tom writes:
> a...@novozymes.com (Adam Sjøgren) writes:
>> Also, the error we are getting is now: "unexpected chunk number 2
>> (expected 3) for toast value 1498303849 in pg_toast_10919630", where
>> previously we've only seen "unexpected chunk number 0 (expected 1)".
>
>> We are
Can you share the full query, how many rows each table has, how often the
tables change, and the full EXPLAIN output?
On Thu, Apr 5, 2018 at 8:01 AM, wrote:
> Did you look at this approach using dblink already?
>
> https://gist.github.com/mjgleaso/8031067
>
> In your situation, you will have to modify the example
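For reference, a minimal hedged sketch of the dblink approach (the
connection string, table, and column list are placeholders, not values
from the linked gist):

    -- dblink ships as a contrib extension.
    CREATE EXTENSION IF NOT EXISTS dblink;

    -- Run a query on the remote server and expose the result locally;
    -- the caller must spell out the result's column types.
    SELECT *
    FROM dblink('host=remotehost dbname=sourcedb user=app',
                'SELECT id, payload FROM source_table')
         AS t(id integer, payload text);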