Thanks a lot.
I just want to mention the situation where I was getting the
speculative-token-related inconsistency.
ItemPointer in backup page from master:
LOG: ItemPointer BlockNumber: 1 OffsetNumber:65534 Speculative: true
CONTEXT: xlog redo at 0/127F4A48 for Heap/INSERT+INIT: off 1
ItemPoin
On Fri, Aug 26, 2016 at 11:37 AM, Masahiko Sawada
wrote:
> On Fri, Aug 26, 2016 at 3:03 PM, Ashutosh Bapat
> wrote:
> >
> >
> > On Fri, Aug 26, 2016 at 11:22 AM, Masahiko Sawada >
> > wrote:
> >>
> >> On Fri, Aug 26, 2016 at 1:32 PM, Vinayak Pokale
> >> wrote:
> >> > Hi All,
> >> >
> >> > Ashu
On Fri, Aug 26, 2016 at 3:03 PM, Ashutosh Bapat
wrote:
>
>
> On Fri, Aug 26, 2016 at 11:22 AM, Masahiko Sawada
> wrote:
>>
>> On Fri, Aug 26, 2016 at 1:32 PM, Vinayak Pokale
>> wrote:
>> > Hi All,
>> >
>> > Ashutosh proposed the feature 2PC for FDW for achieving atomic commits
>> > across multip
On Fri, Aug 26, 2016 at 11:22 AM, Masahiko Sawada
wrote:
> On Fri, Aug 26, 2016 at 1:32 PM, Vinayak Pokale
> wrote:
> > Hi All,
> >
> > Ashutosh proposed the feature 2PC for FDW for achieving atomic commits
> > across multiple foreign servers.
> > If a transaction makes changes to more than two f
On Fri, 26 Aug 2016 10:10:33 +0530
Mithun Cy wrote:
> On Thu, Mar 17, 2016 at 4:47 AM, David Steele
> wrote:
> >Since there has been no response from the author I have marked this
> >patch
> "returned with feedback". Please feel free >to resubmit for 9.7!
> I have started to work on this patch,
On Thu, Mar 17, 2016 at 4:47 AM, David Steele wrote:
>Since there has been no response from the author I have marked this patch
"returned with feedback". Please feel free to resubmit for 9.7!
I have started to work on this patch, and tried to fix some of the issues
discussed above. The most rece
On 2016-08-26 13:07:09 +0900, Michael Paquier wrote:
> On Fri, Aug 26, 2016 at 12:54 PM, Amit Kapila wrote:
> > On Fri, Aug 26, 2016 at 9:04 AM, Michael Paquier
> > wrote:
> >> On Fri, Aug 26, 2016 at 12:25 PM, Amit Kapila
> >> wrote:
> >>> If we change the default to 64MB, then I think it won'
Hi All,
Ashutosh proposed the feature 2PC for FDW for achieving atomic commits
across multiple foreign servers.
If a transaction makes changes to more than two foreign servers, the current
implementation in postgres_fdw doesn't make sure that either all of them
commit or all of them roll back their c
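(Not part of the original mail; a minimal sketch with hypothetical server and table names.) The kind of distributed write this proposal targets is a single local transaction that modifies tables on two postgres_fdw servers; without two-phase commit, the remote commits are issued one by one and are not guaranteed to be atomic as a group:

CREATE EXTENSION IF NOT EXISTS postgres_fdw;
CREATE SERVER shard1 FOREIGN DATA WRAPPER postgres_fdw OPTIONS (host 'host1', dbname 'db1');
CREATE SERVER shard2 FOREIGN DATA WRAPPER postgres_fdw OPTIONS (host 'host2', dbname 'db2');
CREATE USER MAPPING FOR CURRENT_USER SERVER shard1;
CREATE USER MAPPING FOR CURRENT_USER SERVER shard2;
CREATE FOREIGN TABLE accounts_1 (id int, balance int) SERVER shard1;
CREATE FOREIGN TABLE accounts_2 (id int, balance int) SERVER shard2;

BEGIN;
UPDATE accounts_1 SET balance = balance - 100 WHERE id = 1;  -- executed on shard1
UPDATE accounts_2 SET balance = balance + 100 WHERE id = 2;  -- executed on shard2
COMMIT;  -- two separate remote COMMITs; a crash in between can leave only one of them applied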
* Bruce Momjian (br...@momjian.us) wrote:
> On Fri, Aug 26, 2016 at 11:39:29AM +0900, Michael Paquier wrote:
> > Now, one of the things discussed as well was that we may want to still
> > keep pg_xlog, and soft-link to pg_journal or whatever-the-new-name is
> > to not break the existing tools. Pers
2016-08-24 15:42 GMT+02:00 Tom Lane :
> Peter Eisentraut writes:
> > On 8/22/16 1:52 PM, Pavel Stehule wrote:
> >> If I understand to purpose of this patch - it is compromise - PL source
> >> is removed from table, but it is printed in result.
>
> > What does it do if you are displaying more than
On Fri, Aug 26, 2016 at 12:54 PM, Amit Kapila wrote:
> On Fri, Aug 26, 2016 at 9:04 AM, Michael Paquier
> wrote:
>> On Fri, Aug 26, 2016 at 12:25 PM, Amit Kapila
>> wrote:
>>> If we change the default to 64MB, then I think it won't allow old
>>> databases to be used as-is because we store it in pg_
On Fri, Aug 26, 2016 at 11:55 AM, Bruce Momjian wrote:
> On Fri, Aug 26, 2016 at 11:39:29AM +0900, Michael Paquier wrote:
>> Now, one of the things discussed as well was that we may want to still
>> keep pg_xlog, and soft-link to pg_journal or whatever-the-new-name is
>> to not break the existing
On Fri, Aug 26, 2016 at 9:04 AM, Michael Paquier
wrote:
> On Fri, Aug 26, 2016 at 12:25 PM, Amit Kapila wrote:
>> If we change the default to 64MB, then I think it won't allow old databases
>> to be used as-is because we store it in pg_control (I think one
>> will get the below error [1] for old database
Gavin Flower wrote:
> On 26/08/16 05:43, Josh Berkus wrote:
> >The one thing I'd be worried about with the increase in size is folks
> >using PostgreSQL for very small databases. If your database is only
> >30MB or so in size, the increase in size of the WAL will be pretty
> >significant (+144MB
On Fri, Aug 26, 2016 at 12:25 PM, Amit Kapila wrote:
> If we change the default to 64MB, then I think it won't allow old databases
> to be used as-is because we store it in pg_control (I think one
> will get the below error [1] for old databases, if we just change the default
> and don't do anything else). D
On Thu, Aug 25, 2016 at 10:29 PM, Robert Haas wrote:
> On Thu, Aug 25, 2016 at 11:21 AM, Simon Riggs wrote:
>> On 25 August 2016 at 02:31, Robert Haas wrote:
>>> Furthermore, there is an enforced, synchronous fsync at the end of
>>> every segment, which actually does hurt performance on write-he
From: pgsql-hackers-ow...@postgresql.org
[mailto:pgsql-hackers-ow...@postgresql.org] On Behalf Of Michael Paquier
> 9.3 has addressed that by allowing streaming standbys to perform timeline
> jumps via the replication protocol. Doesn't this problem enter in this area?
IIUC, that new feature enable
On Thu, Aug 25, 2016 at 7:55 PM, Yury Zhuravlev
wrote:
> Hello hackers.
>
> I have a small question. While working on an incremental backup I noticed a
> strange thing.
> A newly created index contains the invalid LSN (0/0).
> Example:
> postgres=# select lsn from page_header(get_raw_page('test_a
On Fri, Aug 26, 2016 at 11:39:29AM +0900, Michael Paquier wrote:
> Now, one of the things discussed as well was that we may want to still
> keep pg_xlog, and soft-link to pg_journal or whatever-the-new-name is
> to not break the existing tools. Personally, I'd prefer a hard break.
> That would not
On Fri, Aug 26, 2016 at 11:33 AM, Tsunakawa, Takayuki
wrote:
> Our customer hit a problem with cascading replication, and we found the cause.
> They are using the latest PostgreSQL 9.2.18. The bug seems to have been
> fixed in 9.4 and higher during the big modification of xlog.c, but it's not
>
Hi all,
I am relaunching $subject as 10 development will begin soon. As far as
I know, there is agreement that we can do something here. Among the
different proposals I have found:
- pg_clog renamed to pg_commit_status, pg_xact or pg_commit
- pg_xlog renamed to pg_xjournal, pg_wal or pg_journal
A
Hello,
Our customer hit a problem with cascading replication, and we found the cause.
They are using the latest PostgreSQL 9.2.18. The bug seems to have been fixed
in 9.4 and higher during the big modification of xlog.c, but it's not reflected
in older releases.
The attached patch is for 9.2.1
* Venkata B Nagothi (nag1...@gmail.com) wrote:
> On Thu, Aug 25, 2016 at 10:59 PM, Stephen Frost wrote:
> > I'm not a fan of the "recovery_target" option, particularly as it's only
> > got one value even though it can mean two things (either "immediate" or
> > "not set"), but we need a complete so
On Fri, Aug 26, 2016 at 4:59 AM, neha khatri wrote:
> Hello,
>
> I noticed that a small optimization is possible in the flow of wait stat
> reporting for LWLocks when pgstat_track_activities is disabled.
> If the check for pgstat_track_activities is done before invoking
> LWLockReportWa
On Thu, Aug 25, 2016 at 10:59 PM, Stephen Frost wrote:
> * Venkata B Nagothi (nag1...@gmail.com) wrote:
> > *Query 1*
> >
> > What about the existing parameter called "recovery_target" which accepts
> > only one value "immediate", which will be similar to the "promote" option
> > with the to-be-i
Michael,
* Michael Paquier (michael.paqu...@gmail.com) wrote:
> On Thu, Aug 25, 2016 at 10:25 PM, Bruce Momjian wrote:
> > On Wed, Aug 24, 2016 at 10:40:06PM -0300, Claudio Freire wrote:
> >> > time instead of requiring a recompile; we probably don't save any
> >> > significant number of cycles b
On Thu, Aug 25, 2016 at 10:25 PM, Bruce Momjian wrote:
> On Wed, Aug 24, 2016 at 10:40:06PM -0300, Claudio Freire wrote:
>> > time instead of requiring a recompile; we probably don't save any
>> > significant number of cycles by compiling this into the server.
>>
>> FWIW, +1
>>
>> We're already hu
Hello,
I noticed that a small optimization is possible in the flow of wait stat
reporting for LWLocks when pgstat_track_activities is disabled.
If the check for pgstat_track_activities is done before invoking
LWLockReportWaitStart() instead of inside pgstat_report_wait_start(),
it ca
On 26/08/16 05:43, Josh Berkus wrote:
On 08/25/2016 01:12 PM, Robert Haas wrote:
I agree that #4 is best. I'm not sure it's worth the cost. I'm not worried
at all about the risk of master/slave sync thing, per previous statement.
But if it does have performance implications, per Andres suggesti
Kevin Grittner wrote:
> On Thu, Aug 25, 2016 at 2:56 PM, Alvaro Herrera
> wrote:
>
> > I'm wondering about the TestForOldSnapshot call in scanPendingInsert().
> > Why do we apply it to the metapage buffer (line 1717 in master)?
>
> If there is any chance that GinPageGetMeta(page)->head could hav
On Thu, Aug 25, 2016 at 2:56 PM, Alvaro Herrera
wrote:
> I'm wondering about the TestForOldSnapshot call in scanPendingInsert().
> Why do we apply it to the metapage buffer (line 1717 in master)?
If there is any chance that GinPageGetMeta(page)->head could have
changed from a valid block number
On Wed, Aug 24, 2016 at 6:31 PM, Robert Haas wrote:
> 3. archive_timeout is no longer a frequently used option. Obviously,
> if you are frequently archiving partial segments, you don't want the
> segment size to be too large, because if it is, each forced segment
> switch potentially wastes a lar
This is a chance to add a default configuration for the Greek language if
somebody with good knowledge could follow this development.
Oleg
-- Forwarded message --
From: Oleg Smirnov
Date: Thu, Aug 25, 2016 at 5:26 PM
Subject: [Snowball-discuss] Greek stemmer
To: "snowball-discu."
Hi
On Thu, Aug 25, 2016 at 2:56 PM, Alvaro Herrera
wrote:
> Kevin Grittner wrote:
>> Add the "snapshot too old" feature
>
>> src/backend/access/gin/ginbtree.c | 9 +-
>> src/backend/access/gin/gindatapage.c | 7 +-
>> src/backend/access/gin/ginget.c
Robert Haas wrote:
> On Thu, Aug 25, 2016 at 3:21 PM, Alvaro Herrera
> wrote:
> > Does it work to set the minimum to one WAL segment, i.e. 64MB? guc.c
> > has a hardcoded minimum of 2, but I couldn't find an explanation for it.
>
> Well, I think that when you overrun the end of one segment, you
On Thu, Aug 25, 2016 at 12:59 PM, Peter Geoghegan wrote:
> Maybe we should change the ordering of those IndexInfo structs to
> something more suitable, but it must be immutable (it cannot hinge
> upon the details of one particular DML statement).
I meant that it must be stable (not immutable), in
On Thu, Aug 25, 2016 at 12:16 PM, Tom Lane wrote:
> I'm not following. The only insertions happening in this test case
> are coming from various clients doing the same INSERT ON CONFLICT UPDATE.
> If one of them has successfully inserted "42" into the arbiter index,
> how is it that other ones ar
Kevin Grittner wrote:
> Add the "snapshot too old" feature
> src/backend/access/gin/ginbtree.c | 9 +-
> src/backend/access/gin/gindatapage.c | 7 +-
> src/backend/access/gin/ginget.c| 22 +-
> src/backend/access/gin/gininsert.c
On Thu, Aug 25, 2016 at 3:21 PM, Alvaro Herrera
wrote:
> Yeah, and it's also related to the point Josh Berkus was making about
> clusters with little activity.
Right.
> Does it work to set the minimum to one WAL segment, i.e. 64MB? guc.c
> has a hardcoded minimum of 2, but I couldn't find an ex
On Thu, Aug 25, 2016 at 04:21:33PM +0100, Simon Riggs wrote:
> If we do have the pain of change, should we also consider making WAL
> files variable length? What do we gain by having the files all the
> same size? ISTM better to have WAL files that vary in length up to 1GB
> in size.
>
> (This is
On 08/25/2016 10:08 PM, Peter Geoghegan wrote:
On Thu, Aug 25, 2016 at 11:49 AM, Tom Lane wrote:
I think the point is that given the way he's set up the test case,
there should be no duplicate violation in the plain unique index
unless there is one in the arbiter index. So assuming that INSERT
Robert Haas wrote:
> On Thu, Aug 25, 2016 at 2:49 PM, Alvaro Herrera
> wrote:
> > I think the relevant one for that case is the minimum, though:
> >
> > #min_wal_size = 80MB
> >
> > which corresponds to 5 segments. I suppose the default value for this
> > minimum would change to some multiple of
Peter Geoghegan writes:
> (I don't think that it matters which order anything is tested
> in, though, because not finding a dup value in the arbiter index does
> not guarantee that there won't be one in the other index. There is no
> locking when no conflict is initially found, and so no guarantee
On Thu, Aug 25, 2016 at 11:49 AM, Tom Lane wrote:
> I think the point is that given the way he's set up the test case,
> there should be no duplicate violation in the plain unique index
> unless there is one in the arbiter index. So assuming that INSERT
> tests the arbiter indexes first, there sh
On Thu, Aug 25, 2016 at 2:49 PM, Alvaro Herrera
wrote:
> Robert Haas wrote:
>> On Thu, Aug 25, 2016 at 1:43 PM, Josh Berkus wrote:
>
>> > The one thing I'd be worried about with the increase in size is folks
>> > using PostgreSQL for very small databases. If your database is only
>> > 30MB or so
BTW, this example also exposes another problem, in that the report is
>> ERREUR: unknown error severity
>> CONTEXT: parallel worker
Since, in fact, this error was thrown from leader code, that CONTEXT
report is outright misleading. It's easy to figure that out in this
particular case because t
Robert Haas wrote:
> On Thu, Aug 25, 2016 at 1:43 PM, Josh Berkus wrote:
> > The one thing I'd be worried about with the increase in size is folks
> > using PostgreSQL for very small databases. If your database is only
> > 30MB or so in size, the increase in size of the WAL will be pretty
> > si
Peter Geoghegan writes:
> On Thu, Aug 25, 2016 at 7:12 AM, Ivan Frolkov wrote:
>> So, if we have a primary key and a unique constraint on a table then upsert will
>> not work as would be expected.
> Why is this unexpected?
> You only take the alternative path (UPDATE) in the event of a would-be
> dupl
On Thu, Aug 25, 2016 at 1:50 PM, Andres Freund wrote:
> On 2016-08-25 13:45:29 -0400, Robert Haas wrote:
>> I think you may be forgetting that "the base 3 WAL segments" is no
>> longer the default configuration. checkpoint_segments=3 is history;
>> we now have max_wal_size=1GB, which is a maximum
On 2016-08-25 13:45:29 -0400, Robert Haas wrote:
> I think you may be forgetting that "the base 3 WAL segments" is no
> longer the default configuration. checkpoint_segments=3 is history;
> we now have max_wal_size=1GB, which is a maximum of 64 WAL segments,
> not 3.
Well, but min_wal_size still
On Thu, Aug 25, 2016 at 7:45 PM, Robert Haas wrote:
> On Thu, Aug 25, 2016 at 1:43 PM, Josh Berkus wrote:
> > On 08/25/2016 01:12 PM, Robert Haas wrote:
> >>> I agree that #4 is best. I'm not sure it's worth the cost. I'm not
> worried
> >>> > at all about the risk of master/slave sync thing, pe
On Thu, Aug 25, 2016 at 1:43 PM, Josh Berkus wrote:
> On 08/25/2016 01:12 PM, Robert Haas wrote:
>>> I agree that #4 is best. I'm not sure it's worth the cost. I'm not worried
>>> > at all about the risk of master/slave sync thing, per previous statement.
>>> > But if it does have performance impl
On Thu, Aug 25, 2016 at 1:19 PM, Tom Lane wrote:
>> It's probably best to try
>> to hack things somehow so that the worker localizes nothing and the
>> leader localizes everything.
>
> No way that's gonna work. For example, the expected report in
> English for the example above is
> ERROR
On 08/25/2016 01:12 PM, Robert Haas wrote:
>> I agree that #4 is best. I'm not sure it's worth the cost. I'm not worried
>> > at all about the risk of master/slave sync thing, per previous statement.
>> > But if it does have performance implications, per Andres suggestion, then
>> > making it confi
Robert Haas writes:
> On Thu, Aug 25, 2016 at 11:43 AM, Tom Lane wrote:
>> Ooops. Indeed, that is broken:
>> postgres=# select stringu1::int2 from tenk1 where unique1 = 1;
>> ERREUR: unknown error severity
>> CONTEXT: parallel worker
> Uggh. Obviously, I failed to realize that those strings
On Thu, Aug 25, 2016 at 1:13 PM, Robert Haas wrote:
> On Thu, Aug 25, 2016 at 10:25 AM, Yury Zhuravlev
> wrote:
>> I have a small question. While working on an incremental backup I noticed a
>> strange thing.
>> A newly created index contains the invalid LSN (0/0).
>> Example:
>> postgres=# sele
On Thu, Aug 25, 2016 at 1:05 PM, Magnus Hagander wrote:
>> 1. Do nothing. So far, I don't see anybody arguing for that.
>>
>> 2. Change the default to 64MB and call it good. This idea seems to
>> have considerable support.
>>
>> 3. Allow initdb-time configurability but keep the default at 16MB.
On Thu, Aug 25, 2016 at 10:25 AM, Yury Zhuravlev
wrote:
> I have a small question. While working on an incremental backup I noticed a
> strange thing.
> A newly created index contains the invalid LSN (0/0).
> Example:
> postgres=# select lsn from page_header(get_raw_page('test_a_idx2',0));
> lsn
On Thu, Aug 25, 2016 at 7:12 AM, Ivan Frolkov wrote:
> So, if we have a primary key and a unique constraint on a table then upsert will
> not work as would be expected.
Why is this unexpected?
You only take the alternative path (UPDATE) in the event of a would-be
duplicate violation. You can't upsert w
On Thu, Aug 25, 2016 at 6:59 PM, Robert Haas wrote:
> On Thu, Aug 25, 2016 at 11:21 AM, Simon Riggs
> wrote:
> > On 25 August 2016 at 02:31, Robert Haas wrote:
> >> Furthermore, there is an enforced, synchronous fsync at the end of
> >> every segment, which actually does hurt performance on wri
On Thu, Aug 25, 2016 at 11:43 AM, Tom Lane wrote:
> Ooops. Indeed, that is broken:
>
> postgres=# select 1/0; -- using French locale
> ERREUR: division par zéro
> postgres=# set force_parallel_mode=1;
> SET
> postgres=# select stringu1::int2 from tenk1 where unique1 = 1;
> ERREUR: unknown err
Kuntal Ghosh wrote:
> 4. For the speculative heap tuple insert operation, there was an
> inconsistency in the t_ctid value. So, I've modified the t_ctid value (in
> the backup page) to the current block number and offset number. Need
> suggestions!
In speculative insertions, t_ctid is used to store the speculative
On Thu, Aug 25, 2016 at 11:21 AM, Simon Riggs wrote:
> On 25 August 2016 at 02:31, Robert Haas wrote:
>> Furthermore, there is an enforced, synchronous fsync at the end of
>> every segment, which actually does hurt performance on write-heavy
>> workloads.[2] Of course, if that were the only reaso
On Thu, Aug 25, 2016 at 12:21 PM, Simon Riggs wrote:
> On 25 August 2016 at 02:31, Robert Haas wrote:
>
>> Furthermore, there is an enforced, synchronous fsync at the end of
>> every segment, which actually does hurt performance on write-heavy
>> workloads.[2] Of course, if that were the only rea
Hi,
I've added the feature in CP app. Following are the testing details:
1. In master, I've enabled following configurations:
* wal_level = replica
* max_wal_senders = 3
* wal_keep_segments = 4000
* hot_standby = on
* wal_consistency_mask = 511 /* Enable consistency check mask bit*/
2. In slav
Jakob Egger writes:
> My PostgreSQL client checks the PG_DIAG_SEVERITY error field to determine the
> error level.
> However, I have now learned that this field is localized. This means that a
> server configured with --enable-nls might for example return the string
> ERREUR instead of ERROR.
Hello Peter,
My 0.02€ (not $0.02:-) comments on this patch:
Patch applies and "make check" is ok. I see no issue with the code.
A few comments below.
The regression tests are clearer and commented; that is an improvement.
While you are at it, maybe you could consider adding tests for more
featu
* Robert Haas (robertmh...@gmail.com) wrote:
> On Thu, Aug 25, 2016 at 10:34 AM, Stephen Frost wrote:
> >> My point wasn't really that archive_command should actually be simple.
> >> My point was that if it's being run multiple times per second, there
> >> are additional challenges that wouldn't a
Hi,
My PostgreSQL client checks the PG_DIAG_SEVERITY error field to determine the
error level.
However, I have now learned that this field is localized. This means that a
server configured with --enable-nls might for example return the string ERREUR
instead of ERROR.
So if I want to determine
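(Not part of the original mail; a minimal sketch of the behaviour being described, assuming a server built with --enable-nls and a French locale installed.) The severity tag follows the server-side message locale, which is what breaks clients that compare PG_DIAG_SEVERITY against the English strings:

-- lc_messages is a superuser-settable GUC
SET lc_messages = 'fr_FR.UTF-8';
SELECT 1/0;
-- ERREUR:  division par zéro
SET lc_messages = 'C';
SELECT 1/0;
-- ERROR:  division by zero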
On 25 August 2016 at 02:31, Robert Haas wrote:
> Furthermore, there is an enforced, synchronous fsync at the end of
> every segment, which actually does hurt performance on write-heavy
> workloads.[2] Of course, if that were the only reason to consider
> increasing the segment size, it would prob
On Thu, Aug 25, 2016 at 9:34 AM, Magnus Hagander wrote:
> Because it comes with the cluster during replication. I think it's more
> likely that you accidentally end up with two instances compiled with
> different values than that you get an issue from this.
I hadn't thought about it that way, but
On Thu, Aug 25, 2016 at 10:39 AM, Bruce Momjian wrote:
> Another issue is that many users are coming from database products that
> have significant performance hits in switching WAL files so they might
> be tempted to set very high segment sizes in inappropriate cases.
Well, we have some hit ther
On Wed, Aug 24, 2016 at 08:52:20PM -0700, Andres Freund wrote:
> On 2016-08-24 23:26:51 -0400, Robert Haas wrote:
> > On Wed, Aug 24, 2016 at 10:54 PM, Andres Freund wrote:
> > > and I'm also rather doubtful that it's actually without overhead.
> >
> > Really? Where do you think the overhead wou
On Thu, Aug 25, 2016 at 10:34 AM, Stephen Frost wrote:
>> My point wasn't really that archive_command should actually be simple.
>> My point was that if it's being run multiple times per second, there
>> are additional challenges that wouldn't arise if it were being run
>> only every 5-10 seconds.
Robert,
* Robert Haas (robertmh...@gmail.com) wrote:
> On Thu, Aug 25, 2016 at 9:48 AM, Stephen Frost wrote:
> > * Robert Haas (robertmh...@gmail.com) wrote:
> >> Meanwhile, we'll significantly help people who are currently
> >> generating painfully large but not totally insane numbers of WAL
> >
Hello hackers.
I have a small question. While working on an incremental backup I noticed a
strange thing.
A newly created index contains the invalid LSN (0/0).
Example:
postgres=# select lsn from page_header(get_raw_page('test_a_idx2',0));
 lsn
-----
 0/0
(1 row)
Can you explain to me why?
Than
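(Not part of the original mail; a minimal reproduction sketch with hypothetical object names, using the pageinspect extension.) This is the kind of check being described, inspecting the page-header LSN of a freshly built index:

CREATE EXTENSION IF NOT EXISTS pageinspect;
CREATE TABLE test_a (a int);
INSERT INTO test_a SELECT generate_series(1, 1000);
CREATE INDEX test_a_idx2 ON test_a (a);
-- the LSN reported here can be 0/0, as observed above
SELECT lsn FROM page_header(get_raw_page('test_a_idx2', 0));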
On Thu, Aug 25, 2016 at 9:48 AM, Stephen Frost wrote:
> * Robert Haas (robertmh...@gmail.com) wrote:
>> Meanwhile, we'll significantly help people who are currently
>> generating painfully large but not totally insane numbers of WAL
>> segments. Someone who is currently generating 32,768 WAL segm
Suppose we have some table
create table cnt(
usr_id int primary key,
usr_doc_ref text not null,
cnt int,
sum int
);
And we are going to run some INSERT ON CONFLICT UPDATE on it (pgbench script):
\setrandom id 1 50
insert into cnt as c(usr_id,usr_doc_ref, cnt) values(:id, '#'||:id, 1) on
con
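(The pgbench script above is cut off by the archive preview; the following is a hypothetical completion, not necessarily the author's exact statement.) A plausible INSERT ... ON CONFLICT DO UPDATE arbitrated on the primary key would look like:

\setrandom id 1 50
-- hypothetical completion of the truncated statement above
insert into cnt as c (usr_id, usr_doc_ref, cnt)
values (:id, '#' || :id, 1)
on conflict (usr_id) do update
set cnt = coalesce(c.cnt, 0) + 1;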
Robert,
* Robert Haas (robertmh...@gmail.com) wrote:
> Meanwhile, we'll significantly help people who are currently
> generating painfully large but not totally insane numbers of WAL
> segments. Someone who is currently generating 32,768 WAL segments per
> day - about one every 2.6 seconds - will
On Thu, Aug 25, 2016 at 5:32 AM, Tom Lane wrote:
> Robert Haas writes:
> > On Wed, Aug 24, 2016 at 10:33 PM, Tom Lane wrote:
> >> ... but I think this is just folly. You'd have to do major amounts
> >> of work to keep, eg, slave servers on the same page as the master
> >> about what the segmen
On Thu, Aug 25, 2016 at 1:04 AM, Wolfgang Wilhelm
wrote:
> What would happen if there's a database on a server with initdb (or
> whatever) parameter -with-wal-size=64MB and later someone decides to make it
> the master in a replicated system and has a slave without that parameter?
> Would the slav
I wrote:
> There is something rotten in the state of Denmark. Here are four recent
> runs that failed with unexpected "cache lookup failed for type "
> errors:
> http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=grouse&dt=2016-08-16%2008%3A39%3A03
> http://buildfarm.postgresql.org/cgi-bi
On Wed, Aug 24, 2016 at 10:40:06PM -0300, Claudio Freire wrote:
> > time instead of requiring a recompile; we probably don't save any
> > significant number of cycles by compiling this into the server.
>
> FWIW, +1
>
> We're already hurt by the small segments due to a similar phenomenon
> as the
Amit Kapila wrote:
> On Wed, Aug 24, 2016 at 11:46 PM, Alvaro Herrera
> wrote:
> > Can you split the new xlog-related stuff to a new file, say hash_xlog.h,
> > instead of cramming it in hash.h? Removing the existing #include
> > "xlogreader.h" from hash.h would be nice. I volunteer for pushing
Github tracker: https://github.com/2ndQuadrant/pglogical/issues
You can also send an email to pglogical-l...@2ndquadrant.com.
- Umair
On Thu, Aug 25, 2016 at 7:01 AM, Muhammed Roshan
wrote:
> Hi Craig,
>
> Could you please let me know how to access the github tracker?
>
> Regards,
> Muhammed R
Hi Craig,
Could you please let me know how to access the github tracker?
Regards,
Muhammed Roshan
On Thu, Aug 25, 2016 at 9:25 AM, Craig Ringer wrote:
> On 24 August 2016 at 19:42, roshan_myrepublic
> wrote:
>
> > Now, I am able to see the last added row in my subscriber table. All the
> > ot
* Venkata B Nagothi (nag1...@gmail.com) wrote:
> *Query 1*
>
> What about the existing parameter called "recovery_target" which accepts
> only one value "immediate", which will be similar to the "promote" option
> with the to-be-introduced new parameter.
> Since this parameter's behaviour will be
> On 20 Aug 2016, at 15:59, Craig Ringer wrote:
>
> I'll wait for a test case or some more detail.
Thanks for the clarification about how restart_lsn works.
Digging slightly deeper into this topic revealed that the problem was in two-phase
decoding, not in logical decoding itself.
While I was wri
The following review has been posted through the commitfest application:
make installcheck-world: tested, passed
Implements feature: tested, passed
Spec compliant: tested, passed
Documentation: tested, passed
I did some tests and found nothing special. The stated resour
OK. As I said, I just got confused about whether there was any way to disable it. Anyway,
the code is OK, does what it says, and is well formatted.
I will change it now!
So sorry for my confusion.
Thanks!
On Thursday, August 25, 2016, Corey Huinker <corey.huin...@gmail.com> wrote:
>
>
> On Wed, Aug 24, 2
On Thu, Aug 25, 2016 at 10:25 AM, Martín Marqués wrote:
> 2016-08-24 21:34 GMT-03:00 Michael Paquier :
>>
>> Yes, you are right. If I look at the diffs this morning I am seeing
>> the ACLs being dumped for this aggregate. So we could just fix the
>> test and be done with it. I did not imagine the
On Thu, Aug 25, 2016 at 3:41 PM, Artur Zakirov wrote:
>>> You are right. I assigned NODE_TYPE_SPACE to prev_type to be able to
>>> execute such a query:
>>>
>>>
>>> SELECT to_timestamp('---2000JUN', ' MON');
>>>
>>>
>>> Would that be proper behaviour?
>>
>>
>>
>> Looks good to me, no one will
You are right. I assigned NODE_TYPE_SPACE to prev_type to be able to
execute such a query:
SELECT to_timestamp('---2000JUN', ' MON');
Would that be proper behaviour?
Looks good to me; no one will complain if something works on PG but not on
Oracle.
Thanks. I've created the entry i
Hello, gurus,
I faced an incorrect checksum on "global/pg_filenode.map" right at the moment
"VACUUM FULL" was executed, and the session was aborted.
Aug 16 20:51:19 postgres[22329]: [2-1] FATAL: relation mapping file
"global/pg_filenode.map" contains incorrect checksum
Aug 16 20:51:19 postgres[22329]
On Thursday, August 25, 2016 1:56 PM, Artur Zakirov
wrote:
>> #2. Warning at compilation;
>>
>> formatting.c: In function ‘do_to_timestamp’:
>> formatting.c:3049:37: warning: ‘prev_type’ may be used uninitialized in this
>> function [-Wmaybe-uninitialized]
>> if (prev_type == NODE_TYPE_SPACE ||
Hi,
#1.
Whitespace @ line # 317.
Sorry, fixed.
#2. Warning at compilation;
formatting.c: In function ‘do_to_timestamp’:
formatting.c:3049:37: warning: ‘prev_type’ may be used uninitialized in this
function [-Wmaybe-uninitialized]
if (prev_type == NODE_TYPE_SPACE || prev_type == NODE_TYPE_S
On Thu, Aug 25, 2016 at 12:22 PM, Amit Langote <
langote_amit...@lab.ntt.co.jp> wrote:
> On 2016/08/22 13:51, Ashutosh Bapat wrote:
> > The parent-child relationship of multi-level partitioned tables is not
> > retained when creating the AppendRelInfo nodes. We create RelOptInfo
> nodes
> > for al