so there is a way
to make connections without relying on DNS.
regards, tom lane
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
uplicate that behavior, without success. What PG version is
that, exactly? Have you vacuumed and/or analyzed those two tables? What
do you get for
select * from pg_stats where tablename = 'wg3ppbm_userpartner';
and likewise for wg3ppbm_partner?
regards, tom lane
George writes:
> On Thu, Dec 1, 2016 at 6:58 PM, Tom Lane wrote:
>> What
>> do you get for
>> select * from pg_stats where tablename = 'wg3ppbm_userpartner';
>> and likewise for wg3ppbm_partner?
> It is a wide table. Do you want me to dump csv here?
Sho
plication is causing that and take steps to
mitigate it.
regards, tom lane
there's also backwards
compatibility to worry about.
regards, tom lane
ectly to tsquery rather than passing it through to_tsquery(),
though likely that would just have a different set of failure modes
with queries where you do wish stemming would occur.
The problem with "no" seems to be the same.
regards, tom lane
Hi,
I have two tables, t1 and t2, each with an indexed bigint id field and a
256-char data field; t1 has always got 1 row, while t2 grows as
explained in the following.
My libpq client continuously updates one row in t1 (every time targeting a
different row) and inserts a new row in
t it
just dies without saying anything.
Hmm ... a different take on that is that maybe psql is crashing
because it's linking to an ABI-incompatible libpq. You should try
"ldd" on the psql executable and see if it's resolving the libpq
dependency to the copy you intended.
semicolons --- does the
psql prompt change to match?
regards, tom lane
o
that the WHERE condition never succeeds. You could set log_statement=all
and look in the postmaster log to see what the server thinks it's getting.
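A minimal sketch of turning that on (assumes superuser access; ALTER SYSTEM needs 9.4 or later):

```sql
-- log every statement the server receives (requires superuser)
ALTER SYSTEM SET log_statement = 'all';
SELECT pg_reload_conf();
-- or, for only the current session:
SET log_statement = 'all';
```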
regards, tom lane
s behave normally and which don't. Of course, the other two
would have to be told --host=/tmp to talk to the handbuilt server.
regards, tom lane
sing arrays or multiple tables).
Yeah, this is a bug, but fortunately the fix is pretty trivial. See
https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=da05d0ebc637a84ba41a172b32552557ebad199f
regards, tom lane
n index matching one
but not the other. Right now we're very stupid and only consider x,y,
but if there were room to consider more than one set of target pathkeys
it would be fairly simple to make that better.
regards, tom lane
016-12-02 at 13:45 -0800, Adrian Klaver wrote:
> >
> > On 12/02/2016 09:40 AM, Tom DalPozzo wrote:
> > >
> > >
> > > Hi,
> > > I've two tables, t1 and t2, both with one bigint id indexed field
> > > and
> > > one 256 cha
at the moment what are the user-visible cases where this happens.)
You can find probably more than you want to know about deadlock handling
in src/backend/storage/lmgr/README.
regards, tom lane
works.
Ah! So, most likely, there is something wrong with the local installation
of "more", or whatever the environment variable PAGER is set to. If you
say "more somefile", does it behave reasonably? Check "echo $PAGER"
as well.
regards,
tivity).
Regards
2016-12-05 20:02 GMT+01:00 Jeff Janes :
> On Fri, Dec 2, 2016 at 9:40 AM, Tom DalPozzo wrote:
>
>> Hi,
>> I've two tables, t1 and t2, both with one bigint id indexed field and one
>> 256 char data field; t1 has always got 1 row, while t2 i
Hi,
I did: pkill -x postgres
so it should send SIGTERM.
Regards
Pupillo
at some remove from the popen call, but if it's not
unreasonably hairy we should do it.
regards, tom lane
Hi,
I've a table in which a field is BYTEA, as I need to store around 200 raw
bytes in this field.
I need to perform many, many INSERTs starting from a common C array and, in
order to get good performance, I want to do many of them in a single
BEGIN/COMMIT block.
What is the best choice from libpq?
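For illustration, the batching suggested in the replies can look like this (a sketch; table and column names are hypothetical):

```sql
BEGIN;
-- one multi-row INSERT per batch keeps round trips and
-- per-statement overhead low
INSERT INTO t (id, data) VALUES
  (1, '\x0011'::bytea),
  (2, '\x0022'::bytea),
  (3, '\x0033'::bytea);
COMMIT;
```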
Tom DalPozzo writes:
> I've a table in which a field is BYTEA, as I need to store around 200 raw
> bytes in this field.
> I need to perform many many INSERT starting from a common C array and, in
> order to get good performance, I want to do many of them in a single BEGIN
>
[ please keep the list cc'd ]
Tom DalPozzo writes:
> To be honest, I didn't know or I forgot about multiple VALUES in one
> command! Thanks for reminding!
> As for the PQexecParams, should I specify something in const Oid
> *paramTypes parameter? Or just something like $1:
that if PAGER is empty or all white space
then we should not try to use it as a shell command; we can either
treat the case as "pager off" or as "use default pager". Everything
else we can leave to the invoked shell to complain about.
Comments?
regards,
hat you need to write/read
it in chunks rather than all at once, the large-object APIs are what
you want.
regards, tom lane
's irrelevant here, because it
can easily be shown that psql doesn't behave nicely if PAGER is set to
empty when it does try to use the pager.
regards, tom lane
diff --git a/doc/src/sgml/ref/psql-ref.sgml b/doc/src/sgml/ref/psql-ref.sgml
index 261652a..9915731 100
ee
https://wiki.postgresql.org/wiki/Systemd
or the longer discussion at
https://www.postgresql.org/message-id/flat/57828C31.5060409%40gmail.com
or a couple of other discussions you can find by searching the PG mail
archives for "systemd semaphores".
regards
Hi,
I tried both ways: they're both OK.
Also, multiple VALUES in one INSERT actually performs better.
Thanks again
Pupillo
2016-12-06 19:49 GMT+01:00 Tom Lane :
> [ please keep the list cc'd ]
>
> Tom DalPozzo writes:
> > To be honest, I didn't know or I f
ffling
> behavior if I were using an xterm: with a blank PAGER your output
> would disappear only if the select exceeded a certain number of
> lines...
Yeah, that was exactly the behavior I was seeing before fixing it
(the fix is pushed btw).
regards, tom lane
OM API is preferred
or deprecated. It's probably just cosmetic anyway, so I'd say ignore it.
9.5 and up have a better design for this, in which the behavior is
determined on the fly rather than being hard-wired at build time.
regards, tom lane
y detailed help, but
I believe that's possible.
regards, tom lane
socket.
Also, you can use -h /path/to/socket/dir to specify connecting
using a socket file in a specific directory.
regards, tom lane
END LOOP;
> But Eg[i] is assigning null to array_value
I think you want
array_value = Eg[i][array_lower(Eg, 2):array_upper(Eg, 2)]
As of 9.6 you could use the shorthand
array_value = Eg[i][:]
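A self-contained sketch of the two forms (note that once any dimension is written as a slice, PostgreSQL treats all dimensions as slices):

```sql
-- explicit lower:upper slice of the second dimension
SELECT (ARRAY[[1,2],[3,4]])[1:1][1:2];
-- 9.6+ shorthand: omitted bounds mean the full range
SELECT (ARRAY[[1,2],[3,4]])[1:1][:];
```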
regards, tom lane
Hi,
I did two tests:
TEST 1
1 I created a table ("Table") with two fields, one ("Id") is a bigint and
the other ("Data") is a bytea. Also created an index on Id.
2 Populated the table with 1 rows, in which the bigint is incremental
and bytea is 1000 bytes long.
3 Executed SELECT COUNT(*) FROM
uspicious that this might be related to commit 2b3a8b20c,
which went into 9.3.6, but it's probably premature to blame that.)
regards, tom lane
Hi,
I've a table ('stato') with an indexed bigint ('Id') and 5 bytea fields
('d0','d1',...,'d4').
I populated the table with 1 rows; each d.. field initialized with 20
bytes.
Reported table size is 1.5MB. OK.
Now, for 1000 times, I update 2000 different rows each time, changing the d0
field keepi
's an index, but I'm reading
all the rows).
Regards
Pupillo
2016-12-09 17:16 GMT+01:00 Adrian Klaver :
> On 12/09/2016 08:03 AM, Tom DalPozzo wrote:
> > Hi,
> > I did two tests:
> > TEST 1
> > 1 I created a table ("Table") with two fields, one (&qu
's a total of 50 million updates per day, hence (50 million * 100
bytes * 2 fields updated) 10 GB net per day.
I'm afraid it's not possible, according to my results.
Regards
Pupillo
2016-12-10 13:38 GMT+01:00 Francisco Olarte :
> Hi Tom
>
> On Sat, Dec 10,
Hi,
I'd like to do that! But my DB must be crash proof! Very high reliability
is a must.
I also use sync replication.
Regards
Pupillo
2016-12-10 16:04 GMT+01:00 Rob Sargent :
>
> > On Dec 10, 2016, at 6:25 AM, Tom DalPozzo wrote:
> >
> > Hi,
> > you're ri
2016-12-10 16:36 GMT+01:00 Rob Sargent :
>
> > On Dec 10, 2016, at 7:27 AM, Tom DalPozzo wrote:
> >
> > Hi,
> > I'd like to do that! But my DB must be crash proof! Very high
> reliability is a must.
> > I also use sync replication.
> > Regards
2016-12-10 15:41 GMT+01:00 Adrian Klaver :
> On 12/10/2016 04:21 AM, Tom DalPozzo wrote:
>
>> Hi,
>> my release is 9.5.4.
>> I took a look at it. I guessed that counting could be slow because it
>> needs to read everything and also that it can take advantage
2016-12-10 18:10 GMT+01:00 Rob Sargent :
>
> > On Dec 10, 2016, at 10:01 AM, Tom DalPozzo wrote:
> >
> > 2016-12-10 16:36 GMT+01:00 Rob Sargent :
> >
> > > On Dec 10, 2016, at 7:27 AM, Tom DalPozzo
> wrote:
> > >
> > > Hi,
> >
2016-12-10 18:33 GMT+01:00 Francisco Olarte :
> Tom:
>
> On Sat, Dec 10, 2016 at 6:01 PM, Tom DalPozzo
> wrote:
> > As for crash proof, I meant that once my client app is told that her
> update
> > request was committed, it mustn't get lost (hdd failure apart of
rs ) to properly read your messages.
>
> If you want to discourage people replying to you, keep doing the two above.
>
> On Sat, Dec 10, 2016 at 3:25 PM, Tom DalPozzo
> wrote:
> > you're right, VACUUM FULL recovered the space, completely.
>
> Well, it always does. ;
osts if none of the out-of-line values change.
I don't remember offhand what corner cases might exist to prompt the
weasel wording "normally". Maybe that just reflects the possibility
that one of the newly updated values would need toasting.
regards, tom
t)) select
> max(x) from (select x from i union all select y from j) b;
> ERROR: could not find plan for CTE "i"
Yup, sure looks like a bug to me, especially since it seems to work as
expected before 9.5. No idea offhand what broke it.
regards, tom lane
-bash-4.1$ pg_dump -d postgres -t '"Statuses"'
> pg_dump: no matching tables were found
> -bash-4.1$ pg_dump -d postgres -t 'public."Statuses"'
> pg_dump: no matching tables were found
These cases work for me. Maybe your shell
nks for the report!
regards, tom lane
Thomas Kellerer writes:
> Tom Lane schrieb am 13.12.2016 um 18:02:
>> These cases work for me. Maybe your shell is doing something weird
>> with the quotes?
> Hmm, that's the default bash from CentOS 6 (don't know the exact version)
I'm using bash from curren
pg_typeof(1::uint4);
:: binds tighter than minus, so you would need to write these like
"(-1)::uint4" to get the behavior you're expecting. See
https://www.postgresql.org/docs/9.5/static/sql-syntax-lexical.html#SQL-PRECEDENCE
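To make the precedence concrete (uint4 here is the extension type from the question, not a built-in):

```sql
SELECT -1::uint4;    -- parsed as -(1::uint4): the cast binds first, then negation
SELECT (-1)::uint4;  -- negation first, then the cast, as intended
```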
regards, tom lane
Hi,
it's not clear to me when tuple data (max 1000 bytes total) gets compressed
on disk and when it does not.
I tried pg_column_size to find the tuple's size, but I get ambiguous
results.
It seems to work, but sometimes the tuples seem compressed and sometimes not.
I tried both with constant data and rand
It might shed some light if you put "echo" in front of that
to see what gets printed:
$ echo pg_dump -d postgres -t "\"Statuses\""
pg_dump -d postgres -t "Statuses"
regards, tom lane
2016-12-15 16:23 GMT+01:00 Adrian Klaver :
> On 12/15/2016 07:17 AM, Tom DalPozzo wrote:
Hi,
>
> it's not clear to me when tuple data (max 1000 bytes total) get
>
> compressed on disk and when not.
>
> I tried with pg_column_size to find the tuple's size
>
> https://www.postgresql.org/docs/9.5/static/storage-toast.html
>>
>
> "The TOAST management code is triggered only when a row value to be stored
> in a table is wider than TOAST_TUPLE_THRESHOLD bytes (normally 2 kB). The
> TOAST code will compress and/or move field values out-of-line until the r
Thomas Kellerer writes:
> Tom Lane schrieb am 15.12.2016 um 16:20:
>> Hmm. It might shed some light if you put "echo" in front of that
>> to see what gets printed:
>> $ echo pg_dump -d postgres -t "\"Statuses\""
>> pg_dump -d postgres -
that it will issue a checksum complaint
not a version-number complaint if started against a 9.4.x pg_control.
regards, tom lane
trictions.
That doesn't seem to be your problem, at least not yet.
regards, tom lane
>
> I see. But in my case rows don't reach that threshold (I didn't check if
>> 2K but I didn't change anything). So I'm wondering if there is any other
>> chance except the TOAST to get the rows compressed or not.
>>
>
> Are you really sure you want that? For small files the overhead of
> compressi
ou could use different pg_hba.conf files on master and slave. Or there's
always packet filtering...
regards, tom lane
> Two questions:
>
> 1) Do you know what your data in the future is going to be?
>
> 2) Is a 100 byte bytea a realistic approximation of that data?
>
> wal_compression=off.
>>
>> --
> Adrian Klaver
> adrian.kla...@aklaver.com
Hi,
1) not precisely, but 10/100 million insertions per day. I can't k
nt library makes those accessible.
(In the spirit of full disclosure, I should point out that those
fields have only been provided since PG 9.3. So older installations
may not have the ability to do this in the Right Way. But still,
you should be evangelizing for doing it the Right Way, no?)
NG *;
Another option is COALESCE:
...
DO UPDATE SET campaigns = EXCLUDED.campaigns,
facts = COALESCE(fan.facts, '{}'::jsonb) || EXCLUDED.facts
...
I'd argue though that if you think this is okay, then you're abusing
NULL; that's supposed to mean "unknown", no
gres parser doesn't have any special knowledge about
the meaning of the -> and || operators, it gives them the same precedence,
causing what you wrote to be parsed as
((extra_values->'nested1') || extra_values)->'nested2'
giving the result you show. The COALES
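Explicit parentheses avoid the surprise (a sketch; my_table stands in for wherever the jsonb column lives):

```sql
-- force each -> to bind before the || concatenation
SELECT (extra_values -> 'nested1') || (extra_values -> 'nested2')
FROM my_table;
```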
two independent columns to select the rows you care about.
That would improve the odds of getting a decent rowcount estimate.
regards, tom lane
h only this much info.
regards, tom lane
Hi,
I was doing some tests with backup, replication, and standby servers. After
promoting a standby server, I found my db in a condition that raises an
error when I try to update a particular tuple.
Below you can see my UPDATE statement and the error raised.
The select * from stato where id=409; execut
Hi,
> First I would find what base/16384/29153 actually is. So in the database
> where stato is:
>
> select relname from pg_class where relfilenode = 29153;
>
Below is the query you suggested; it shows that the file belongs to the
stato table, as expected.
ginopino=# select relname from pg_class where r
u8=# select *, length(string) from unicode;
 id | string | length
----+--------+--------
  1 |        |      0
(1 row)
I'd have expected a syntax error along the line of "incomplete Unicode
surrogate pair". Peter, I think this was your code to begin with ---
was it inte
connect
anywhere, just emit the restore script on stdout. At least, that's
what happens for me. It's weird that you don't see any printout.
(To be clear: it's -d that triggers a connection attempt in pg_restore.
Without that, -h and -p are just noise.)
r
's operator precedence rules were set long before the JSON types ever
existed. Even if we wanted to treat -> specially, we couldn't for fear
of breaking existing queries that used custom operators named that.
regards, tom lane
>
> Is there an index on this table?
>>
>
> Have you tried a REINDEX on it?
>
> yes there is an index on id field. I tried REINDEX. Nothing changes but I
notice now (but perhaps it was like that even before reindexing) that every
time I issue that UPDATE query, the number of the block it can't read
re certainly missing the pg_depend
linkages, not to mention attribute inheritance counts in pg_attribute,
and there may be other things I'm not remembering offhand.
Why can't you use the normal ALTER TABLE approach?
regards, tom lane
James Zhou writes:
> The format U&'\03B1' only works for chars between -
Kyotaro-san already pointed you to the right answer on that:
you have to use "\+nn" for six-digit code points in the U&
string syntax.
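Concretely, the two escape widths look like this (a sketch of the syntax only):

```sql
SELECT U&'\03B1';    -- four hex digits, for code points up to U+FFFF
SELECT U&'\+01F600'; -- "\+" plus six hex digits, for higher code points
```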
regards, tom la
>
>
>>
> If it where me I would use one of the -X methods:
>
> https://www.postgresql.org/docs/9.5/static/app-pgbasebackup.html
>
>
>
>> To me that looks like an issue with the associated TOAST table. I do not
> have a suggestion at this time. Maybe this rings a bell with someone else.
>
> --
> Adr
2016-12-22 10:23 GMT+01:00 Brian Sutherland :
> Perhaps try 9.5.5 which has a fix for a problem with the same symptoms:
>
> https://wiki.postgresql.org/wiki/Free_Space_Map_Problems
> https://www.postgresql.org/docs/9.5/static/release-9-5-5.
> html#AEN126074
>
> Yes it was that!
I tried the
m-wide consistency standpoint, it's rather unfortunate that
inet's default conversion to text type does not behave the same as the
inet output function. But it's been like that for umpteen years and
the costs of breaking backwards compatibility would outweigh any benefit
of changing
isn't really the same thing.
Maybe, but we're not going to invent a whole new datatype just to
represent that. Perhaps you would be happier using the masklen()
function.
regards, tom lane
Weather_Params' pkey, so it's enough to identify a
unique row of Weather_Params.
regards, tom lane
> I would say because the '300 rows in sample' were spread out over
> all 2133350 pages.
Worth pointing out here is that you must have a custom statistics target
set on log_details to make it want a sample so much larger than the
default. If you feel ANALYZE is taking too lo
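The per-column knob being referred to can be lowered like this (column name hypothetical; -1 restores the default):

```sql
ALTER TABLE log_details ALTER COLUMN some_col SET STATISTICS 100;
ANALYZE log_details;
```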
an INSERT command.
regards, tom lane
at the benefit of a typed table is and when this would be useful?
AFAIK we implemented that only because it's in the SQL standard.
Otherwise you might as well use, for instance, CREATE TABLE ... LIKE.
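For comparison, the LIKE form mentioned above (names hypothetical):

```sql
CREATE TABLE new_t (LIKE existing_t INCLUDING ALL);
```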
regards, tom lane
them into arrays, for example.
JSON might be a useful option, too.
regards, tom lane
uld
start by seeing if you can identify the corrupt row(s).
regards, tom lane
but
that's not terribly helpful information for you.
regards, tom lane
Hi,
I've got my primary and I run pg_basebackup -x in order to create a
standby.
I can connect my standby only later, in a few hours, so I'd like the master
to keep the new WALs, but I'd rather not use archiving or the keep-segments
option. I thought to do it through a physical replication slot (my stan
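A sketch of that slot-based setup (slot name hypothetical; run on the master):

```sql
SELECT pg_create_physical_replication_slot('standby_slot');
-- the standby then sets primary_slot_name = 'standby_slot'
-- (in recovery.conf on 9.5/9.6-era servers)
```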
2017-01-04 18:24 GMT+01:00 Adrian Klaver :
> On 01/04/2017 08:44 AM, Tom DalPozzo wrote:
>
>> Hi,
>>
>
> Postgres version?
>
> Because in 9.6:
>
> https://www.postgresql.org/docs/9.6/static/functions-admin.h
> tml#FUNCTIONS-REPLICATION
Hi,
there is something happening in my replication that is not clear to me; I
think I'm missing something.
I have two servers, red and blue.
red is the primary, blue is the standby, async replication.
Now:
1 cleanly stop red
2 promote blue
3 insert tuples in blue
4 from red site, pg_rewind from blue to red dir.
5 sta
Job writes:
> Could you please help me?
There's advice here on how to ask this type of question with enough
detail to get answers:
https://wiki.postgresql.org/wiki/Slow_Query_Questions
regards, tom lane
>> is not helpful!
This should be a can't-happen failure ... it's not very clear to me how we
could produce a better message for it.
regards, tom lane
2017-01-06 13:09 GMT+01:00 Michael Paquier :
> On Fri, Jan 6, 2017 at 1:01 AM, Tom DalPozzo wrote:
> > Hi,
> > there is something happening in my replication that is not clear to me. I
> > think I'm missing something.
> > I've two server, red and blue.
>
should get the same plan even when join_collapse_limit is
small.
regards, tom lane
Andreas Joseph Krogh writes:
> På lørdag 07. januar 2017 kl. 17:48:49, skrev Tom Lane <mailto:t...@sss.pgh.pa.us>>:
> If you've got just one problem query, it might be worth your time to take
> note of the optimal join order (as seen in EXPLAIN when a good plan is
>
i.postgresql.org/wiki/Slow_Query_Questions
it's hard to provide any concrete answers.
regards, tom lane
e wasting
your time as well as the time of people who are trying to help you
by ignoring that advice about the information we need.
regards, tom lane
Hi, I need some clarification about checkpoints.
Below is a log from my standby server at startup, followed by some parts of
the relevant WAL from the master's cluster, obtained with pg_xlogdump.
Just to have an example to talk about.
1) I see: "LOG: redo starts at 1/F00A7448". I was expecting a che
>
> https://www.postgresql.org/docs/9.5/static/wal-internals.html
>>
>
> "After a checkpoint has been made and the log flushed, the checkpoint's
> position is saved in the file pg_control. Therefore, at the start of
> recovery, the server first reads pg_control and then the checkpoint record;
> the
>
> Reread your original post and realized you were also asking about
>> transaction consistency and WALs. The thumbnail version is that Postgres
> writes transactions to the WALs before they are written to the data files
> on disk. A checkpoint represents a point in the sequence when it is know
>
> Whether any individual tuple in the data files is visible or not depends
> not only on the data itself, but also on the commit status of the
> transactions that created it (and deleted it, if any). Replaying WAL
> also updates the commit status of transactions, so if you're in the
> middle of
>
> > Hi,
> > so let's suppose that the WAL is:
> > LSN 10: start transaction 123
> > LSN 11: update tuple 100
> >checkpoint position here (not a record but just for understanding)
> > LSN 12: update tuple 100
> > LSN 13: update tuple 100
> > LSN 14: checkpoint record (position=11)
> > LSN 15:
>
> Hi, I had already read that doc but I can't clearly answer my
>> questions 2, 4 and 5.
>>
>
> The answer would seem to depend on what you consider 'a consistency state
> position'. Is it possible to be more explicit about what you mean?
>
>>
>> Hi, I meant a position such that, if you replay
>
> Could you give more details? What does pg_rewind tell you at each
>> phase? Is that on Postgres 9.5 or 9.6? I use pg_rewind quite
>> extensively on 9.5 but I have no problems of this time with multiple
>> timeline jumps when juggling between two nodes. Another thing that is
>> coming to my mind