Adrien, thanks very much for answering my question. Just a couple of follow-up points, if you don't mind.
In your answer, you offer an example of pg_xact_commit_timestamp showing out-of-sequence commit times:

    Session  xid       pg_xact_commit_timestamp
    A        34386826  2019-07-11 09:32:38.994440+00   (started earlier, committed later)
    B        34386827  2019-07-11 09:32:29.806183+00

I may not have asked my question clearly, or I may not understand the answer properly. Or both ;-)

If I understand it correctly, an xid is assigned when a transaction starts. One transaction might take a second, another might take ten minutes, so the xid sequence doesn't imply anything at all about commit sequence. What I'm trying to figure out is whether it is possible for the commit timestamps themselves to be out of order.

What I'm looking for is a way of finding changes committed since a specific moment; when each transaction started doesn't matter in my case. (I've sketched the kind of query I mean at the end of this message.) Is pg_xact_commit_timestamp suitable for this? I'm getting the impression that it isn't, but I don't understand quite how. And if it isn't suited to this purpose, does anyone know what pg_xact_commit_timestamp is for? What I'm after is something like an "xcommitserial" that increases reliably and monotonically on transaction commit. That's how I'm hoping pg_xact_commit_timestamp behaves.

Thanks also for making me understand that pg_xact_commit_timestamp applies to a *transaction*, not to each row. That makes it a lot lighter in the database. I was thinking 12+ bytes per row, which is completely off for my case. (I tend to insert thousands of rows in a transaction.)

> Yes, timestamps are stored in the pg_commit_ts directory. Old timestamps are removed after freeze, as explained in
> https://www.postgresql.org/docs/current/routine-vacuuming.html

Thanks for the answer, and for kindly pointing me to the right section of the documentation. It's easy to get impatient with new(er) users. I'm _not_ lazy about reading manuals and researching but, well, the Postgres documentation is over 3,000 pages long when you download it, so I may have missed a detail or two....

If I read that correctly, the ~4 billion xid range is made into an endless circle by keeping ~2 billion numbers in the past and ~2 billion in the future. If that's right, I'm never going to be so out of date that the ~2 billion number window is too small.
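As a sanity check on that last point, here's the sort of thing I understand the docs to be describing: a sketch using age() on pg_class.relfrozenxid to see how far in the past each table's oldest unfrozen xid sits. (Autovacuum should freeze rows long before this gets anywhere near ~2 billion.)

    -- Sketch: distance, in xids, between each table's oldest unfrozen
    -- xid and the current transaction.
    SELECT c.relname,
           age(c.relfrozenxid) AS xid_age
    FROM   pg_class AS c
    WHERE  c.relkind = 'r'
    ORDER  BY xid_age DESC
    LIMIT  5;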
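And, coming back to my main question, here's a minimal sketch of the "changes committed since a moment" query I have in mind. It assumes track_commit_timestamp = on; my_table and the cutoff timestamp are made-up examples:

    -- Sketch: rows whose inserting transaction committed after a given
    -- moment. xmin is the system column holding the inserting
    -- transaction's xid; my_table and the cutoff are hypothetical.
    SELECT t.*,
           pg_xact_commit_timestamp(t.xmin) AS committed_at
    FROM   my_table AS t
    WHERE  pg_xact_commit_timestamp(t.xmin) > '2019-07-11 09:32:00+00';

If commit timestamps can tie or run backwards across sessions, a query like this could miss rows near the cutoff, which is exactly the behavior I'm trying to pin down.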