Perhaps we are trying to squeeze too much into "pg_last_xact_replay_timestamp()"
and a new function "pg_replication_timestamp()" is needed that would accurately
tell me one simple piece of information: the time it was when the master db
server was in the exact same state as this replica.
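(If such a function existed, checking how stale a replica is would collapse to
one query. Illustration only: pg_replication_timestamp() is the name proposed
above and does not exist in PostgreSQL.)

  slave# psql -hlocalhost my_db -c "select now() - pg_replication_timestamp() as replication_lag;"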
On Wednesday 08 December 2010 21:58:46 you wrote:
> On Thu, Dec 9, 2010 at 1:37 AM, Gabi Julien wrote:
> > slave# /etc/init.d/postgresql start
> > slave# psql -hlocalhost my_db -c "select pg_last_xact_replay_timestamp(),
> > now() as not_modified_since;"
>
December 2010 11:37:51 Gabi Julien wrote:
> On Tuesday 07 December 2010 21:58:56 you wrote:
> > On Wed, Dec 8, 2010 at 1:31 AM, Gabi Julien wrote:
> > > pg_last_xact_replay_timestamp() returns null when the server is restarted
> > until a new transaction is streamed to the hot standby server.
On Tuesday 07 December 2010 21:58:56 you wrote:
> On Wed, Dec 8, 2010 at 1:31 AM, Gabi Julien wrote:
> > pg_last_xact_replay_timestamp() returns null when the server is restarted
> > until a new transaction is streamed to the hot standby server. It might
> > take a long time
takes the value of
pg_last_xact_replay_timestamp() and saves it on disk. If the value is null (the
server was restarted), we then read and return the last value stored on disk
instead. Is there any better way? Also, are there any plans to make
pg_last_xact_replay_timestamp() reliable even after a restart?
Thank you,
Gabi
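(A minimal sketch of the workaround described above, as a script run from cron
on the standby. Untested; the cache path, database name, and connection
options are placeholders.)

  #!/bin/sh
  # Remember the newest replay timestamp on disk so that a value
  # survives a restart of the standby.
  CACHE=/var/lib/postgresql/last_replay_timestamp

  ts=$(psql -hlocalhost my_db -Atc "select pg_last_xact_replay_timestamp();")
  if [ -n "$ts" ]; then
      # Replay has advanced since the restart: refresh the cached value.
      echo "$ts" > "$CACHE"
  elif [ -f "$CACHE" ]; then
      # Null right after a restart: fall back to the last saved value.
      ts=$(cat "$CACHE")
  fi
  echo "$ts"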
Thanks to all of you. This was very good feedback. I'll use Tom Lane's
one-cache-per-process suggestion. It will be the easiest to implement.
On Thursday 21 October 2010 11:14:40 A.M. wrote:
>
> On Oct 20, 2010, at 7:44 PM, Gabi Julien wrote:
>
> > Hi,
> >
postgresql processes. This would be a waste of space but it might be
better than nothing. In this case, do I need to make my code thread safe? In
other words, is postgresql using more than one thread per process?
Any insights would be more than welcome!
Thank you,
Gabi Julien
On Thursday 29 January 2009 02:43:18 you wrote:
> On Tue, 2009-01-27 at 12:53 -0500, Gabi Julien wrote:
> > I have merged the last hot standby patch (v9g) to 8.4 devel and I am
> > pleased with the experience. This is promising stuff.
>
> Thanks,
>
> > Perhaps it is a bit too soon to ask questions here but here it is:
On Wednesday 28 January 2009 18:35:18 Gabi Julien wrote:
> On Tuesday 27 January 2009 21:47:36 you wrote:
> > Hi,
> >
> > On Wed, Jan 28, 2009 at 4:28 AM, Gabi Julien wrote:
> > > Yes, the logs are shipped every minute but the recovery is 3 or 4 times
> > > longer.
>
On Tuesday 27 January 2009 21:47:36 you wrote:
> Hi,
>
> On Wed, Jan 28, 2009 at 4:28 AM, Gabi Julien wrote:
> > Yes, the logs are shipped every minute but the recovery is 3 or 4 times
> > longer.
>
> Are you disabling full_page_writes? It may slow down recovery severely.
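(As a quick check, both settings can be inspected with SHOW on the master; the
database name here is a placeholder.)

  master# psql my_db -c "show full_page_writes;"
  master# psql my_db -c "show archive_timeout;"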
On Tuesday 27 January 2009 16:25:44 you wrote:
> On Tue, 2009-01-27 at 14:28 -0500, Gabi Julien wrote:
> > Could this help? If the logs are smaller, then I could potentially afford
> > shipping them at a higher frequency.
>
> See if there are times during which the recovery
On Tuesday 27 January 2009 13:13:32 you wrote:
> On Tue, 2009-01-27 at 12:53 -0500, Gabi Julien wrote:
> > I have merged the last hot standby patch (v9g) to 8.4 devel and I am
> > pleased with the experience. This is promising stuff. Perhaps it is a bit
> > too soon to ask questions here but here it is:
1. Speed of recovery
With an archive_timeout of 60 seconds, it can take about 4 minutes before I see
the reflected changes in the replica. This is normal since, in addition to
the WAL log shipping, it takes more time to do the recovery itself. Still, is
there any way besides the archive_timeout
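(One way to put a number on that delay: a heartbeat row, updated on the master
from cron and read on the replica. A rough sketch; the table name and the
one-minute schedule are made up.)

  master# psql my_db -c "create table heartbeat (ts timestamptz);"
  master# psql my_db -c "insert into heartbeat values (now());"
  # every minute from cron on the master:
  master# psql my_db -c "update heartbeat set ts = now();"
  # on the replica, the age of the row bounds the replication delay:
  slave# psql -hlocalhost my_db -c "select now() - ts as replication_lag from heartbeat;"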
I have merged the last hot standby patch (v9g) to 8.4 devel and I am pleased
with the experience. This is promising stuff. Perhaps it is a bit too soon to
ask questions here but here it is: