On Fri, Oct 1, 2010 at 11:13, Fujii Masao wrote:
> On Wed, Sep 29, 2010 at 5:47 PM, Magnus Hagander wrote:
>> It's actually intentional. If we create a file at first, there is no
>> way to figure out exactly how far through a partial segment we are
>> without parsing the details of the log. This
On Wed, Sep 29, 2010 at 5:47 PM, Magnus Hagander wrote:
> It's actually intentional. If we create a file at first, there is no
> way to figure out exactly how far through a partial segment we are
> without parsing the details of the log. This is useful both for the
> admin (who can look at the dir
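Since the in-progress segment is not pre-allocated to its full 16MB, its current file size directly shows how many bytes of the segment have been received, with no WAL parsing needed. A minimal illustration, assuming the "inprogress" subdirectory mentioned later in the thread (the exact path is illustrative):

    # size of the partial file = bytes streamed into the current segment so far
    ls -l /path/to/archive/inprogress/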
On Thu, Sep 30, 2010 at 17:25, Kevin Grittner wrote:
> Aidan Van Dyk wrote:
>
>> When the "being written to" segment completes and moves to the final
>> location, he'll get an extra whole "copy" of the file. But if the
>> "move" can be an exec of his script, the compressed/gzipped final
>> result sh
Aidan Van Dyk wrote:
> When the "being written to" segment completes and moves to the final
> location, he'll get an extra whole "copy" of the file. But if the
> "move" can be an exec of his script, the compressed/gzipped final
> result shouldn't be that bad. Certainly no worse than what he's
> cu
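A rough sketch of such an exec-on-completion script, assuming it is handed the path of the just-completed segment as its only argument (the script name and destination directory are illustrative, not part of pg_streamrecv):

    #!/bin/sh
    # wal-finished.sh <completed-segment>  -- hypothetical completion hook
    set -e
    seg="$1"
    gzip -c "$seg" > /backup/wal/"$(basename "$seg")".gz
    rm "$seg"    # keep only the compressed copy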
Magnus Hagander wrote:
> We'd need a second script/command to call to figure out where to
> restart from in that case, no?
I see your point; I guess we would need that.
> It should be safe to just rsync the archive directory as it's
> being written by pg_streamrecv. Doesn't that give you the
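For the rsync approach, something along these lines should pick up both the completed segments and the growing in-progress file on each run (paths are illustrative):

    # re-runnable; copies finished segments plus the partial one
    rsync -a /var/lib/pgsql/streamrecv-archive/ backuphost:/backup/wal/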
On Thu, Sep 30, 2010 at 16:39, Aidan Van Dyk wrote:
> On Thu, Sep 30, 2010 at 10:24 AM, Magnus Hagander wrote:
>
>>> That would allow some nice options. I've been thinking what would
>>> be the ideal use of this with our backup scheme, and the best I've
>>> thought up would be that each WAL segm
On Thu, Sep 30, 2010 at 10:24 AM, Magnus Hagander wrote:
>> That would allow some nice options. I've been thinking what would
>> be the ideal use of this with our backup scheme, and the best I've
>> thought up would be that each WAL segment file would be a single
>> output stream, with the optio
On Thu, Sep 30, 2010 at 15:45, Kevin Grittner wrote:
> Magnus Hagander wrote:
>
>>> If you could keep the development "friendly" to such features, I
>>> may get around to adding them to support our needs
>>
>> Would it be enough to have kind of an "archive_command" switch
>> that says "whenev
Magnus Hagander wrote:
>> If you could keep the development "friendly" to such features, I
>> may get around to adding them to support our needs
>
> Would it be enough to have kind of an "archive_command" switch
> that says "whenever you've finished a complete wal segment, run
> this comman
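Purely to illustrate the proposed switch (the option name below is hypothetical, not a flag of the posted pg_streamrecv patch), the idea mirrors archive_command but runs on the receiving side, once per completed segment:

    # --segment-command is a hypothetical option shown only to illustrate the idea;
    # %p would expand to the finished segment's path, as in archive_command
    pg_streamrecv ... --segment-command='/usr/local/bin/wal-finished.sh %p'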
On Wed, Sep 29, 2010 at 23:45, Kevin Grittner wrote:
> Magnus Hagander wrote:
>
>> Comments and contributions are most welcome.
>
> This is probably too esoteric to be worked on yet, but for this to
> be useful for us we would need to pass the resulting files through
> pg_clearxlogtail and gzip i
Magnus Hagander wrote:
> Comments and contributions are most welcome.
This is probably too esoteric to be worked on yet, but for this to
be useful for us we would need to pass the resulting files through
pg_clearxlogtail and gzip in an automated fashion. And we would
need to do regular log fi
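Per finished segment, that pipeline would look roughly like this, assuming pg_clearxlogtail is used as a stdin-to-stdout filter the way the usual archive_command recipes use it (paths are illustrative):

    # zero the unused tail of the segment, then compress it
    pg_clearxlogtail < "$seg" | gzip > /backup/wal/"$(basename "$seg")".gz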
On Wed, Sep 29, 2010 at 05:40, Fujii Masao wrote:
> On Tue, Sep 28, 2010 at 5:23 PM, Magnus Hagander wrote:
>>> When I ran that, the size of the WAL file in the inprogress directory
>>> became more than 16MB. Obviously something isn't right.
>>
>> Wow, that's weird. I'm unable to reproduce that here
On Tue, Sep 28, 2010 at 5:23 PM, Magnus Hagander wrote:
>> When I ran that, the size of the WAL file in the inprogress directory
>> became more than 16MB. Obviously something isn't right.
>
> Wow, that's weird. I'm unable to reproduce that here - can you try to
> figure out why that happened?
Sorry,
On Tue, Sep 28, 2010 at 06:25, Fujii Masao wrote:
> On Mon, Sep 27, 2010 at 9:07 PM, Magnus Hagander wrote:
>> As has been previously mentioned a couple of times, it should be
>> perfectly possible to use streaming replication to get around the
>> limitations of archive_command/archive_timeout to
On Mon, Sep 27, 2010 at 9:07 PM, Magnus Hagander wrote:
> As has been previously mentioned a couple of times, it should be
> perfectly possible to use streaming replication to get around the
> limitations of archive_command/archive_timeout to do log archiving for
> PITR (being that you either keep
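For context, the setup being worked around is the classic file-based archiving configuration, where archive_timeout forces a segment switch (and hence an archived file, often mostly zeroes) at a fixed interval to bound potential data loss:

    # postgresql.conf -- traditional log shipping (values illustrative)
    archive_mode    = on
    archive_command = 'cp %p /backup/wal/%f'
    archive_timeout = 60    # force a segment switch after at most 60 seconds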