On Thu, Feb 23, 2006 at 02:01:47PM +0100, Kern Sibbald wrote:
> If you are still interested in this subject, I would make the following
> comments:
I'm still interested, this is actually pretty high on my list of things
to try.
>
> - I don't know what the COPY command is, but I doubt it is some
Quoting Karl Hakimian <[EMAIL PROTECTED]>:
On Fri, Feb 10, 2006 at 07:42:45PM +0100, Magnus Hagander wrote:
> Yes, it would be possible to commit the filename/path inserts
> immediately (i.e. 1 insert per transaction) but still do the
> file inserts within a larger transaction.

Not really, if you want/need to refer to them fro
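Karl's objection is that the bulk file inserts need to refer back to the filename/path rows, so those rows (and their ids) must exist and be visible first. A minimal sketch of that dependency, with sqlite3 standing in for PostgreSQL and a simplified, assumed schema (not Bacula's actual catalog tables):

```python
# Sketch (assumed schema, not Bacula's real catalog): file rows carry a
# foreign key to filename rows, so the filename id must be obtained
# before the file insert can be issued.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE filename (id INTEGER PRIMARY KEY, name TEXT UNIQUE);
    CREATE TABLE file (id INTEGER PRIMARY KEY,
                       filename_id INTEGER REFERENCES filename(id));
""")

def filename_id(conn, name):
    """Return the id for name, inserting the row if it is new."""
    row = conn.execute("SELECT id FROM filename WHERE name = ?",
                       (name,)).fetchone()
    if row:
        return row[0]
    return conn.execute("INSERT INTO filename (name) VALUES (?)",
                        (name,)).lastrowid

# The file insert cannot run until the filename row exists and its id
# is known -- this is the "refer to them" dependency.
fid = filename_id(db, "vmlinuz")
db.execute("INSERT INTO file (filename_id) VALUES (?)", (fid,))
db.commit()
```

A repeated lookup of the same name returns the same id, which is why the lookup rows have to be committed (or at least visible) before the dependent inserts run.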
On 10 Feb 2006 at 19:45, Magnus Hagander wrote:
>
> > > Cutting my postgres update time to minutes from hours would
> > > certainly make my backups run far smoother.
> >
> > I think transactions are more important here. We need to
> > look more closely at that.
>
> Considering Bacula runs a *lot* of commands that are almost the same,
> differing only in data, it would probably be a noticeable gain using
> prepared statements (that probably
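The prepared-statement idea Magnus describes — parse and plan the statement once, then re-execute it with different data — can be sketched as follows. This uses Python's sqlite3 for a runnable illustration; with PostgreSQL the equivalent mechanism would be PREPARE/EXECUTE or the libpq prepared-statement API:

```python
# Sketch of the prepared-statement idea: one INSERT statement compiled
# once and executed many times with only the bound parameters changing.
# sqlite3 stands in for PostgreSQL here; table names are illustrative.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE file (jobid INTEGER, name TEXT)")

rows = [(1, "etc/passwd"), (1, "etc/group"), (1, "bin/sh")]
# One statement, many executions: only the data differs per row.
db.executemany("INSERT INTO file (jobid, name) VALUES (?, ?)", rows)
db.commit()
```

The gain comes from skipping the per-statement parse/plan work when thousands of near-identical inserts are issued per job.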
> > Using two transactions, one for the vital components, the other for
> > non-vital portions.
> >
> > Or do we need to revisit how these tables are updated?
>
> Yes, it would be possible to commit the filename/path inserts
> immediately (i.e. 1 insert per transaction) but still do the
> file inserts within a larger transaction.
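The split Magnus suggests — lookup rows committed one per transaction so they are immediately visible, bulk file rows batched in one larger transaction — looks roughly like this. A hedged sketch with sqlite3 standing in for PostgreSQL and simplified table names:

```python
# Sketch of the two-transaction split: filename/path inserts committed
# individually, file inserts wrapped in a single large transaction.
import sqlite3

db = sqlite3.connect(":memory:")
db.isolation_level = None  # manage transactions explicitly
db.execute("CREATE TABLE path (name TEXT)")
db.execute("CREATE TABLE file (name TEXT)")

# Vital lookup rows: one insert per transaction, visible right away.
for p in ("/etc", "/usr/bin"):
    db.execute("BEGIN")
    db.execute("INSERT INTO path (name) VALUES (?)", (p,))
    db.execute("COMMIT")

# Bulk file rows: one transaction around the whole batch, so the
# per-commit overhead is paid once instead of once per row.
db.execute("BEGIN")
for f in ("passwd", "group", "sh"):
    db.execute("INSERT INTO file (name) VALUES (?)", (f,))
db.execute("COMMIT")
```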
On Fri, Feb 10, 2006 at 10:12:10AM -0500, Dan Langille wrote:
> If you have concurrent jobs, what happens when job #1 wants to add a
> file to the table, and job #2 wants to add the same file? How do you
> ensure that both transactions work without failure?
That looks like the biggest possible
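One standard answer to Dan's race — two jobs inserting the same filename at once — is a UNIQUE constraint plus a fall-back SELECT for the loser. (Modern PostgreSQL offers INSERT ... ON CONFLICT for this; in 2006 the catch-and-reselect pattern below was the usual workaround.) A sketch with sqlite3 standing in for PostgreSQL:

```python
# Sketch of duplicate-safe insertion: the UNIQUE constraint arbitrates
# the race, and whichever job loses falls back to reading the winner's
# row.  Schema is illustrative, not Bacula's actual catalog.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE filename (id INTEGER PRIMARY KEY, name TEXT UNIQUE)")

def add_filename(conn, name):
    """Insert name, or fetch the existing id if another job beat us to it."""
    try:
        return conn.execute("INSERT INTO filename (name) VALUES (?)",
                            (name,)).lastrowid
    except sqlite3.IntegrityError:
        return conn.execute("SELECT id FROM filename WHERE name = ?",
                            (name,)).fetchone()[0]

# Job #1 and job #2 both add the same file: both calls succeed and
# resolve to the same id.
a = add_filename(db, "vmlinuz")
b = add_filename(db, "vmlinuz")
```

Both transactions complete without failure, which is the property Dan is asking about.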
On Fri, Feb 10, 2006 at 03:12:24PM +, Martin Simmons wrote:
> Because it was tied to multiple connections and this made it fatally broken
> because of how the filename and path tables are updated.
Could you elaborate on the problem? Multiple connections from where?
--
Karl Hakimian
[EMAIL PROTECTED]
> I don't think COPY will be useful.
I don't think I'm ready to give up on the copy command yet. The amount
of data in the filename and path tables is small compared to the file
table. If we created a copy command while updating the file and path
tables and then dumped the file updates via copy, t
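Karl's COPY idea amounts to spooling the large file-table rows into COPY's tab-separated text format and loading them in one statement rather than thousands of INSERTs. A sketch of the spooling side; the column list is an illustrative guess, not Bacula's exact schema, and the buffer is what a client would stream to `COPY file ... FROM STDIN`:

```python
# Sketch: render rows as the tab-separated text that PostgreSQL's
# COPY ... FROM STDIN consumes.  Column values here are illustrative.
import io

def spool_rows(rows):
    """Render rows as COPY-compatible tab-separated text."""
    buf = io.StringIO()
    for row in rows:
        buf.write("\t".join(str(col) for col in row) + "\n")
    buf.seek(0)
    return buf

# Two spooled file-table rows, one line each.
buf = spool_rows([(1, 10, 100, "gD ImA"), (1, 10, 101, "gD ImB")])
```

Because the filename and path tables are comparatively small, they could still be maintained with ordinary inserts while only the big file table goes through COPY.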
On 10 Feb 2006 at 5:53, [EMAIL PROTECTED] wrote:
> While de-spooling attributes into my postgres database for a full backup
> takes about two hours, I noticed that postgres was able to dump the
> entire database and create a brand new one including indexes in just
> a couple of minutes. It seems t
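One plausible reason the dump/restore is so much faster than de-spooling: a restore bulk-loads the table first and builds the indexes once afterwards, instead of updating every index on every insert. A minimal sketch of that load-then-index pattern (sqlite3 standing in for PostgreSQL):

```python
# Sketch of why restore is fast: bulk load with no indexes in place,
# then build the index once over the finished table.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE file (jobid INTEGER, name TEXT)")

# Bulk load first, with no indexes to maintain...
db.executemany("INSERT INTO file VALUES (?, ?)",
               [(1, "f%d" % i) for i in range(1000)])
# ...then pay the index-build cost a single time.
db.execute("CREATE INDEX file_name_idx ON file (name)")
db.commit()
```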