On Thu, 2013-08-08 at 17:36 -0400, John Drescher wrote:
> You will have to also clean up your volumes.
I'm planning on deleting all the jobs, then purging all the volumes. If
my understanding of what a purge does is correct, then the data that is
currently written to the volumes (which in my case
> So what's the easiest way to clear everything out so I can start over?
> Drop the database and reload it from the bacula.sql script?
>
You will have to also clean up your volumes.
John
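For anyone following along, the volume cleanup can usually be done from bconsole rather than by touching the database directly; a rough sketch with made-up names (purging only clears catalog records, it does not erase the media until the volume is relabeled or recycled):

  * delete jobid=13          # drop one job's catalog records (repeat per job)
  * purge volume=Vol0001     # mark every record on that volume as purged
  * delete volume=Vol0001    # remove the volume from the catalog entirely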
On Thu, Aug 8, 2013 at 5:23 PM, Greg Woods wrote:
>
> OK, I think I have it figured out, and it's not pretty. I suspect that
> most of the backups I have already done are useless and I will have to
> repeat all of the Full backups and start all over again.
>
> The answer came when I tried to do a
Hello everybody,
I have a very simple problem, but at the moment, I can't see the solution.
There is no log in /var/log/bacula/.
The important parts of the Director configuration:
Messages {
Name = Standard
mailcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r
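If the goal is a log file under /var/log/bacula, the Messages resource also needs an append destination; a sketch of a typical complete resource (paths and the address here are the stock defaults, adjust to taste):

  Messages {
    Name = Standard
    mailcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula: %t %e of %c %l\" %r"
    mail = root@localhost = all, !skipped
    console = all, !skipped, !saved
    append = "/var/log/bacula/bacula.log" = all, !skipped
    catalog = all
  }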
OK, I think I have it figured out, and it's not pretty. I suspect that
most of the backups I have already done are useless and I will have to
repeat all of the Full backups and start all over again.
The answer came when I tried to do a restore. I thought, OK, if I messed
up the director configur
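If it helps anyone else in the same spot, a quick way to sanity-check what a job actually captured, without going through a full restore (jobid 13 is just the one from the listing quoted later in this thread):

  * list files jobid=13     # every file the catalog holds for that job
  * list jobtotals          # per-job file and byte counts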
On Thu, 2013-08-08 at 14:39 -0400, John Drescher wrote:
> Make sure the times of the folders (modified and ctime) and also any
> attributes are not changing between backups.
I suppose it's good to check the obvious and the stupid first, but I
don't think that's it. I don't know how to check to see if
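One way to check, assuming GNU stat on the client:

  $ stat -c 'mtime: %y  ctime: %z  %n' /path/to/folder

%y prints the modification time and %z the status-change time (ctime); a file whose mtime and ctime both predate the last backup's start time should not be selected again for an Incremental.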
On Thu, 2013-08-08 at 12:12 -0600, Greg Woods wrote:
> On Thu, 2013-08-08 at 11:08 +0100, Martin Simmons wrote:
> > > On Wed, 07 Aug 2013 19:11:33 -0600, Greg Woods said:
>
> > > | 13 | anathem | 2013-08-02 20:37:18 | B | F | 800,022 | 137,247,853,895 | T |
> > >
On Thu, Aug 8, 2013 at 2:39 PM, John Drescher wrote:
> Make sure the times of the folders (modified and ctime) and also any
> attributes are not changing between backups. I don't think the problem
> has anything to do with your configuration files.
>
> "The File daemon (Client) decides which files to
Make sure the times of the folders (modified and ctime) and also any
attributes are not changing between backups. I don't think the problem
has anything to do with your configuration files.
"The File daemon (Client) decides which files to backup for an
Incremental backup by comparing start time of the
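A way to preview that decision before running anything, assuming the job name from the listing quoted earlier (the console's estimate command accepts a listing option):

  * estimate job=anathem level=Incremental listing

If that lists files that have not actually changed, it helps narrow down whether the FD really believes those files need to be backed up again.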
> Password =
Well, that was stupid of me. I am now going to have to change my
password in all of the &*#()@! bacula-fd.conf files. Boot to the head
:-)
--Greg
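For anyone hitting the same thing: the password that has to match is the one in the Director resource of each client's bacula-fd.conf and in the corresponding Client resource of bacula-dir.conf; a minimal sketch with made-up names:

  # bacula-fd.conf on the client
  Director {
    Name = backup-dir
    Password = "long-random-string"
  }

  # bacula-dir.conf on the director
  Client {
    Name = anathem-fd
    Address = anathem.example.com
    Catalog = MyCatalog
    Password = "long-random-string"   # must match the FD's Director resource
  }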
On Thu, 2013-08-08 at 11:08 +0100, Martin Simmons wrote:
> > On Wed, 07 Aug 2013 19:11:33 -0600, Greg Woods said:
> > | 13 | anathem | 2013-08-02 20:37:18 | B | F | 800,022 | 137,247,853,895 | T |
> > | 43 | anathem | 2013-08-07 14:32:19 | B | I |
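(For context, these rows come from bconsole's "list jobs" output; assuming the usual column layout they read roughly as:

  | JobId | Name    | StartTime           | Type | Level | JobFiles | JobBytes        | JobStatus |
  | 13    | anathem | 2013-08-02 20:37:18 | B    | F     | 800,022  | 137,247,853,895 | T         |

so job 13 is a Full that terminated OK with about 137 GB, and job 43 is the Incremental being discussed.)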
I have two LTO5 tape drives. In the bacula-sd.conf file I have the spool size
set up as follows for both drives:
Maximum File Size = 10G
Maximum Network Buffer Size = 65536
Maximum Block Size = 262144
maximum spool size = 900 G
spool directory = /tower4/bacula_spool
I am having trouble ba
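For reference, those directives normally sit inside each Device resource of bacula-sd.conf; a sketch of how one of the two drives might look (device path and names are made up):

  Device {
    Name = LTO5-Drive-0
    Media Type = LTO-5
    Archive Device = /dev/nst0
    AutomaticMount = yes
    RemovableMedia = yes
    RandomAccess = no
    Maximum File Size = 10G
    Maximum Network Buffer Size = 65536
    Maximum Block Size = 262144
    Maximum Spool Size = 900G
    Spool Directory = /tower4/bacula_spool
  }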
I am trying to do a full backup of a RHEL 5 server over a 1GB network; it has
10TB of data and it is failing while despooling. Sometimes the first set of
spooled data de-spools fine, but subsequent spooled data fails to despool. I
have "Heartbeat Interval = 120" both on the server/client config
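For reference, that directive goes in the FileDaemon resource on the client and in the Storage resource of bacula-sd.conf; a sketch with made-up names (whether it cures despool failures depends on where the idle connection is being dropped, often a firewall between FD and SD):

  # bacula-fd.conf
  FileDaemon {
    Name = rhel5-fd
    Heartbeat Interval = 120
    # other existing directives unchanged
  }

  # bacula-sd.conf
  Storage {
    Name = tower4-sd
    Heartbeat Interval = 120
    # other existing directives unchanged
  }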
> On Wed, 07 Aug 2013 19:11:33 -0600, Greg Woods said:
>
> I'm a new Bacula user, having just set up a system for backing up the
> machines in my house (the storage server is a Raspberry Pi with a 4TB
> external disk drive attached to it).
>
> My question concerns backup levels. This afternoo
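In case it helps frame the answer: levels are normally driven by the Schedule resource in bacula-dir.conf, so something like the stock example decides when a Full, Differential or Incremental runs (this is the sample shipped with Bacula, not the poster's actual config):

  Schedule {
    Name = "WeeklyCycle"
    Run = Full 1st sun at 23:05
    Run = Differential 2nd-5th sun at 23:05
    Run = Incremental mon-sat at 23:05
  }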
Quoting azurIt:
> Hi,
>
> I'm having some MySQL performance difficulties, so I started to
> look into what I can do better. My table 'File' had these indexes
> created:
> CREATE INDEX file_jobid_idx on File (JobId);
> CREATE INDEX file_jpf_idx on File (JobId, PathId, FilenameId);
>
> Which loo
Hi,
I'm having some MySQL performance difficulties, so I started to look into what
I can do better. My table 'File' had these indexes created:
CREATE INDEX file_jobid_idx on File (JobId);
CREATE INDEX file_jpf_idx on File (JobId, PathId, FilenameId);
Which looks correct according to the documentation:
h
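A quick way to confirm which indexes exist and whether a slow query actually uses them (plain MySQL against the standard Bacula schema; the JobId is just an example):

  SHOW INDEX FROM File;

  EXPLAIN SELECT Path.Path, Filename.Name
    FROM File
    JOIN Path     ON Path.PathId = File.PathId
    JOIN Filename ON Filename.FilenameId = File.FilenameId
   WHERE File.JobId = 13;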