Hello,
thanks for your fast response.
Question is answered, problem should be solved - perfect!
Best regards,
Philipp
On 08.02.2017 at 14:05, Kern Sibbald wrote:
> Hello,
>
> I confirm that the Allow Duplicate Jobs patch is, and has been for some
> time, in Bacula.
>
> Best regards,
> Kern
>
>
Hello,
I confirm that the Allow Duplicate Jobs patch is, and has been for some
time, in Bacula.
Best regards,
Kern
On 02/08/2017 01:36 PM, Thomas Lohman wrote:
>> One of the queued backups is the next incremental backup of "archive".
>> My expectation was that the incremental backup would run onl
> One of the queued backups is the next incremental backup of "archive".
> My expectation was that the incremental backup would run only some hours
> after the full backup finishes, so the difference is really small and it
> only takes some minutes and only requires a small amount of tape
> storage
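For readers looking for the directive Kern confirms: Allow Duplicate Jobs is set per Job in the Director configuration. A minimal sketch follows; the resource names are placeholders, not taken from this thread, and the cancel directive is the documented companion setting rather than something the posters mention.

  Job {
    Name = "archive-backup"           # placeholder job name
    Type = Backup
    Client = archive-fd               # placeholder client
    FileSet = "ArchiveSet"            # placeholder fileset
    Schedule = "WeeklyCycle"          # placeholder schedule
    Storage = Tape                    # placeholder storage
    Pool = Default
    Messages = Standard
    Allow Duplicate Jobs = no         # do not let two instances of this job run
    Cancel Queued Duplicates = yes    # drop the duplicate that is still waiting
  }

The intent is that a queued job duplicating one already running is cancelled rather than started a second time.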
15.12.2011 15:04, Konstantin Khomoutov wrote:
> On Thu, 15 Dec 2011 13:22:41 +0400
> Vladimir Vassiliev wrote:
>
>> If I restore only files from one certain incremental job, attributes
>> of some directories are wrongly restored; as I understand it, that's
>> because these directories were not backed
On Thu, 15 Dec 2011 13:22:41 +0400
Vladimir Vassiliev wrote:
> If I restore only files from one certain incremental job, attributes
> of some directories are wrongly restored; as I understand it, that's
> because these directories were not backed up in this job but only
> files somewhere deeper in the tree, so
On Saturday, 15 January 2011 15:27:48, Bart Swedrowski wrote:
> On 15 January 2011 14:12, Eric Bollengier wrote:
> >
> > It sounds like a bug: when the FileDaemon is computing the checksum of
> > the file, it updates the Bytes Written counter when it shouldn't.
> >
> > Looks trivial to fi
On 15 January 2011 14:12, Eric Bollengier wrote:
> It sounds like a bug: when the FileDaemon is computing the checksum of the
> file, it updates the Bytes Written counter when it shouldn't.
>
> Looks trivial to fix, but I need some time to test the patch
That's interesting. Would you like me t
Hello,
> Now the bit that is particularly interesting to me is:
>
> FD Bytes Written: 40,119,463,364 (40.11 GB)
> SD Bytes Written: 256,785,265 (256.7 MB)
>
> Nothing has been written to the FD; the FD was only being read during the
> backup. And the amount shown as "SD B
On 14 January 2011 20:18, Martin Simmons wrote:
> It sounds like you have some large files which compress a lot.
>
Nah, I don't think that is the case. I know what those files are, and they
are mainly small, tiny files like emails and small log files.
Have a look at the output below.
*14-Jan 02:38
> On Fri, 14 Jan 2011 09:23:37 +, Bart Swedrowski said:
>
> 2011/1/13 Mark :
> > Have you done a 'list files jobid=' for one of your incrementals?
> > Maybe you have a few really large files that are getting changed every day,
> > and therefore getting backed up each day.
>
> Yeah, I tri
On 14 January 2011 09:23, Bart Swedrowski wrote:
> Also, it's Bacula 5.0.3-2 re-compiled from sources provided on www.bacula.org.
Sorry - that is Bacula 5.0.3-1 re-compiled from sources on www.bacula.org.
2011/1/13 Mark :
> Have you done a 'list files jobid=' for one of your incrementals?
> Maybe you have a few really large files that are getting changed every day,
> and therefore getting backed up each day.
Yeah, I tried that, too. It's only listing files that got changed/are
new and should be b
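For anyone wanting to repeat that check, the commands Mark refers to are issued from bconsole; a quick sketch, where 1234 is just a placeholder JobId:

  *list jobs
  *list files jobid=1234

The second command prints every file recorded in the catalog for that job, which makes it easy to spot a handful of large files showing up in every incremental.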
Another solution, though not quite the best: create a pool and jobs specific
to the PST file and set the retention on the pool to, say, 7 days. That way
you can back up/restore the PST file separately and not affect the backup of
the rest of your system.
Obviously you still have to send over
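Roughly, that suggestion translates into a dedicated FileSet, Pool and Job. In the sketch below every name and path is a made-up placeholder; only the 7-day retention comes from the suggestion above.

  FileSet {
    Name = "PST-Only"
    Include {
      Options {
        signature = MD5
        compression = GZIP
      }
      File = "C:/Users/example/Outlook/mailbox.pst"   # placeholder path
    }
  }

  Pool {
    Name = "PST-Pool"
    Pool Type = Backup
    Volume Retention = 7 days    # short retention just for the PST volumes
    AutoPrune = yes
    Recycle = yes
  }

  Job {
    Name = "pst-backup"
    Type = Backup
    Client = winclient-fd        # placeholder client
    FileSet = "PST-Only"
    Pool = "PST-Pool"
    Schedule = "DailyCycle"      # placeholder schedule
    Storage = File               # placeholder storage
    Messages = Standard
  }

The main job's FileSet would then exclude the PST path so the file is not backed up twice.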
On Jan 13, 2011, at 3:44 PM, Lawrence Strydom wrote:
> I understand that something is adding data and logically the backup should
> grow. What I don't understand is why the entire file has to be backed up if
> only a few bytes of data have changed. It is mainly outlook.pst files and
> MSSQL data
Thanks for the clear answer, Paul.
Seems like I will have to enable Accurate and buy more disks.
On 13 January 2011 23:27, Paul Mather wrote:
> On Jan 13, 2011, at 3:44 PM, Lawrence Strydom wrote:
>
> > I understand that something is adding data and logically the backup
> should grow. What I do
2011/1/13 Lawrence Strydom
> Hi, and thanks for all the replies so far.
>
> I'm running Bacula 5.0.3 on OpenSuSE 11.3. Self-compiled with the following
> configure options:
>
> * --enable-smartalloc --sbindir=/usr/local/bacula/bin
> --sysconfdir=/usr/local/bacula/bin -with-mysql -with-openssl -ena
Sorry, Bacula is not that clever; indeed, it's just checking for files which
have changed. It's not able to determine how the file changed, or to back up
just those bits which changed.
---Guy
Sent from my iPad
On 13 Jan 2011, at 20:44, Lawrence Strydom wrote:
> Hi, and thanks for all the replies so fa
Hi, and thanks for all the replies so far.
I'm running Bacula 5.0.3 on OpenSuSE 11.3. Self-compiled with the following
configure options:
* --enable-smartalloc --sbindir=/usr/local/bacula/bin
--sysconfdir=/usr/local/bacula/bin -with-mysql -with-openssl -enable-bat
-sysconfdir=/etc/bacula -enable-t
On Thu, Jan 13, 2011 at 4:42 AM, Bart Swedrowski wrote:
>
> I think what Lawrence meant was that, say, the full backup takes 33GB, like
> the one below.
>
> | 1,089 | tic FS | 2011-01-08 02:05:03 | B | F | 464,798 | 33,390,404,320 | T |
>
> Now, if you do Incremental backup, it'
> First, there's something adding data every day, so that's why there's more and
> more data.
>
I hope you put a limit on the file size or usage duration so that this
volume does not grow until it fills up the disk. Remember, retention
does not work until the volume is marked Full or Used, and for
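In Pool terms that usually means capping each volume; the values below are only examples, not taken from the poster's setup:

  Pool {
    Name = "FilePool"                # placeholder pool name
    Pool Type = Backup
    Maximum Volume Bytes = 5G        # mark the volume Full at roughly 5 GB
    Volume Use Duration = 7 days     # or mark it Used after a week
    Volume Retention = 30 days
    AutoPrune = yes
    Recycle = yes
  }

Either limit causes the volume to be marked Full or Used, which is the point at which the retention period starts to apply.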
On 01/13/2011 11:42 AM, Bart Swedrowski wrote:
> 2011/1/12 Kleber Leal
>> Yes. The entire file is backed up again when it gets modified.
>> Incremental backups include all modified files since the last backup (Full,
>> Incremental or Differential). Incremental and Differential are file based.
>> If
2011/1/12 Lawrence Strydom :
> This leads me
> to believe that the entire file is being backed up instead of only the
> changed data which is my understanding of a differential backup.
The only program I know that works that way is rdiff-backup.
It's very efficient at saving space, but you do now
2011/1/12 Kleber Leal
> Yes. The entire file is backed up again when it gets modified.
> Incremental backups include all modified files since the last backup (Full,
> Incremental or Differential). Incremental and Differential are file based.
> If you have a 100GB file and it was modified, it will
Yes. The entire file is backed up again when it gets modified.
Incremental backups include all modified files since the last backup (Full,
Incremental or Differential). Incremental and Differential are file based.
If you have a 100GB file and it was modified, it will be backed up and
will use this s
Jordi
- Original Message -
From: "Ryan Novosielski"
To: bacula-users@lists.sourceforge.net
Sent: Wednesday, 16 June 2010 16:25:58
Subject: Re: [Bacula-users] Incremental backups always full size
I'm not certain. I'm
> right?
>
> - Original Message -
> From: "Ryan Novosielski"
> To: "Jordi Augé"
> Cc: bacula-users@lists.sourceforge.net
> Sent: Wednesday, 16 June 2010 15:21:18
> Subject: Re: [Bacula-users] Incremental backups always full size
>
the same Bacula relies on, right?
- Original Message -
From: "Ryan Novosielski"
To: "Jordi Augé"
Cc: bacula-users@lists.sourceforge.net
Sent: Wednesday, 16 June 2010 15:21:18
Subject: Re: [Bacula-users] Incremental backups always full size
Jordi Augé wrote:
> Hello,
>
> I am having a problem with a Bacula 5.0.1 installation. It's an Ubuntu 10.04
> server, and Bacula's been installed from the Ubuntu repositories.
>
> I'm backing up about 165 GB of data onto files and I'd like to make t
If I understand well:
you can restore a single file, there's no problem with that; the problem is
whether the file is in your backup!
If you receive the mail on Tuesday and you delete it on Thursday,
obviously you can't find it in Monday's backup.
You have to do a full or differential backup every weekend, a
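A schedule along those lines might look like the sketch below; the name, levels and times are placeholders to illustrate the pattern, not a recommendation from the thread:

  Schedule {
    Name = "WeeklyCycle"
    Run = Full sun at 23:05               # full every weekend
    Run = Incremental mon-sat at 23:05    # incrementals during the week
  }

The exact levels and times are site-specific; the point is simply that a periodic Full or Differential anchors the chain of incrementals you restore from.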
On Tuesday 02 June 2009 22:15:47 Martin Simmons wrote:
> > On Tue, 2 Jun 2009 10:54:24 +0300, Silver Salonen said:
> >
> > Hello.
> >
> > Currently I see this behavior in Bacula 3.0.0 (on FreeBSD):
> >
> > I back up a directory (having 6 files), then remove the original directory and
> > re
> On Tue, 2 Jun 2009 10:54:24 +0300, Silver Salonen said:
>
> Hello.
>
> Currently I see this behavior in Bacula 3.0.0 (on FreeBSD):
>
> I back up a directory (having 6 files), then remove the original directory and
> restore it from backup. When I run an incremental backup of the same
> di
On 18.02.2009, at 22:08, Arno Lehmann wrote:
>> Is there any way to tell bacula it should back up all new files (new
>> meaning "not already backed up") within this directory, regardless of
>> the timestamp, without doing a full backup?
>>
> Solution one: Wait for 3.0, and / or start testing the cu
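Assuming the 3.0 feature Arno has in mind is Accurate mode, which other posts in this digest also mention, it is switched on per Job; a minimal sketch with placeholder resource names:

  Job {
    Name = "archive-backup"        # placeholder name
    Type = Backup
    Client = archive-fd            # placeholder client
    FileSet = "ArchiveSet"         # placeholder fileset
    Schedule = "WeeklyCycle"       # placeholder schedule
    Storage = File                 # placeholder storage
    Pool = Default
    Messages = Standard
    Accurate = yes                 # compare against the catalog, not just timestamps
  }

With Accurate enabled the File Daemon receives the list of files already in the catalog, so files moved into /archive with old timestamps can still be picked up by an incremental.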
On 19.02.2009, at 14:50, Jari Fredriksson wrote:
>> Is there any way to tell bacula it should back up all new
>> files (new meaning "not already backed up") within this
>> directory, regardless of the timestamp, without doing a
>> full backup?
> Exclude the /archive from your normal backup job file
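Jari's suggestion would look roughly like this in the FileSet of the normal job, with /archive then handled by a job of its own; names and paths below are placeholders:

  FileSet {
    Name = "NormalSet"
    Include {
      Options {
        signature = MD5
      }
      File = /                     # whatever the normal job backs up
    }
    Exclude {
      File = /archive              # keep the special directory out of this job
    }
  }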
> For special data we have a backup scheme that does not
> really fit bacula's idea of incremental backups:
>
> There is a directory, say /archive, that is empty by
> default. If something needs to be backed up by bacula, it
> is copied (or moved) into this directory. Then the backup
> job is star
Hi,
18.02.2009 14:55, Sebastian Stark wrote:
> For special data we have a backup scheme that does not really fit
> bacula's idea of incremental backups:
>
> There is a directory, say /archive, that is empty by default. If
> something needs to be backed up by bacula, it is copied (or moved)
yup, you were both right
[EMAIL PROTECTED]:~/backup$ stat bkw_home_and_etc_venus30Jun07.tar.gz
File: `bkw_home_and_etc_venus30Jun07.tar.gz'
Size: 724985425   Blocks: 1417384   IO Block: 4096   regular file
Device: 807h/2055d Inode: 9650180 Links: 1
Access: (0750/-rwxr-x---) Uid:
Hi,
28.08.2007 23:37, Bachman Kharazmi wrote:
> Hi
> I've had a problem with daily incremental backups, which seem to
> back up unchanged files.
> The size of my incremental jobs is 1G/day, which started to worry me.
Make sure the files are *really* unchanged. Under unix/linux, 'stat
filename'
On 8/28/07, Bachman Kharazmi <[EMAIL PROTECTED]> wrote:
> Hi
> I've had a problem with daily incremental backups, which seem to
> back up unchanged files.
> The size of my incremental jobs is 1G/day, which started to worry me.
>
> Please see output of job here:
> http://pastebin.ca/674018
>
> And co
On Thu, 19 May 2005, Arno Lehmann wrote:
As far as I know, NTFS has timestamps - atime, mtime and ctime - similar to
those of normal unix file systems. I'm not sure, but I think I remember reading
somewhere that under Windows you can avoid changing them when you modify a
file.
There are more attributes t
> On Thu, 19 May 2005 02:02:37 +0200, Arno Lehmann <[EMAIL PROTECTED]> said:
Arno> Ryan LeBlanc wrote:
>> Arno, thank you for your response.
>>
>> Here are our details:
>>
>> Bacula version 1.36.3 server running on Linux kernel 2.4.26. It has
>> ext2 partitions mounted (rw)
Ryan LeBlanc wrote:
Arno, thank you for your response.
Here are our details:
Bacula version 1.36.3 server running on Linux kernel 2.4.26. It has
ext2 partitions mounted (rw)
Ok, the server doesn't matter here, I think.
The client is running Windows XP, no special mount options, just windows
defaul
Arno, thank you for your response.
Here are our details:
Bacula version 1.36.3 server running on Linux kernel 2.4.26. It has
ext2 partitions mounted (rw)
The client is running Windows XP, no special mount options, just windows
default. NTFS format on the partition
Arno Lehmann wrote:
> Hell
Hello,
Ryan LeBlanc wrote:
We are running tests with Bacula to see if it will work in our
environment. So far, we are very impressed!
We have, however, run into a small problem. We do a full backup of a
folder, and all files are copied as expected. We then put a file into
this folder. It, howev