> Hello people,
>
> I'm using Bacula-win v. 2.4.3 and everything was working fine; suddenly
> Bacula stopped backing up files from the clients. The message is:
> Cannot open "Path\File..." ERR: Access is denied
>
> The backups are being stored in files (I'm not using tapes).
>
> The curiou
Hello people,
I'm using Bacula-win v. 2.4.3 and everything was working fine; suddenly Bacula stopped backing up files from the clients. The message is: Cannot open "Path\File..." ERR: Access is denied
The backups are being stored in files (I'm not using tapes).
The curious thing is that anyth
> On Thu, 27 Nov 2008 17:19:21, Allan Black said:
>
> Hi, all,
>
> Can anyone help me analyse the problem here? On more than
> one occasion, a DDS3 drive has produced this, when it reaches
> the end of a tape:
>
> 23-Nov 22:03 gershwin-dir JobId 506: Using Device "DDS3-0"
> 23-Nov 22:0
bacula wrote:
> Hi,
>
> I set up a new Bacula installation, and the only thing that isn't working
> yet is the make_catalog_backup script (I'm using sqlite3). I use the
> original script that comes with the Ubuntu packages.
>
> here's the output of ls -al
Daniel Betz wrote:
> Hi!
>
> I have the same problem with a large number of files on one filesystem
> (Maildir).
> Now I have 2 concurrent jobs running and the backups need half the time.
> I haven't tested 4 concurrent jobs yet .. :-)
That would be a really nice feature to have
Tobias Bartel wrote:
>> Even with 800,000 files, that sounds very slow. How much data is
>> involved, how is it stored and how fast is your database server?
>
> It's about 70GB of data, stored on a Raid5 (3Ware controller).
>
> The database is a SQLite one, on the same machine but on a Software
Hi,
27.11.2008 18:04, Kelly, Brian wrote:
> Arno,
>
> I've been studying different options for improving my database performance
> after reading your recommendations on improving catalog performance:
>
> "There are two rather simple solutions:
> - Don't keep file information for this job i
Tobias Bartel wrote:
> Hello,
>
>> Even with 800,000 files, that sounds very slow. How much data is
>> involved, how is it stored and how fast is your database server?
>
> It's about 70GB of data, stored on a Raid5 (3Ware controller).
>
> The datab
Kjetil Torgrim Homme wrote:
> Jesper Krogh <[EMAIL PROTECTED]> writes:
>
>> Can you give us the time for doing a tar to /dev/null of the fileset.
>>
>> time tar cf /dev/null /path/to/maildir
>>
>> Then we have a feeling about the "actual read time" for the file of
>> the filesystem.
>
> if you're
Hi, all,
Can anyone help me analyse the problem here? On more than
one occasion, a DDS3 drive has produced this, when it reaches
the end of a tape:
23-Nov 22:03 gershwin-dir JobId 506: Using Device "DDS3-0"
23-Nov 22:06 gershwin-sd JobId 506: End of Volume "MainCatalog-004" at 57:4963
on device
Hi!
I have the same problem with a large number of files on one filesystem
(Maildir).
Now I have 2 concurrent jobs running and the backups need half the time.
I haven't tested 4 concurrent jobs yet .. :-)
Greetings,
Daniel Betz
Platform Engineer
__
Tobias Bartel wrote:
> Hello,
>
> I am tasked to set up daily full backups of our entire fax communication,
> and they are all stored in one single directory ;). There are about
> 800,000 files in that directory, which makes accessing that directory
> extremely slow. The target device is an LTO3 tape d
Hello,
> Even with 800,000 files, that sounds very slow. How much data is
> involved, how is it stored and how fast is your database server?
It's about 70GB of data, stored on a Raid5 (3Ware controller).
The database is a SQLite one, on the same machine but on a Software
Raid 1.
The backup de
Alan Brown <[EMAIL PROTECTED]> writes:
> Ext3 will perform a lot better if you use tune2fs and enable the
> following features:
>
> dir_index
> Use hashed b-trees to speed up lookups in large
> directories.
this may be good for Maildir, but with Cyrus IMAPD, whic
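The tune2fs advice quoted above can be sketched as follows. The device name /dev/sdXN is a placeholder, and the e2fsck pass must be run on an unmounted filesystem:

```shell
# Check whether dir_index is already enabled (device name is a placeholder)
tune2fs -l /dev/sdXN | grep 'Filesystem features'

# Turn on hashed b-tree directory indexing for the ext3 filesystem
tune2fs -O dir_index /dev/sdXN

# Directories that already exist are only rebuilt as b-trees by an
# e2fsck -D ("optimize directories") pass; unmount the filesystem first
e2fsck -fD /dev/sdXN
```

New directories created after enabling the feature are indexed automatically; the fsck pass is only needed to convert directories that already exist.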
Arno,
I've been studying different options for improving my database performance
after reading your recommendations on improving catalog performance:
"There are two rather simple solutions:
- Don't keep file information for this job in the catalog. This makes
restoring single mails difficult
Hi,
27.11.2008 14:15, Boris Kunstleben onOffice Software GmbH wrote:
...
> @Arno
> I'm not that good at MySQL, but I already tuned MySQL and set some new
> indexes, see below:
> mysql> show index from File;
Actually, my suggestion was to *remove* indexes; updating an index
when adding new data
Hi,
27.11.2008 17:10, Tobias Bartel wrote:
> Hello,
>
> I am tasked to set up daily full backups of our entire fax communication,
> and they are all stored in one single directory ;). There are about
> 800,000 files in that directory, which makes accessing that directory
> extremely slow. The target
Jesper Krogh <[EMAIL PROTECTED]> writes:
> Can you give us the time for doing a tar to /dev/null of the fileset.
>
> time tar cf /dev/null /path/to/maildir
>
> Then we have a feeling about the "actual read time" for the file of
> the filesystem.
if you're using GNU tar, it will *not* read the fil
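The caveat is worth spelling out: GNU tar special-cases an archive written to /dev/null and skips reading file contents, so the suggested timing mostly measures directory traversal. Writing the archive to a pipe forces a real read (the path is the example from the thread):

```shell
# GNU tar does not read file data when the archive is /dev/null, so
# "tar cf /dev/null dir" underestimates the real read time.
# Piping through cat defeats that optimization and reads every file:
time tar cf - /path/to/maildir | cat > /dev/null
```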
Hi,
27.11.2008 15:10, Pasi Kärkkäinen wrote:
> On Thu, Nov 27, 2008 at 08:14:45AM +0100, Arno Lehmann wrote:
>> Hi,
>>
>> 26.11.2008 21:22, Bob Hetzel wrote:
>>> I've got bacula currently in a hung state with the following interesting
>>> info. When I run a status storage produces the following.
Tobias Bartel wrote:
> Hello,
>
> I am tasked to set up daily full backups of our entire fax communication,
> and they are all stored in one single directory ;). There are about
> 800,000 files in that directory, which makes accessing that directory
> extremely slow. The target device is an LTO3 tape d
Hello,
I am tasked to set up daily full backups of our entire fax communication,
and they are all stored in one single directory ;). There are about
800,000 files in that directory, which makes accessing that directory
extremely slow. The target device is an LTO3 tape drive with an 8-slot
changer.
Wi
Hi Alan,
I did that already (except the superblocks). I think it's ext3 itself in
combination with the virtual machines.
THX so far
Boris Kunstleben
--
--
onOffice Software GmbH
Feldstr. 40
52070 Aachen
Tel
Hi Mike,
thanks for the advice. That was my thought too. I'm already designing a new
mail architecture; I'll change the filesystem then.
Kind regards
Boris Kunstleben
What kind of security are you looking for? If you only have minimal
security needs and speed is your overriding concern, the fastest
"encryption" is probably XOR. Otherwise, I would probably go with PGP.
Lukasz Szybalski wrote:
> Hello,
>
> I was wondering if somebody could suggest a f
On Thursday 27 November 2008 16:29:50 you wrote:
> Silver Salonen wrote:
> > And when you have many incrementals in a row while restoring, you end up
> > seeing many duplicate messages, that have been deleted or moved during
these
> > incrementals.
> >
>
> For such case use snapshots - freez
On Thu, 27 Nov 2008, Boris Kunstleben onOffice Software GmbH wrote:
> any idea if there is a better filesystem? I'm using ext3 on the clients
> and XFS on the director
I believe XFS copes fine with overstuffed directories.
Ext3 will perform a lot better if you use tune2fs and enable the following
On Thu, 27 Nov 2008, Willians Vivanco wrote:
> > What are the permissions of /dev/nst* and what userid are the programs
> > running as?
> >
> The normal permissions root:tape
Bacula-sd and btape usually run as bacula:tape - as a test, set the
/dev/nst devices world-writable and see if the error p
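That diagnostic could look like the following (a temporary test only; restore the usual permissions afterwards):

```shell
# Record the current ownership/permissions of the tape devices
ls -l /dev/nst*

# Temporary diagnostic: make the devices world-writable, then retry
# btape / bacula-sd to see whether the "access denied" error disappears
chmod 666 /dev/nst*

# Afterwards, restore the usual root:tape 660 setup
chown root:tape /dev/nst*
chmod 660 /dev/nst*
```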
Hi,
I set up a new Bacula installation, and the only thing that isn't working
yet is the make_catalog_backup script (I'm using sqlite3). I use the
original script that comes with the Ubuntu packages.
here's the output of ls -al in /var/lib/bacula:
drwx------ 2 bacula bacula 4096 2008-11-27 15:03 .
d
On Thu, Nov 27, 2008 at 08:14:45AM +0100, Arno Lehmann wrote:
> Hi,
>
> 26.11.2008 21:22, Bob Hetzel wrote:
> > I've got bacula currently in a hung state with the following interesting
> > info. When I run a status storage produces the following...
>
> Is your Bacula still stuck? If so, and you
Kevin Keane wrote:
> You are using a very old version of bacula! Maybe you can find a version
> for your Linux distribution that is more current? I believe 2.4.3 is the
> current one.
>
> Boris Kunstleben onOffice Software GmbH wrote:
>> Hi,
>>
>> know i got all the necessary Information (bacula
Boris Kunstleben onOffice Software GmbH wrote:
> any idea if there is a better filesystem? I'm using ext3 on the clients
> and XFS on the director
ext3 is possibly not a good fs for a Maildir. Can't offer you any personal
accounts, but I was looking at Google for something else regarding
filesystem
You are using a very old version of bacula! Maybe you can find a version
for your Linux distribution that is more current? I believe 2.4.3 is the
current one.
Boris Kunstleben onOffice Software GmbH wrote:
> Hi,
>
>> now I've got all the necessary information (bacula-director version 1.38.11-8):
>
Hi Alan,
any idea if there is a better filesystem? I'm using ext3 on the clients
and XFS on the director
Kind Regards Boris Kunstleben
Hi,
now I've got all the necessary information (bacula-director version 1.38.11-8):
@Jesper (the timed tar)
server:~# time tar cf /dev/null /home/mailer4/
tar: Removing leading `/' from member names
real    96m25.390s
user    0m18.644s
sys     0m57.260s
@Arno
I'm not that good at MySQL, but I alre
If you are using the Label Format attribute to assign volume names, Bacula
normally appends the media ID. However, if you use variable expansion to
generate the name, it doesn't - and I don't see a way to use variable
expansion to add it again. I don't see it among the documented
variables; is there
On Wed, 26 Nov 2008, Willians Vivanco wrote:
> Thanks for the information, but... is there some specific action I can take
> to make Bacula recognize more than 64Kb on my tapes? I'm really confused,
> and my work is completely stopped for that reason.
Bacula just talks to the OS's generic tape interface.
What are
On Thu, 27 Nov 2008, Boris Kunstleben onOffice Software GmbH wrote:
> I am doing exactly that since last Thursday.
> I have about 1.6TB in Maildirs and a huge number of small files. I have to
> say it is awfully slow. Backing up a directory with about 190GB of Maildirs
> took "Elapsed time: 1 da
Personal Técnico wrote:
> Hi!
>
> I have configured these 2 pools for backup a server:
>
> Pool {
> Name = Incremental
> Label Format = "Server-Incr"
> Pool Type = Backup
> Recycle = yes
> AutoPrune = yes
> Storage = BackupRAID5
>
David Jurke wrote:
> The DBAs are already talking about partitioning and making the older
> tablespaces read-only and only backing them up weekly or fortnightly or
> whatever, which solves the problem for the daily backups but still leaves
> us with a weekly/fortnightly backup which won't fit in th
take a look at
http://www.oracle.com/technology/deploy/availability/pdf/oracle-openworld-2007/S291487_1_Chien.pdf
Backup and Recovery Best Practices for Very Large Databases (VLDBs)
Regards
D.
Hi,
27.11.2008 12:15, James Cort wrote:
> Arno Lehmann wrote:
>> - Tune your catalog database for faster inserts. That can mean moving
>> it to a faster machine, assigning more memory for it, or dropping some
>> indexes (during inserts). If you're not yet using batch inserts, try
>> to recompil
Arno Lehmann wrote:
> - Tune your catalog database for faster inserts. That can mean moving
> it to a faster machine, assigning more memory for it, or dropping some
> indexes (during inserts). If you're not yet using batch inserts, try
> to recompile Bacula with batch-inserts enabled.
Is that w
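A hedged sketch of the drop-indexes-during-inserts idea from the quoted recommendation; the index name and column list below are assumptions, so check SHOW INDEX FROM File against your actual catalog schema before using them:

```shell
# Drop a File-table index before an insert-heavy backup run ...
mysql bacula -e "ALTER TABLE File DROP INDEX JobId;"

# ... run the large backup jobs here ...

# ... and recreate it afterwards (index name and columns are assumptions;
# verify with SHOW INDEX FROM File on your own catalog first)
mysql bacula -e "ALTER TABLE File ADD INDEX JobId (JobId, PathId, FilenameId);"
```

Rebuilding the index once at the end is typically much cheaper than updating it for every one of hundreds of thousands of inserted rows.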
Hi!
I have configured these 2 pools for backup a server:
Pool {
Name = Incremental
Label Format = "Server-Incr"
Pool Type = Backup
Recycle = yes
AutoPrune = yes
Storage = BackupRAID5
Volume Use Duration = 7 days
2008/11/27 David Jurke <[EMAIL PROTECTED]>
> Whoa!
>
>
>
> Okay, I need to go talk to the DBAs about this lot, lots of it is too far
> on the DBA side for me to comment intelligently on it. It does sound
> promising, though - if we back up daily only the current month's data (the
> rest will be i
Hi,
27.11.2008 11:31, Boris Kunstleben wrote:
> Hi,
>
> I am doing exactly that since last Thursday.
> I have about 1.6TB in Maildirs and a huge number of small files. I have to
> say it is awfully slow. Backing up a directory with about 190GB of Maildirs
> took "Elapsed time: 1 day 14 hours 4
Boris Kunstleben onOffice Software GmbH wrote:
> Hi,
>
> I am doing exactly that since last Thursday.
> I have about 1.6TB in Maildirs and a huge number of small files. I have to
> say it is awfully slow. Backing up a directory with about 190GB of Maildirs
> took "Elapsed time: 1 day 14 hours 4
Hi,
I am doing exactly that since last Thursday.
I have about 1.6TB in Maildirs and a huge number of small files. I have to say
it is awfully slow. Backing up a directory with about 190GB of Maildirs took
"Elapsed time: 1 day 14 hours 49 mins 34 secs".
On the other hand i have a server with Doc
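For a sense of scale, the quoted elapsed time works out to very low average throughput. A quick back-of-the-envelope check, with the numbers taken from the message above:

```shell
# 190 GB in "1 day 14 hours 49 mins 34 secs" -> average MB/s
secs=$(( (24 + 14) * 3600 + 49 * 60 + 34 ))   # 139774 seconds total
mb=$(( 190 * 1024 ))                          # 194560 MB
awk -v mb="$mb" -v s="$secs" 'BEGIN { printf "%.2f MB/s\n", mb/s }'
# -> 1.39 MB/s
```

Sequential reads from a 3Ware RAID should manage tens of MB/s, so per-file overhead rather than raw disk speed is likely what dominates here.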
On Thursday 27 November 2008 11:07:41 James Cort wrote:
> Silver Salonen wrote:
> > On Thursday 27 November 2008 09:50:14 Proskurin Kirill wrote:
> >> Hello all!
> >>
> >> Soon I will deploy a large email server - it will use maildirs and will
> >> be about 1Tb of mail with really many small files
Silver Salonen wrote:
> On Thursday 27 November 2008 09:50:14 Proskurin Kirill wrote:
>> Hello all!
>>
>> Soon I will deploy a large email server - it will use maildirs and will
>> be about 1Tb of mail with really many small files.
>>
>> Are there any hints for making a backup of this via Bacula?
>>
> I
On Thursday 27 November 2008 09:50:14 Proskurin Kirill wrote:
> Hello all!
>
> Soon I will deploy a large email server - it will use maildirs and will
> be about 1Tb of mail with really many small files.
>
> Are there any hints for making a backup of this via Bacula?
>
>
> --
> Best regards,
> Prosk
Hello all!
Soon I will deploy a large email server - it will use maildirs and will
be about 1Tb of mail with really many small files.
Are there any hints for making a backup of this via Bacula?
--
Best regards,
Proskurin Kirill
53 matches