On 1/24/25 14:50, Phil Stracchino wrote:
On 1/24/25 14:43, Josh Fisher via Bacula-users wrote:
On 1/24/25 13:22, Bill Arlofski via Bacula-users wrote:
On 1/24/25 10:17 AM, Phil Stracchino wrote:
this huge atomic write *also* makes it incompatible with Galera 3
clusters.)
Are you sure about
Hello, Roberto Greiner!
In that message it was said...
> Any ideas of what could cause this issue?
Sorry Roberto, it is not clear to me.
Is ZFS the 'destination' of the backup, or the source? What is the backup
media?
I can confirm I still have trouble with ZFS as a SOURCE of backup: it seems
that Z
On 1/24/25 14:43, Josh Fisher via Bacula-users wrote:
On 1/24/25 13:22, Bill Arlofski via Bacula-users wrote:
On 1/24/25 10:17 AM, Phil Stracchino wrote:
this huge atomic write *also* makes it incompatible with Galera 3
clusters.)
Are you sure about that? The only thing attribute spooling is
On 1/24/25 13:22, Bill Arlofski via Bacula-users wrote:
On 1/24/25 10:17 AM, Phil Stracchino wrote:
Just FYI I am very unconvinced of the benefit of attribute spooling, at
least with a MySQL back-end, because the spooling implementation in
Bacula's MySQL driver is very bad.
(It writes attrib
On 1/24/25 13:22, Bill Arlofski via Bacula-users wrote:
I will let a developer comment on this as it is "Above my pay grade"™, but I
can say that I have seen horrific backup
performance with attribute spooling disabled. It has been very many years since
I last even tried, but apparently it is n
On 24/01/2025 14:11, Phil Stracchino wrote:
On 1/24/25 10:36, Roberto Greiner wrote:
Hi,
I'm having a performance issue with my installation.
When I make a backup, the backup runs well below the expected
speed. A restore, on the other hand, works fine.
In one example, I made a (full) backup of one server. The full backup of
137.7GB took 4h 25m.
On 1/24/25 10:17 AM, Phil Stracchino wrote:
Just FYI I am very unconvinced of the benefit of attribute spooling, at
least with a MySQL back-end, because the spooling implementation in
Bacula's MySQL driver is very bad.
(It writes attributes into a table to avoid writing them into a table,
then
On 1/24/25 10:48, Bill Arlofski via Bacula-users wrote:
Hello Roberto,
Any chance you have disabled attribute spooling? ie: `SpoolAttributes = no` in
a Job.
This is the first thing that I can see which could inexplicably slow things
down. If you are monitoring the MySQL
server/service, hav
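For reference, attribute spooling is a single directive in the Job (or JobDefs)
resource. A minimal sketch, with placeholder names:

Job {
  Name = "example-job"        # placeholder
  JobDefs = "DefaultJob"      # placeholder
  Spool Attributes = yes      # queue attribute inserts to a spool file and batch
                              # them into the catalog, instead of one insert per
                              # file while the job runs
}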
On 1/24/25 10:36, Roberto Greiner wrote:
Hi,
I'm having a performance issue with my installation.
When I make a backup, the backup runs well below the expected
speed. A restore, on the other hand, works fine.
In one example, I made a (full) backup of one server. The full backup of
137.7GB took 4h 25m.
On 24/01/2025 12:48, Bill Arlofski via Bacula-users wrote:
On 1/24/25 8:36 AM, Roberto Greiner wrote:
Hi,
I'm having a performance issue with my installation.
When I make a backup, the backup runs well below the expected
speed. A restore, on the other hand, works fine.
In one example, I made a (full) backup of one server. The full backup of
137.7GB took 4h 25m.
On 1/24/25 8:36 AM, Roberto Greiner wrote:
Hi,
I'm having a performance issue with my installation.
When I make a backup, the backup runs well below the expected
speed. A restore, on the other hand, works fine.
In one example, I made a (full) backup of one server. The full backup of
137.7GB took 4h 25m.
Hi,
I'm having a performance issue with my installation.
When I make a backup, the backup runs well below the expected
speed. A restore, on the other hand, works fine.
In one example, I made a (full) backup of one server. The full backup of
137.7GB took 4h 25m.
The restore of the
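For scale, that works out to: 137.7 GB ≈ 137,700 MB over 4h 25m = 15,900 s,
i.e. roughly 8.7 MB/s sustained.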
Hello, B. Smith!
In that message it was said...
> I have a ZFS pool as a dedicated Bacula spool.
To be clear: is the ZFS pool only used for the bacula spool? Or is 'spool'
meant loosely, e.g. does it contain the data 'spooled' from other servers that
has to be put on LTO?
I'm also fighting with this, bec
This is all running on TrueNAS, so BSD. The HBA is an LSI 9220-8i. The NIC
is 10Gb, but not relevant here because the data and the tape library are
all on the same system.
The disks are SATA. 32GB RAM, but I don't see the system running out of RAM
while spooling/despooling.
The JBOD enclosures are
On 17/09/2024 08:58, B. Smith wrote:
Good evening,
I have a ZFS pool as a dedicated Bacula spool. The pool contains six 4TB
drives, configured as three mirrors of two striped disks. My tape drive
is LTO8. All the data is local to the server. When I despool without
simultaneously spooling another job, my despool rate is about 280 MB/s
Apologies, I misstated the configuration. I do in fact have striped mirrors.
On Mon, Sep 16, 2024, 8:59 PM Phil Stracchino wrote:
> On 9/16/24 18:58, B. Smith wrote:
> > Good evening,
> >
> > I have a ZFS pool as a dedicated Bacula spool. The pool contains six 4TB
> > drives, configured as three mirrors of two striped disks.
On 9/16/24 18:58, B. Smith wrote:
Good evening,
I have a ZFS pool as a dedicated Bacula spool. The pool contains six 4TB
drives, configured as three mirrors of two striped disks.
OK, just one observation: That's generally considered the wrong way to
do it. The normally preferred arrangement
Good evening,
I have a ZFS pool as a dedicated Bacula spool. The pool contains six 4TB
drives, configured as three mirrors of two striped disks. My tape drive is
LTO8. All the data is local to the server. When I despool without
simultaneously spooling another job, my despool rate is about 280 MB/s
Thanks Joe for this info. It looks like it is a client issue, as described
in the document (many small files; operations like stat() and
fstat() consume 100% CPU on the client).
I think that implementing an autochanger solves my problems (multiple
clients will write at the same time and utilize b
Ziga,
It is sad to hear you're having issues with Bacula. Some of your concerns
have been here since 2005. The only thing you can do to speed things up
is to spool the whole job to very fast disk (SSD), break up your large
job (number of files), make sure your database is on very fast disk (SSD)
and ha
Hi,
I have done some testing:
a) testing storage with dd command (eg: dd if=/dev/zero
of=/storage/test1.img bs=1G count=1 oflag=dsync). The results are:
-writing to IBM storage (with cloud enabled) shows 300 MB/sec
-writing to local SSD storage shows 600 MB/sec.
I guess storage is not a bottleneck.
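One caveat on that test: a single 1G dsync write mostly measures one
synchronous flush. A longer run with direct I/O is usually closer to sustained
spool/despool behavior; a sketch, assuming GNU dd and a placeholder path:

# write 4 GiB bypassing the page cache, then force a final flush
dd if=/dev/zero of=/storage/test2.img bs=1M count=4096 oflag=direct conv=fdatasync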
On 10/6/20 3:45 AM, Žiga Žvan wrote:
I believe that I have my spooling attributes set correctly on jobdefs
(see below). Spool Attributes = yes; Spool Data defaults to no. Any
other idea for performance problems?
Regards,
Ziga
The client version is very old. First try updating the client to
LatAm
mobile1: + 1 909 655-8971
mobile2: + 55 61 98268-4220
América Latina
[ http://bacula.lat/]
Original Message
From: Žiga Žvan
Sent: Tuesday, October 6, 2020 03:11 AM
To: bacula-users@lists.sourceforge.net
Subject: [Bacula-users] performance&design&configuration challen
I believe that I have my spooling attributes set correctly on jobdefs
(see below). Spool Attributes = yes; Spool Data defaults to no. Any
other idea for performance problems?
Regards,
Ziga
JobDefs {
Name = "bazar2-job"
Type = Backup
Level = Incremental
Client = bazar2.kranj.cetrtapot.
Hi,
I'm having some performance challenges. I would appreciate an educated
guess from an experienced bacula user.
I'm replacing old backup software that writes to a tape drive with bacula
writing to disk. The results are:
a) windows file server backup from a deduplicated drive (1.700.000
files, 90
On 10/5/20 9:20 AM, Žiga Žvan wrote:
Hi,
I'm having some performance challenges. I would appreciate an
educated guess from an experienced bacula user.
I'm replacing old backup software that writes to a tape drive with bacula
writing to disk. The results are:
a) windows file server backup from a d
Hi,
I'm having some performance challenges. I would appreciate an educated
guess from an experienced bacula user.
I'm replacing old backup software that writes to a tape drive with bacula
writing to disk. The results are:
a) windows file server backup from a deduplicated drive (1.700.000
files, 900
My one comment is that if you really want to keep this data forever,
then you should *really* be making multiple copies to tape, and then
also re-reading them and comparing them against the master data.
I also think that the biggest time sink will be the finding and
building of the daily tar files
On 6/27/2015 1:37 AM, Andrew Noonan wrote:
> On Fri, Jun 26, 2015 at 2:17 PM, Ana Emília M. Arruda
> wrote:
>> Are you going to generate a .tar of about 250TB every day? Which will
>> be the nature of your restores? You´re going to need always the
>> restore of the whole data set or occasional
On Fri, Jun 26, 2015 at 2:17 PM, Ana Emília M. Arruda
wrote:
> Hello Andrew,
>
> On Fri, Jun 19, 2015 at 5:10 PM, Andrew Noonan wrote:
>>
>> Hi all,
>>
>> After wrestling with a Dell TL4000 in the thread marked "Dell
>> TL4000 labeling timeout", it looks like the autochanger is going to be
>
Hello Andrew,
On Fri, Jun 19, 2015 at 5:10 PM, Andrew Noonan wrote:
> Hi all,
>
> After wrestling with a Dell TL4000 in the thread marked "Dell
> TL4000 labeling timeout", it looks like the autochanger is going to be
> fine thanks to the efforts of several people, especially Ana, on this
>
Hi all,
After wrestling with a Dell TL4000 in the thread marked "Dell
TL4000 labeling timeout", it looks like the autochanger is going to be
fine thanks to the efforts of several people, especially Ana, on this
list.
Moving forward, I'm about to start running jobs to at first
backfill a
Quoting Rodrigo Abrantes Antunes:
> Hi, when restoring, listing files, backing up, purging or pruning, the mysql
> process uses 100% CPU and the machine is unusable, and such operations last
> too long. Doing some research I found that this can be related to database
> indexes, but I didn't understa
Hi, when restoring, listing files, backing up, purging or pruning, the mysql
process uses 100% CPU and the machine is unusable, and such operations last
too long. Doing some research I found that this can be related to database
indexes, but I didn't understand well what I need to do. Here is the output
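The usual first check for this symptom is whether the big catalog tables still
carry the indexes the restore/prune queries need. A sketch, assuming the stock
MySQL catalog schema of that era (database name "bacula" is a placeholder):

# list existing indexes on the File table
mysql bacula -e "SHOW INDEX FROM File;"
# the composite index commonly recommended for restore and prune queries
mysql bacula -e "CREATE INDEX JobPathFile ON File (JobId, PathId, FilenameId);"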
Disregard. It's flying along now at the expected speeds. I blame sun
spots.
James
> -Original Message-
> From: James Harper [mailto:james.har...@bendigoit.com.au]
> Sent: Monday, 17 October 2011 4:14 PM
> To: bacula-users@lists.sourceforge.net
> Subject: [Bacula-use
I'm revisiting a remote backup, and am troubled by the fact that Bacula
appears to be making use of only a fraction of the available
bandwidth.
iperf tells me there is around 750KBits/second of usable TCP bandwidth
in the fd->sd direction, but Bacula only reports a Bytes/sec rate of
30Kbytes/sec
On Mon, 2011-10-03 at 10:34 +0200, mayak-cq wrote:
>
> zurich and copenhagen are 22.589 ms apart on a shared 100mbit
> connection -- using the bandwidth delay product:
>
> theoretical bandwidth   delay      productBits   bitsPerByte   bytesInWindow
> 500 000 000            * .022589  = 11 294 500    8             1 411 812
On Sun, 2011-10-02 at 11:45 +0200, Radosław Korzeniewski wrote:
> Hello,
>
> 2011/9/30 reaper
>
> sounds like your sysctl.conf needs tweaking? have you tried
> something like this on both sides?
>
> kernel.msgmnb = 65536
> kernel.msgmax = 65536
>
Hello,
2011/9/30 reaper
> sounds like your sysctl.conf needs tweaking? have you tried something like
> this on both sides?
>
> kernel.msgmnb = 65536
> kernel.msgmax = 65536
> kernel.shmmax = 68719476736
> kernel.shmall = 4294967296
>
>
These are the IPC (inter process communication) kernel parameters
On Sat, 2011-10-01 at 07:57 -0700, reaper wrote:
> i'm going through a similar issue -- zurich, barca, copenhagen ...
>
> mayak-cq, can you test bacula performance under your conditions? With and
> without ssh (or something similar) tunnel.
hi reaper,
sure -- i can do that tomorrow.
are you
i'm going through a similar issue -- zurich, barca, copenhagen ...
mayak-cq, can you test bacula performance under your conditions? With and
without ssh (or something similar) tunnel.
On Fri, 2011-09-30 at 02:18 -0700, reaper wrote:
> if i understand you correctly, bacula is only using a 128k -- way too
> small? curious -- have you played with "Maximum Network Buffer Size" ?
> does this help?
>
> Yes, that's correct, bacula can only scale the window to 128k, which is why
> throughput
if i understand you correctly, bacula is only using a 128k -- way too small?
curious -- have you played with "Maximum Network Buffer Size" ? does this help?
Yes, that's correct, bacula can only scale the window to 128k, which is why
throughput gets limited to 10Mbit/s. With an ssh tunnel between client an
On Thu, 2011-09-29 at 23:43 -0700, reaper wrote:
> i have noticed that scp is not a good measure of throughput -- i do not know
> why. i use an openvpn tunnel between sites and lose about 20% of throughput
> due to the tunnel. check window size on distant machine (using wireshark) to
> verify that some upstream device is not changing it.
i have noticed that scp is not a good measure of throughput -- i do not know
why. i use an openvpn tunnel between sites and lose about 20% of throughput
due to the tunnel. check the window size on the distant machine (using
wireshark) to verify that some upstream device is not changing it.
No, no, no.
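If wireshark is overkill, the negotiated window scale can also be read straight
off the SYN packets with tcpdump; a sketch, with a placeholder interface name:

# print TCP SYNs with their options; look for 'wscale N' in the output
tcpdump -nnvi eth0 'tcp[tcpflags] & tcp-syn != 0'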
On Thu, 2011-09-29 at 21:12 -0700, reaper wrote:
> sounds like your sysctl.conf needs tweaking? have you tried something like
> this on both sides?
>
> kernel.msgmnb = 65536
> kernel.msgmax = 65536
> kernel.shmmax = 68719476736
> kernel.shmall = 4294967296
>
> # long fat pipes
> net
sounds like your sysctl.conf needs tweaking? have you tried something like this
on both sides?
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
# long fat pipes
net.core.wmem_max = 8388608
net.core.rmem_max = 8388608
Yes, I tried t
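One gap in the list above: on Linux, net.core.rmem_max/wmem_max only raise the
ceiling; the TCP autotuning triples must also allow a large window. A sketch
with illustrative (not link-specific) values:

# /etc/sysctl.d/99-longfatpipe.conf -- illustrative values
net.core.rmem_max = 8388608
net.core.wmem_max = 8388608
# min / default / max buffer sizes used by TCP autotuning
net.ipv4.tcp_rmem = 4096 87380 8388608
net.ipv4.tcp_wmem = 4096 65536 8388608

# apply without a reboot
sysctl -p /etc/sysctl.d/99-longfatpipe.conf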
On Thu, 2011-09-29 at 04:20 -0700, reaper wrote:
> Hello.
>
> I saw this post recently
> http://www.mail-archive.com/bacula-users@lists.sourceforge.net/msg47393.html
> and it seems I'm affected by this problem too. Bacula shows extremely low
> performance in networks with high rtt. I have sd in Germany and client in USA.
Hello.
I saw this post recently
http://www.mail-archive.com/bacula-users@lists.sourceforge.net/msg47393.html
and it seems I'm affected by this problem too. Bacula shows extremely low
performance in networks with high rtt. I have sd in Germany and client in USA.
Bacula can make backups on speed
Hi Kamil,
Two days ago I ran into the same problem as you.
Open the client config file on Windows and put "Maximum Network Buffer Size =
65536" in the FileDaemon resource :)
It will resolve the problem
Mariusz.
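In context, that directive goes into bacula-fd.conf; a minimal sketch (daemon
name is a placeholder):

FileDaemon {
  Name = client-fd                      # placeholder
  Maximum Network Buffer Size = 65536   # the storage daemon has the same
                                        # directive; keep the two sides consistent
}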
Hello
After enabling TLS, I've noticed a significant performance drop.
I've made some tests for both Linux (Fedora 13) and Windows XP clients. I've
used a 250MB tar archive. One file. No compression.
BACKUP:
Windows TLS 850 kB/s
Windows NO_TLS 8500 kB/s
Linux T
On Fri, Jul 08, 2011 at 08:30:17AM +0200, Adrian Reyer wrote:
> Speed improved many many times. My incremental backup finished after
> just 10 minutes while it took 2h earlier.
This had been the benefit of using InnoDB over MyISAM. However, at 12GB
RAM and 8900 File entries (12GB file on disk)
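Since the win here came from InnoDB, note that InnoDB only performs when its
buffer pool is sized for the host; a my.cnf sketch with illustrative values for
a machine like the 12GB one above:

[mysqld]
# give InnoDB a large share of RAM on a host dedicated to the catalog (illustrative)
innodb_buffer_pool_size = 8G
# trade a little crash durability for faster bulk attribute inserts
innodb_flush_log_at_trx_commit = 2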
To: bacula-users@lists.sourceforge.net
Subject: [Bacula-users] performance problem
I currently have 3 clients doing a full backup (simultaneously). According
to "status client" one is getting 300kb/s (this one is my director and
storage server machine), one is getting 225kb/s, and one is getting 50kb/s.
I currently have 3 clients doing a full backup (simultaneously). According
to "status client" one is getting 300kb/s (this one is my director and
storage server machine), one is getting 225kb/s, and one is getting 50kb/s.
I've disabled AV on-access scanning for the bacula-fd.exe process. I have
sof
On Tue, 26 Jul 2011 06:18:25 -0700
Steve Ellis wrote:
> >> Another point, even with your current config, if you
> >> aren't doing data spooling you are probably slowing things down
> >> further, as well as wearing out both the tapes and heads on the
> >> drive with lots of shoeshining.
> > (I'm a
On 7/26/2011 5:04 AM, Konstantin Khomoutov wrote:
> On Tue, 26 Jul 2011 00:18:05 -0700
> Steve Ellis wrote:
>
> [...]
>> Another point, even with your current config, if you
>> aren't doing data spooling you are probably slowing things down
> >> further, as well as wearing out both the tapes and heads on the
> >> drive with lots of shoeshining.
I disabled the Compression and my speed rate boosted from 8.2 MB/s to 40.8
MB/s but I'm still using Bacula encryption.
On Mon, Jul 25, 2011 at 10:14 PM, James Harper
<james.har...@bendigoit.com.au> wrote:
> > 2011/7/25 Rickifer Barros :
> > > Hello Guys...
> > >
> > > This weekend I did a backup
On Tue, 26 Jul 2011 00:18:05 -0700
Steve Ellis wrote:
[...]
> Another point, even with your current config, if you
> aren't doing data spooling you are probably slowing things down
> further, as well as wearing out both the tapes and heads on the drive
> with lots of shoeshining.
(I'm asking as
>> I was under the impression that _all_ LTO4 drives implemented encryption
>> (though if having the data traversing the LAN encrypted is your goal,
>> you'd still have to do something). I don't know enough about it to know
>> how good the encryption in LTO4 is, however (or for that matter, how th
> >> Disable software compression. The tape drive will compress much faster
> >> than the client.
> >>
> > If you can find compressible patterns in the encrypted data stream then
> > you are not properly encrypting it. The only option would be to compress
> > before encryption which means you can't
On 7/25/2011 6:14 PM, James Harper wrote:
>> 2011/7/25 Rickifer Barros:
>>> Hello Guys...
>>>
>>> This weekend I did a backup with a size of 41.92 GB that took 1 hour and 24
>>> minutes with a rate of 8.27 MB/s.
>>>
>>> My Bacula Server is installed in an IBM server connected to a Tape Drive LTO
> 2011/7/25 Rickifer Barros :
> > Hello Guys...
> >
> > This weekend I did a backup with a size of 41.92 GB that took 1 hour and 24
> > minutes with a rate of 8.27 MB/s.
> >
> > My Bacula Server is installed in an IBM server connected to a Tape Drive LTO4
> > (120 MB/s) via a SAS connection (3 Gb/s).
On 07/25/11 11:13, John Drescher wrote:
> On Mon, Jul 25, 2011 at 11:06 AM, Rickifer Barros
> wrote:
>> I did this before, but I didn't know how to check in Debian if it really is
>> being compressed by the tape drive. The only thing that I got was the bacula
>> information about the SD and FD Wri
OK John...I'll test it.
On Mon, Jul 25, 2011 at 12:13 PM, John Drescher wrote:
> On Mon, Jul 25, 2011 at 11:06 AM, Rickifer Barros
> wrote:
> > I did this before, but I didn't know how to check in Debian if it really
> > is being compressed by the tape drive. The only thing that I got was the
On Mon, Jul 25, 2011 at 11:06 AM, Rickifer Barros
wrote:
> I did this before, but I didn't know how to check in Debian if it really is
> being compressed by the tape drive. The only thing that I got was the bacula
> information about the SD and FD Written, and the "mt" command in Linux doesn't
> tell
I did this before, but I didn't know how to check in Debian if it really is
being compressed by the tape drive. The only thing that I got was the bacula
information about the SD and FD Written, and the "mt" command in Linux doesn't
tell me the real data size of the volume, so I chose to trust the so
2011/7/25 Rickifer Barros :
> Hello Guys...
>
> This weekend I did a backup with a size of 41.92 GB that took 1 hour and 24
> minutes with a rate of 8.27 MB/s.
>
> My Bacula Server is installed in an IBM server connected to a Tape Drive LTO4
> (120 MB/s) via a SAS connection (3 Gb/s).
>
> I'm using Encryption and Compression Gzip6.
I forgot to say that the files I backed up are local to the Bacula Server.
On Mon, Jul 25, 2011 at 11:48 AM, Rickifer Barros
wrote:
> Hello Guys...
>
> This weekend I did a backup with a size of 41.92 GB that took 1 hour and 24
> minutes with a rate of 8.27 MB/s.
>
> My Bacula Server is install
Hello Guys...
This weekend I did a backup with a size of 41.92 GB that took 1 hour and 24
minutes with a rate of 8.27 MB/s.
My Bacula Server is installed in an IBM server connected to a Tape Drive LTO4
(120 MB/s) via a SAS connection (3 Gb/s).
I'm using Encryption and Compression Gzip6.
I think th
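The advice that followed in this thread was to drop software compression. A
minimal FileSet sketch of what that looks like (names and paths are
placeholders):

FileSet {
  Name = "example-fs"
  Include {
    Options {
      signature = MD5
      # compression = GZIP6   # dropping this raised throughput from 8.27 to
                              # 40.8 MB/s here; note the drive's hardware
                              # compression can't help once the stream is encrypted
    }
    File = /data
  }
}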
On Wed, Jul 06, 2011 at 11:08:44AM -0400, Phil Stracchino wrote:
> for table in $(mysql -N --batch -e "select
> concat(table_schema,'.',table_name) from information_schema.tables where
> engine='MyISAM' and table_schema not in
> ('information_schema','mysql')"); do mysql -N --batch -e "alter table
> $table engine=InnoDB"; done
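And to confirm the conversion took (schema name "bacula" is an assumption):

mysql -N --batch -e "select table_name, engine from information_schema.tables where table_schema='bacula'"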
From: Eric Bollengier [mailto:eric.bolleng...@baculasystems.com]
Sent: Wednesday, July 6, 2011 11:20 AM
To: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] Performance options for single large (100TB) server
backup?
Hello,
On 07/06/2011 04:20 PM, Florian Heigl wrote:
> Saving multiple streams is something that has been proven as a solution for many years
Hello,
On 07/06/2011 04:20 PM, Florian Heigl wrote:
> Saving multiple streams is something that has been proven as a
> solution for many years, and where that is still too slow NDMP comes
> into play. (in case of ZFS, NDMP is still at an unusable stage)
>
> 100TB is a lot, but I wonder if everyone
On 07/06/11 10:41, Adrian Reyer wrote:
> On Wed, Jul 06, 2011 at 10:09:56AM -0400, Phil Stracchino wrote:
>> should I use for my tables?" is MyISAM.[1] At this point, wherever
>> possible, EVERYONE should be using InnoDB.
>
> I will, if the current backup ever finishes. For a start on MySQL 5.1
>
ving it
to a closed source one if that was possible (it's not like I'm a large company
here at all).
-Original Message-
From: Florian Heigl [mailto:florian.he...@gmail.com]
Sent: Wednesday, July 6, 2011 09:20 AM
To: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users]
On Wed, Jul 06, 2011 at 10:09:56AM -0400, Phil Stracchino wrote:
> should I use for my tables?" is MyISAM.[1] At this point, wherever
> possible, EVERYONE should be using InnoDB.
I will, if the current backup ever finishes. For a start on MySQL 5.1
though (Debian squeeze). I am aware InnoDB has a
Hi,
Breaking the server into multiple file daemons sounds as broken as
the stuff amanda users had to do (break your filesystem into
something that fits a tape).
Saving multiple streams is something that has been proven as a
solution for many years, and where that is still too slow NDMP comes into play
On 07/06/11 08:04, Adrian Reyer wrote:
> Hi,
>
> I am using bacula for a bit more than a month now and the database gets
> slower and slower both for selecting stuff and for running backups as
> such.
> I am using a MySQL database, still MyISAM tables, and I am considering
> switching to InnoDB tables or postgresql.
Hi,
I am using bacula for a bit more than a month now and the database gets
slower and slower both for selecting stuff and for running backups as
such.
I am using a MySQL database, still MyISAM tables, and I am considering
switching to InnoDB tables or postgresql.
Amongst normal fileserver data the
but that alone
probably will not solve your problem. I think you're going to have to do a
lot of different configurations and test which ones work best for your
design parameters (i.e. questions like "How long can I go w/o a full
backup" and "How long can I stand a complete disaster recovery restore taking").
Am 28.06.2011 18:40, schrieb Steve Costaras:
>
>
> How would the various parts communicate if you're running multiple
> instances on different ports? I would think just creating multiple
> jobs would create multiple socket streams and do the same thing.
I should have gotten another coffee
I think you're going to have to do a
lot of different configurations and test which ones work best for your
design parameters (i.e. questions like "How long can I go w/o a full
backup" and "How long can I stand a complete disaster recovery restore
taking").
From: "Ste
nd "How long can I stand a complete disaster recovery restore
taking").
> From: "Steve Costaras"
> Subject: [Bacula-users] Performance options for single large (100TB)
> server backup?
> To: bacula-users@lists.sourceforge.net
Hi
Out of curiosity, why do you do such "forklift replacements" when ZFS
supports replacing individual drives, letting the pool resilver and then
automatically grow to the new size?
roy
- Original Message -
> I have been using Bacula for over a year now and it has been providing
> 'passable' service
Problem is not really just tape I/O speeds but the ability to get data
to it. I.e. the SD is running at about 50% cpu overhead right now
(single core) so it could possible handle (2) LTO4 drives assuming a new
SD is not spawned off per drive?
I don't really need 'rait' itself as that wou
How would the various parts communicate if you're running multiple
instances on different ports? I would think just creating multiple
jobs would create multiple socket streams and do the same thing.
On 2011-06-28 02:09, Christian Manal wrote:
- File daemon is single threaded so
On 6/27/2011 8:43 PM, Steve Costaras wrote:
>
>
>
>
> - How to stream a single job to multiple tape drives. Couldn't
> figure this out, so only one tape drive is being used.
>
There are hardware RAIT controllers available from Ultera
(http://www.ultera.com/tapesolutions.htm). A R
> - File daemon is single threaded so it is limiting backup performance. Is there
> a way to start more than one stream at the same time for a single machine
> backup? Right now I have all the file systems for a single client in the same
> file set.
>
> - Tied in with above, accurate backups
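There is no way to multi-thread a single FD job, but the usual workaround is to
split the fileset across several jobs and run them concurrently; a minimal
sketch (all names and paths are placeholders):

Job {
  Name = "bigserver-part1"
  Client = bigserver-fd
  FileSet = "fs-part1"       # e.g. half of the filesystems
}
Job {
  Name = "bigserver-part2"
  Client = bigserver-fd
  FileSet = "fs-part2"       # the other half
}
# The Director, Storage, and Client resources all need
# Maximum Concurrent Jobs > 1 for these to actually overlap.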
I have been using Bacula for over a year now and it has been providing
'passable' service though I think since day one I have been streching it to
it's limits or need a paradigm shift in how I am configuring it.
Basically, I have a single server which has direct atached disk (~128TB / 112
dri
Hi,
> The issue, I imagine with transfer rates is between the bacula-fd and
> bacula-sd.
Correct
> Do we presume the -sd is in Sydney? You don't say what speed
> the Sydney ADSL2+ link is (though apparently it can manage at least
> 2.2MByte/sec).
24mbit down, 1mbit up
> Is tha
Hi,
On Mon, 11 Apr 2011, Peter Hoskin wrote:
> I'm using bacula to do backups of some remote servers, over the Internet
> encapsulated in OpenVPN (just to make sure things are encrypted and kept off
> public address space).
>
> The bacula-fd is in Montreal Canada with 100mbit Ethernet. I also have
> another bacula-fd in Canberra Australia on
Hi,
I'm using bacula to do backups of some remote servers, over the Internet
encapsulated in OpenVPN (just to make sure things are encrypted and kept off
public address space).
The bacula-fd is in Montreal Canada with 100mbit Ethernet. I also have
another bacula-fd in Canberra Australia on
Hi
I am using Bacula inside a CentOS OpenVZ container on a
virtualization cluster. I tried with and without compression and/or
encryption and the rates vary widely.
I am backing up a KVM client on the same nodes of the cluster and an
external mac client.
I can have rates of more than 30 or
On 23/09/10 15:26, Andrés Yacopino wrote:
> I think I am getting worse performance because of random disk access
> speed, is that true?
>
Yes. If you use the time command on your tar process you will find it is
similarly slow.
Actually it's not so much random disk access speed as the fixed tim
> I need to improve performance of a Job which backs up 150 files (mail
> and File Server).
> I was compressing the files on disk into some tgz files first (tar and
> gzip), then backing them up on tape with Bacula; I was getting about:
>
> Job write elapsed time = 00:32:16, Transfer rate = 44.93 M
I need to improve performance of a Job which backs up 150 files (mail
and File Server).
I was compressing the files on disk into some tgz files first (tar and
gzip), then backing them up on tape with Bacula; I was getting about:
Job write elapsed time = 00:32:16, Transfer rate = 44.93 MBytes/second
Hi,
I'm talking about despooling speed (not overall). But it's the same
speed without spooling, though it's clear that over the network I can reach
max. 90MB/s.
I use 32 bit because of the OpenVZ template, but just a few minutes ago I
created the same machine as 64bit, and the speed is a little bit
Hi,
I need some tips for a backup server.
On a new backup server I reach a backup speed to an LTO-4 drive of
only 70MB/s.
Here is the config:
bacula 5.0.3, self-compiled on debian squeeze 32bit OpenVZ VM (Proxmox as
virtualisation platform).
neo200s jukebox (1 LTO-4 SAS drive) connected via an LSI SA
Hi,
On a bacula installation that I originally set up on a machine running
Debian "etch", everything was running smoothly.
Since etch has been EOL'ed, it was necessary to upgrade it, and we
recently did an upgrade to lenny. This also involved an upgrade of
bacula from 1.38 (the version in etch
Quoting James Harper:
> Does MySQL have a 'profiler' tool like MSSQL does? I spend most of my
> time in MSSQL and if some operation is running slow I just attach the
> profiler to it and capture the queries and focus on the ones that are
> taking most of the time.
>
> James
What is the impact
On Saturday 20 June 2009 08:51:53 Tom Sommer wrote:
> Tom Sommer wrote:
> > Mike Holden wrote:
> >> Jari Fredriksson wrote:
> INSERT INTO Filename (Name)
> SELECT a.Name
> FROM (
>     SELECT DISTINCT Name
>     FROM batch
> ) AS a
> WHERE NOT EXISTS (
>     SELECT Name FROM Filename WHERE Name = a.Name
> )