Hello,

2013/10/7 bdelagree:
> Hello everyone!
>
> After applying the correct settings and restarting the right services, here
> are the results ... :P
> They are catastrophic!

I have not followed this thread from the beginning, so I could be wrong about
some of these tips.
You have 11M files in a single backup job. If your job name is not misleading,
all of your files are located on an NFS share. Right?
If yes, this is your main bottleneck. NFS is not the best protocol for this...
Hello everyone!

After applying the correct settings and restarting the right services, here are
the results ... :P
They are catastrophic!
My Full this weekend took 8 hours more!
I think the problems come from my small spools: 24GB per drive and 3GB per job
(I have 8 jobs).
Maybe I miscalculated my spool...
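For reference, the spool sizes above map onto the Storage Daemon's Device
resource; a minimal sketch of how those limits are usually expressed in
bacula-sd.conf (the device name and archive device are assumptions, only the
24GB/3GB figures come from the post):

  # bacula-sd.conf -- one Device resource per LTO5 drive (sketch)
  Device {
    Name = Drive-0                          # hypothetical name
    Media Type = LTO5
    Archive Device = /dev/nst0              # assumed device node
    Spool Directory = /var/lib/spool/drive0
    Maximum Spool Size = 24G                # total spool shared by this drive
    Maximum Job Spool Size = 3G             # 8 concurrent jobs x 3G = 24G
  }

Since LTO5 streams at roughly 140 MB/s native, a 3GB per-job spool despools in
well under a minute, so small spools force constant spool/despool cycling,
which would be consistent with the Full getting slower rather than faster.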
Hi everyone!

The DataSpooling has not changed my backup. (See the end of this post.)
1 day and 14 hours for 390GB :(

On the other hand, I just noticed that on Friday I restarted only the
StorageDaemon; should I also have restarted the Director and FileDaemon?

Do you think that enabling compression could improve the backup when the...

Just for your information, here are the modifications:
For the NFS server I created two jobs, one for the system and another one for
the directory that contains the millions of files.
I created the directories /var/lib/spool/drive0 and /var/lib/spool/drive1.
I then did a chown -R bacula:bacula /var/lib/
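For completeness, spooling is switched on per Job and compression per FileSet;
a hedged sketch of the two directives in bacula-dir.conf (resource names and
the include path are hypothetical):

  # bacula-dir.conf (sketch)
  Job {
    Name = "nfs-files"
    Spool Data = yes            # stage to the disk spool, then despool to tape
    # ... Client, FileSet, Pool, Schedule as before ...
  }

  FileSet {
    Name = "nfs-files-set"
    Include {
      Options {
        signature = MD5
        compression = GZIP      # FD-side gzip; trades client CPU for bandwidth
      }
      File = /srv/nfs/data      # hypothetical path
    }
  }

Note that compression happens on the FileDaemon, so it only helps if the
network is the bottleneck; with millions of small files the per-file overhead
usually dominates instead.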
Hi everyone!

Sorry for my short absence, but I've been busy with another little problem:
I had to create a virtual machine under OS 9 for one of my users.
I had forgotten how basic the old system was!
:p

Finally, tonight is my monthly Full Backup.
I want to change my jobs and set up the DataSpooling...
Hello,

Thank you for the quick response.

My library is connected to a dedicated server that runs only services (PDC,
DHCP, DNS, LDAP, and Bacula).
This server is not designed to host files, so it has little space.
In addition, the MySQL database is already 35GB...
I can reasonably dedicate 50GB on this...
Hello,

This summer we invested in a PowerVault TL2000 library with two LTO5 drives to
back up our various servers.

Today two of my servers take a long time to back up because they contain many
small files with little volume (see the bottom of this post).
All my other servers back up quickly (20,000 KB/s to 30,000 KB/s)...
I started two backups maybe 12 hours ago. Normally full backups run 1-2
hours max, but this time...
From the database I see no locks, but they keep inserting into the batch table.
I have 12 gigabytes of RAM, and have given a couple of gigs to MySQL too. The
database should not be the bottleneck.
How can it be so slow? Two...
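If the catalog is running on a stock distro my.cnf, the defaults are sized for
a small fraction of a 12GB machine. A sketch of the settings people usually
raise for Bacula's batch inserts (values are illustrative, not tested
recommendations):

  # /etc/my.cnf (sketch, assuming an InnoDB catalog)
  [mysqld]
  innodb_buffer_pool_size = 4G        # cache a useful slice of File/Path tables
  innodb_flush_log_at_trx_commit = 2  # relax per-transaction fsync for bulk loads
  max_allowed_packet = 64M            # headroom for large batch statements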
OK, the error was the blocksize :D Sorry.

Regards,
Tobias

# Stegbauer Datawork
# Tobias Dinse
# Oberjulbachring 9, 84387 Julbach
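For anyone finding this later: a mismatched or too-small block size is a
classic cause of slow LTO writes. A hedged sketch of the Device directives
usually involved (the 256K value is a common choice, not taken from this
thread; volumes written with another block size must be relabeled):

  # bacula-sd.conf -- Device resource (sketch)
  Device {
    Name = "IBMLTO4-sd"
    Archive Device = /dev/nst0
    Media Type = LTO4
    Maximum Block Size = 262144   # 256K blocks instead of the 64K default
    Maximum File Size = 4G        # fewer filemarks, fewer drive stops
  }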
Hi,

Since I upgraded our backup server to Debian Squeeze and Bacula 5.0.2, the
jobs only write at ~5 MB/s.

status storage:

Device "IBMLTO4-sd" (/dev/nst0) is mounted with:
    Volume:      MITT01
    Pool:        MittwochPool
    Media type:  LTO4
Total Bytes=157,171,864,755
On Mon, 10 Jan 2011, Oliver Hoffmann wrote:

> I did some tests with different gzip levels and with no compression at
> all. It makes a difference, but not as expected. Without compression I
> still have a rate of only 11346.1 KB/s. Anything else I should try?

Are you sure the cross-over connection...
I did some tests with different gzip levels and with no compression at
all. It makes a difference, but not as expected. Without compression I
still have a rate of only 11346.1 KB/s. Anything else I should try?

Cheers,
Oliver
Hi all,

I do full backups at the weekend and it just takes too long, 12 hours or so.
Bacula does one job after the other, and I have a maximum transfer rate of
11 to 12 MBytes/second due to the 100Mbit connection.

For testing purposes I connected one client via crosslink (1Gbit on
both sides) to the server...
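An 11-12 MB/s ceiling is exactly a saturated 100Mbit link, so the first thing
worth checking is whether the crosslink is actually carrying the traffic. A
sketch with iperf (the hostname is a placeholder):

  # on the backup server
  iperf -s

  # on the client, using the crosslink interface's address
  iperf -c backup-server -t 30   # a healthy GbE link shows ~900 Mbit/s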
Carlo Filippetto wrote:
> Hi,
> I would like to know if it is true that my throughput is as slow as this:
[...]
> FULL
> -
> Elapsed time: 1 day 22 hours 13 mins 37 secs
[...]
> Rate: 371.7 KB/s
> Software Compression: 15.5 %
[...]
> All my jobs have the maxi...
Hi Carlo,

For any modern hardware your rates sound low.
Below is an example of what I get on my home system (Core2 Duo, 8GB memory,
CentOS 5.4 Linux 64-bit), writing to an external USB disk, with no compression.
Backing up a local disk, with the catalog database on the same physical disk
(not an ideal combination...)
Hi,
I would like to know if it is true that my throughput is as slow as this:

CATALOG
---
FD Bytes Written: 478,808,703 (478.8 MB)
SD Bytes Written: 478,809,069 (478.8 MB)
Rate: 402.0 KB/s
Software Compression: None

INCREMENTAL
--
SD Bytes Writt...
Also take a look at your Director's database settings
(PostgreSQL or MySQL).
If you are using the default distro settings, they are certainly too low.
Check the mailing list & wiki about this.
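On the PostgreSQL side, the usual suspects look like this; a sketch for a
2009-era server with a few GB of RAM (values illustrative only):

  # postgresql.conf (sketch)
  shared_buffers = 1GB       # distro defaults are often just a few MB
  work_mem = 64MB            # helps the catalog's large sorts and joins
  checkpoint_segments = 16   # fewer checkpoint stalls during batch inserts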
Hi,
I am using ext3, and yes, I also have small files.
Probably:
50% < 2M
40% < 10M
10% > 40M

On Thu, May 28, 2009 at 8:39 AM, Uwe Schuerkamp wrote:
> On Thu, May 28, 2009 at 08:27:06AM -0400, Il Neofita wrote:
> > First of all thank you for the answer [...]
First of all, thank you for the answer.
No, I do not use compression in my file set:

Options {
  signature = MD5
}

I tried to upload with sftp:

Uploading testfile to /tmp/terrierj
testfile    100%   83MB  41.4MB/s   00:02

There is only one problem:
I have th...
Hi,

There is 5GB of data and the average speed is 9MB/sec. That speed is slow.
Try to copy a big file from server to client (or vice versa) and watch the
speed of the copy with iptraf. I think the problem is not in bacula
but in the distro.

Daniele

On 28 May 2009, at 12:01, I...
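A concrete way to run that test, assuming ssh access between the two hosts
(hostname and sizes are placeholders):

  # make a 1GB test file, copy it, and watch the interface in iptraf meanwhile
  dd if=/dev/zero of=/tmp/testfile bs=1M count=1024
  scp -o Compression=no /tmp/testfile client:/tmp/

Zeros compress trivially, so disabling ssh compression (as above) keeps the
copy an honest measure of the raw link.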
I connected the backup server and the client with a crossover cable at 1G;
however:

Files=16,251 Bytes=5,504,385,701 Bytes/sec=9,690,819 Errors=0

What can I check?
I am using SAS disks.

With ethtool I have:
Speed: 1000Mb/s
so that is correct.
--
Hi there,

I've been having some problems attempting to increase the write speed to my
tape drive through Bacula.

If I use the operating system to communicate directly with the tape drive, I
get the appropriate read and write speeds, but using Bacula I get a third of
the speed. I have tried spooling...
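The OS-level baseline that "directly with the tape drive" usually means is a
raw dd write; a sketch, with the device node and block size as assumptions:

  mt -f /dev/nst0 rewind
  # write 1GB in 256K blocks and compare the MB/s against the Bacula job rate;
  # zeros compress inside the drive, so this is an upper bound, not a real rate
  dd if=/dev/zero of=/dev/nst0 bs=262144 count=4096

If dd at the SD's block size is fast but Bacula is three times slower, the gap
is usually spooling, the network, or the FD walking many small files.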
Hi,

Jonas Björklund has spoken, thus:

> Hello,
>
> I get very poor performance with compression on a client. It's a Sun Fire
> V490 with 4 CPUs at 1350MHz and 16GB memory.

I'm having similar problems with bacula here (but different hardware).

filed: Sun Blade 1500 (1 CPU, 1503MHz, 1GB memory)
di...
On Tue, 5 Dec 2006, Jonas Björklund wrote:

> I get very poor performance with compression on a client. It's a Sun Fire
> V490 with 4 CPUs at 1350MHz and 16GB memory.

Seems like the Sun server is slow. I got a little bit better performance
when I used GZIP1 instead of GZIP (GZIP6).
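The gzip level is chosen in the FileSet; a minimal sketch of the GZIP1 variant
mentioned above (the resource name and path are hypothetical):

  FileSet {
    Name = "client1-full"
    Include {
      Options {
        signature = MD5
        compression = GZIP1   # lowest CPU cost; plain GZIP means GZIP6
      }
      File = /export          # hypothetical path
    }
  }

The FD compresses the stream on a single CPU, so on a 1350MHz SPARC core even
GZIP1 can cap throughput well below what the V490's four CPUs would suggest.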
Hello,

I get very poor performance with compression on a client. It's a Sun Fire
V490 with 4 CPUs at 1350MHz and 16GB memory.

JobId:        11
Job:          client1.2006-12-04_16.34.10
Backup Level: Full
Client:       "sasma" sparc-sun-solaris2.1...
I've seen similar data on my backups, but generally only with very small
backup sizes (less than 1GB). When I back up over 1GB, the rates increase
dramatically, although backup from the Windows server is still only about
1/2 to 1/3 of the Linux server rate. Before you get too concerned, try a
big...
Hi. I have been working with bacula for some months; I love this software. My
current problem is this one:

My test server:
  Bacula server 1.38.11 on FreeBSD 6.1-p3
  MySQL 4.1.20
  Tape: HP StorageWorks 232 External, 200GB compressed
  HD: 200GB IDE, 7200 RPM
  AMD Duron 1.6 GHz, 512 RAM

Clients: 2 Win NT 4...
On 7/19/06, Gabriele Bulfon wrote:
> Do you mean that the whole 280R machine may be running at half-duplex?!

I'm not sure what interface you are using for the backups (probably an eriX),
but to get the link status and link capabilities from the Solaris side you can
e.g. use this...
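Presumably the poster meant something along these lines; a sketch of the
classic ndd query on Solaris, assuming an eri interface (run as root):

  ndd -get /dev/eri link_speed   # 1 = 100 Mbit, 0 = 10 Mbit
  ndd -get /dev/eri link_mode    # 1 = full duplex, 0 = half duplex

A half-duplex negotiation against a switch port forced to full duplex produces
exactly this kind of mysterious slow-transfer symptom.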
From: Kern Sibbald
To: bacula-users@lists.sourceforge.net
Cc: Gabriele Bulfon
Date: 19 July 2006 20.53.39 CEST
Subject: Re: [Bacula-users] Slow backup on Sparc SunFire 280R

One user had similar problems with his Sparc a...
From: Hristo Benev
Date: 18 July 2006 16.43.04 CEST
Subject: Re: [Bacula-users] Slow backup on Sparc SunFire 280R

On Tue, 2006-07-18 at 16:37 +0200, Gabriele Bulfon wrote:
> Do you have any suggestion about parameters I may use to optimize the
> daemons?

I'm not a developer :( ... unfo...
From: Hristo Benev
To: Gabriele Bulfon
Cc: MaxxAtWork; bacula-users@lists.sourceforge.net
Date: 18 July 2006 16.32.54 CEST
Subject: Re: [Bacula-users] Slow backup on Sparc SunFire 280R

My opinion is that you have a bottleneck somewhere (probably CPU...)
Date: 18 July 2006 15.43.15 CEST
Subject: Re: [Bacula-users] Slow backup on Sparc SunFire 280R

Just to exclude the network!
What is the transfer rate that you can achieve with those servers?

On Tue, 2006-07-18 at 15:37 +0200, Gabriele Bulfon wrote:
> Oh no. I do not use compression at all.
> And if I...
From: Hristo Benev
To: Gabriele Bulfon
Cc: MaxxAtWork; bacula-users@lists.sourceforge.net
Date: 18 July 2006 15.16.37 CEST
Subject: Re: [Bacula-users] Slow backup on Sparc SunFire 280R

Do you use compression? Because there is a difference in processing power:
a Sparc III is much less powerful than an Opteron.

On Tu...
On Mon, 17 Jul 2006, Gabriele Bulfon wrote:
> Thanks,
> this is very interesting.
> My LTO2 drives (I have many installed) are from Certance.

Mine are HP drives installed in an HP MSL6000 library (aka NEO4000).

> Do you achieve these rates on a SunFire 280R?

No, Wintel hardware (HP ProLiant DL580...
Hello,

I have some bacula installations on SunFire 280R Sparc machines, with Solaris 10.
These machines appear to be very, very slow compared with other installations
(such as the v20z) with the same LTO2 device.
As you can see from the report, 60Gb are copied in 9 hours, with an average
rate of 1898.9 KB/s...