On 06/17/2010 06:29 PM, ekke85 wrote:
>
> Hi
>
> This is more for the record than anything else. I managed to sort this out,
> by the looks of things.
> First off, I thought it must be something with the server, so I replaced the
> server with an IBM xSeries 346 which has 2 quad-core CPUs (Intel(R)
Hi
This is more for the record than anything else. I managed to sort this out,
by the looks of things.
First off, I thought it must be something with the server, so I replaced the
server with an IBM xSeries 346 which has 2 quad-core CPUs (Intel(R) Xeon(TM)
CPU 2.80GHz) and 8GB of RAM. Installed Bacula
Alan Brown wrote:
> On 14/06/10 20:39, richard wrote:
>> For info:
>>
>> I do a weekly full backup of 11.5TB (1.5M files) on a very similarly
>> specced machine. There is no data spooling (although attribute
>> spooling is used); backup is directly off a local RAID5.
>>
>> bacula 5.0.2 linux x86_64.
On 14/06/10 20:39, richard wrote:
> For info:
>
> I do a weekly full backup of 11.5TB (1.5M files) on a very similarly
> specced machine. There is no data spooling (although attribute
> spooling is used); backup is directly off a local RAID5.
>
> bacula 5.0.2 linux x86_64.
>
How much RAM/swap do you have?
On Mon, Jun 14, 2010 at 3:39 PM, richard wrote:
> For info:
>
> I do a weekly full backup of 11.5TB (1.5M files) on a very similarly
> specced machine. There is no data spooling (although attribute
> spooling is used); backup is directly off a local RAID5.
>
I think the only way to figure this out
For info:
I do a weekly full backup of 11.5TB (1.5M files) on a very similarly
specced machine. There is no data spooling (although attribute
spooling is used); backup is directly off a local RAID5.
bacula 5.0.2 linux x86_64.
Regards,
Richard
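For reference, the "attribute spooling but no data spooling" setup Richard
describes comes down to two Job directives. A minimal sketch with invented
resource names, not his actual config:

  Job {
    Name = "weekly-full"        # hypothetical
    Type = Backup
    Level = Full
    Client = fileserver-fd      # hypothetical
    FileSet = "raid5-data"      # hypothetical
    Storage = scalar-i500       # hypothetical
    Pool = Full-Pool            # hypothetical
    Messages = Standard
    Spool Data = no             # stream straight from the RAID5 to tape
    Spool Attributes = yes      # defer catalog inserts until the job ends
  }

With Spool Data = no the drive writes directly off the disk, while Spool
Attributes = yes still batches the catalog inserts at job end, which cuts
database traffic while the tape is moving.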
>
> Honestly, if you have that large of a backup set and the individual
> files can be in the terabyte range, you might want to reconsider whether
> spooling is the correct approach. Remember, the primary purposes of
> spooling are to allow a fast dump to disk of small backup sets followed
> by
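If data spooling is kept despite individual files in the terabyte range, the
spool area at least needs an explicit bound in the storage daemon's Device
resource. A sketch, with the path and size invented; as I understand it,
Bacula despools to tape each time the spool fills, so the spool does not have
to hold a whole 1.1TB file:

  Device {
    Name = LTO-Drive-0                    # hypothetical
    Media Type = LTO-4                    # hypothetical
    Archive Device = /dev/nst0            # hypothetical
    Spool Directory = /var/bacula/spool   # hypothetical path
    Maximum Spool Size = 300G             # invented figure; must fit the spool disk
  }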
On 06/14/10 13:23, Alan Brown wrote:
> On 14/06/10 17:06, ekke85 wrote:
>> I have 710 files in that directory/nfs share/fileset. There are only
>> one or two files that are over 1TB; I might try to back up one of those
>> files on its own and see what it does.
>>
>>> how many are there already in the database?
On 14/06/10 17:06, ekke85 wrote:
> I have 710 files in that directory/nfs share/fileset. There are only
> one or two files that are over 1TB; I might try to back up one of those
> files on its own and see what it does.
>
>> how many are there already in the database?
>>
>>
> I am no expert with the DB side, but I have 1
On Mon, Jun 14, 2010 at 12:06 PM, ekke85 wrote:
>
>
>>
>> How many?
>>
>
> I have 710 files in that directory/nfs share/fileset. There are only one or
> two files that are over 1TB; I might try to back up one of those files on its
> own and see what it does.
>
>>
>> how many are there already in the database?
>
> How many?
>
I have 710 files in that directory/nfs share/fileset. There are only one or two
files that are over 1TB; I might try to back up one of those files on its own
and see what it does.
>
> how many are there already in the database?
>
I am no expert with the DB side, but I have 1
ekke85 wrote:
> There are a lot of files that I need to back up.
How many?
how many are there already in the database?
> The other problem that I think might cause it is that some of the files are
> 1.1TB on their own.
Does anyone on the list know what Bacula tries to do if a file being
backed up
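On the "how many are already in the database" question: the counts can be read
straight out of the catalog. A sketch assuming the default MySQL catalog
database named bacula (the File table holds one row per file per backup):

  mysql bacula -e 'SELECT COUNT(*) FROM File;'   # total file records
  mysql bacula -e 'SHOW TABLE STATUS;'           # approximate rows/size per table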
>
> How many files are in the 11TB, and why can't you break it into smaller
> chunks?
>
> Depending on the size of your MySQL database, you could quite easily run
> out of RAM+swap with a large dataset.
>
> "top" is a useful tool - especially its "M" function.
>
There are a lot of files that I need to back up.
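To go with the "top"/"M" tip above, two quick checks when a box is suspected
of running out of RAM+swap; plain Linux commands, nothing Bacula-specific:

  free -m                         # RAM and swap usage, in MB
  ps aux --sort=-rss | head -15   # biggest processes by resident memory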
On Fri, 11 Jun 2010, ekke85 wrote:
> Thanks for the fast replies and sorry for my slow response. It seems my
> spooling was not working right, but I have it working now. I am not sure
> what uses the memory up, but here are my configs:
How many files are in the 11TB, and why can't you break it into smaller chunks?
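On breaking it into smaller chunks: the usual approach is one FileSet (and one
job) per top-level directory of the share, rather than a single 11TB job. A
sketch with invented paths:

  FileSet {
    Name = "nfs-part1"
    Include {
      Options {
        signature = MD5
      }
      File = /mnt/nfs/projects    # hypothetical subdirectory
    }
  }

  FileSet {
    Name = "nfs-part2"
    Include {
      Options {
        signature = MD5
      }
      File = /mnt/nfs/archive     # hypothetical subdirectory
    }
  }

Smaller jobs also mean each job's attribute batch, and any rerun after a
failure, covers far less data.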
> On Fri, 11 Jun 2010 04:54:53 -0400, ekke85 said:
>
> Thanks for the fast replies and sorry for my slow response. It seems my
> spooling was not working right, but I have it working now. I am not sure
> what uses the memory up, but here are my configs:
> ...
> I still have the box running out of memory
Hi
Thanks for the fast replies and sorry for my slow response. It seems my
spooling was not working right, but I have it working now. I am not sure what
uses the memory up, but here are my configs:
CPU = 2 x Dual Core Intel(R) Xeon(TM) CPU 2.40GHz
Memory = 4GB
Swap = 6GB
The server is connected
On 06/09/2010 05:29 PM, ekke85 wrote:
>
> Hi
>
> I have a Scalar i500 library with 5 drives and 135 slots. I have a Red Hat 5
> server with a 1Gb NIC. The setup works fine for backups on most systems. The
> problem I have is an NFS share with 11TB of data that I need to back up. Every
> time I run this job it will write about 600GB of data to
> I have a Scalar i500 library with 5 drives and 135 slots. I have a Red Hat 5
> server with a 1Gb NIC. The setup works fine for backups on most systems. The
> problem I have is an NFS share with 11TB of data that I need to back up. Every
> time I run this job it will write about 600GB of data to
On Wed, 09 Jun 2010 11:29:37 -0400, ekke85 wrote:
> Hi
>
> I have a Scalar i500 library with 5 drives and 135 slots. I have a Red
> Hat 5 server with a 1Gb NIC. The setup works fine for backups on most
> systems. The problem I have is an NFS share with 11TB of data that I need
> to back up. Every
Hi
I have a Scalar i500 library with 5 drives and 135 slots. I have a Red Hat 5
server with a 1Gb NIC. The setup works fine for backups on most systems. The
problem I have is an NFS share with 11TB of data that I need to back up. Every
time I run this job it will write about 600GB of data to a tape