Hi guys,
Has anyone got any experience using bacula with the StorageTek SL-3000? Or the
SL-150, for that matter (they should behave the same way, from what I'm told).
I'm looking for real-life experience here, not something like "if it can be
managed with mtx, then it _should_ work.."
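For context, the wiring I have in mind is the standard mtx-changer setup; a rough bacula-sd sketch of what I'd try (resource names and device paths below are placeholders, not tested against an SL-3000):
Autochanger {
  Name = "SL3000"
  Device = SL3000-Drive-0
  Changer Device = /dev/sg10     # the library's SCSI changer node (placeholder)
  Changer Command = "/etc/bacula/scripts/mtx-changer %c %o %S %a %d"
}
Device {
  Name = SL3000-Drive-0
  Media Type = LTO-5             # placeholder drive type
  Archive Device = /dev/nst0     # non-rewinding tape device (placeholder)
  Autochanger = yes
  AutomaticMount = yes
  RemovableMedia = yes
  Random Access = no
}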
Thanks in advance.
Thanks Davide, that's excellent news.
If anyone else has got anything to add, please do.
/tony
Thanks everyone, sounds like it's going to work fine.
Now I just need to find out if we can also use the T1D drives we're
considering..
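Before anything else I'll do a sanity check from the OS side (assuming Linux; the device paths below are guesses):
lsscsi -g                  # list the drives and the changer with their /dev/sg* nodes
mtx -f /dev/sg10 status    # can mtx talk to the library?
mt -f /dev/nst0 status     # can the OS talk to the drive itself?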
/tony
Hi guys,
I've created an SDFS mount, but when I run backups with bacula to a File device
on the SDFS mount, the dedupe ratio is almost 0. I've run 3 full backups of the
same client; each job is about 2 GB:
[root@dkarhbus02 d0-sdfs]# sdfscli --volume-info
Volume Capacity : 1.5 TB
Volume Current Size : ...
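For completeness, the file device on the SDFS mount is nothing exotic; roughly this (resource names and the mount path are simplified placeholders):
Device {
  Name = SDFS-File
  Media Type = File
  Device Type = File
  Archive Device = /d0-sdfs    # the SDFS mount point
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
}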
Hi John,
Thanks for the input. Compression is not enabled though.
/tony
Blocksize is set to 32k.
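If alignment matters, Bacula's own block size can be pinned to match in the SD Device resource; a hedged sketch (the value is my assumption, and whether it actually helps SDFS's dedupe is untested):
Device {
  ...
  Maximum Block Size = 32768   # assumption: match the 32k SDFS chunk size
}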
Well thanks guys, I guess I'll go with mhvtl and LZO compression then.
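For anyone finding this thread later, LZO is switched on per FileSet; a minimal sketch (names and paths are placeholders; it needs bacula built with LZO support, 5.2 or later):
FileSet {
  Name = "FullSet"
  Include {
    Options {
      signature = MD5
      compression = LZO
    }
    File = /data
  }
}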
/tony
Hi everyone,
Does anyone know whether bacula can be made to work on a block-level dedupe
storage system? Are there any plans to support this technology?
The reason I ask is that I've gotten a few requests from people who have
devices like the Quantum DXi or Data Domain, and these both do block-level dedupe.
OK, it seems to me that there is some confusion about which kind of dedupe I'm
referring to.
What I'm talking about is variable-length, block-based target dedupe; that is,
the DXi or Data Domain box receives all data from the client via the backup
server (bacula SD) and then handles the deduplication itself.
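So the data path is simply:
bacula-fd (client) --> bacula-sd (backup server) --> NFS/CIFS or VTL export on the DXi / Data Domain --> dedupe done entirely inside the appliance
Bacula never sees the dedupe at all; the appliance just looks like an ordinary file system or tape library to the SD.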
Hi Silver,
The downside of client-side dedupe is actually a combination of several things:
1. The client does the dedupe calculations etc.; this takes CPU resources on
the client, which it might not have to spare.
2. Sometimes client-side dedupe is chosen because of bandwidth limitations to
the backup server.
Spooling is definitely off.
I've now used tar to dump the same directory 3 times:
1. Before first tar:
[root@dkarhbus02 download]# sdfscli --volume-info
Volume Capacity : 1.5 TB
Volume Current Size : 681 B
Volume Max Percentage Full : Unlimited
Volume Duplicate Data Written : 0 B
Volume Unique Data Written : ...
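For the record, the whole test is just this loop (source and target paths are mine):
for i in 1 2 3; do
  tar -cf /d0-sdfs/test$i.tar /root/download             # same source directory every pass
  sdfscli --volume-info | grep -E 'Duplicate|Unique'     # the duplicate counter should climb after pass 1
done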
24-Feb 15:33 scorpion-fd JobId 3: Fatal error: Failed to connect to Storage daemon: viper.oleo.co.uk:9103
When working with UNIX-based systems: read the log files, read them again, and
then suddenly it all shows up :o)
You'll get used to it.
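The usual checks for that particular error, for what it's worth (ports assumed to be the defaults):
# on viper (the SD host):
ps ax | grep bacula-sd            # is the storage daemon actually running?
netstat -lnt | grep 9103          # is it listening on 9103?
# from scorpion (the client):
telnet viper.oleo.co.uk 9103      # or is a firewall in the way?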
/tony
Hi all,
Simone, are the RHEL 6 packages compiled with MySQL support? Whenever I try to
start the director, I get this message in the log file:
22-Jan 17:43 bacula-dir JobId 0: Fatal error: postgresql.c:241 Unable to
connect to PostgreSQL server. Database=bacula User=bacula
Possible causes: SQL
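One quick check (assuming the director links its catalog library directly, which may not hold for every package layout):
ldd $(which bacula-dir) | grep -Ei 'mysql|libpq|baccats'
If only libpq shows up, the build is PostgreSQL-only regardless of what bacula-dir.conf asks for.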
Please have a look at the README file at:
http://repos.fedorapeople.org/repos/slaanesh/bacula/README.txt
There's this note:
** The included /usr/share/doc/bacula-common-%{version}/README.Fedora contains
quick installation instructions and notes **
You'll find your quick answer by reading it.