Hello,

2013/1/3 Gary R. Schmidt <g...@mcleod-schmidt.id.au>

> > Keep in mind that ZFS block-level deduplication is very expensive in
> > terms of RAM.
> I'd put it as *any* block-level de-duplication is RAM intensive... I'd
> be very interested if someone had an algorithm that was not RAM
> intensive - and didn't take forever to accept data! :-)


If you design a deduplication storage engine properly, you can do it
without a lot of RAM.
We measured only about a 15% performance loss in our lab, which I think
is not bad, and it is still faster than standard gzip compression. In
that case the main performance bottleneck is the client reading the
data, not the storage daemon handling the dedupe lookups.
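
To illustrate the general idea (this is only a rough sketch, not how our
engine actually works), you can keep the digest-to-offset index in a
disk-backed hash store instead of in RAM, so memory use stays flat no
matter how many blocks you ingest. The names below (BLOCK_SIZE,
store_deduped, the file paths) are all hypothetical:

    import dbm
    import hashlib

    BLOCK_SIZE = 64 * 1024  # hypothetical fixed block size for the sketch

    def store_deduped(src_path, store_path, index_path):
        """Append unique blocks to store_path; the digest->offset map
        lives in a disk-backed dbm index, so RAM use stays flat."""
        refs = []  # (offset, length) references describing the source file
        with dbm.open(index_path, "c") as index, \
             open(src_path, "rb") as src, \
             open(store_path, "ab") as store:
            store.seek(0, 2)  # make tell() report the end of the store
            while True:
                block = src.read(BLOCK_SIZE)
                if not block:
                    break
                digest = hashlib.sha256(block).digest()
                if digest in index:
                    offset = int(index[digest])  # duplicate: reuse stored block
                else:
                    offset = store.tell()        # new block: append and index it
                    store.write(block)
                    index[digest] = str(offset)
                refs.append((offset, len(block)))
        return refs

The trade-off is an extra disk lookup per block instead of a RAM lookup,
which is where a modest, bounded performance loss comes from.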

best regards
-- 
Radosław Korzeniewski
rados...@korzeniewski.net