In response to Ivan Adzhubey <[EMAIL PROTECTED]>:
> Hi,
>
> I have a Linux NFS fileserver which has to be backed up to a Bacula
> server on another Linux box. The fileserver in question exports
> everything that needs to be backed up, so all files are also accessible
> on the Bacula server via NFS. Should I run my backups via a remote
> bacula-fd client on the fileserver, or via a local client on the Bacula
> box (reading from the NFS-mounted tree)? Which method do you think will
> give faster data transfers? I can try both and benchmark them, of
> course, but I would appreciate it if anyone has done a similar setup
> already and can share their experience.
It's going to depend on where resources are most available.

If you run the FD on the NFS server, it will use that machine's CPU to do the compression, but will use less network bandwidth. If you run the FD on the Bacula server and pull the data via NFS, the Bacula server will use all the CPU to compress, but more network traffic will be necessary to pull the uncompressed files through NFS.

Also, if you go through NFS you won't be able to take advantage of features such as filesystem snapshots, and depending on your NFS export settings you may hit permission problems.

So which approach is best depends on which of those tradeoffs matters most to you. Note as well that whether or not you actually use software compression will change the balance.

-- 
Bill Moran
http://www.potentialtech.com

_______________________________________________
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users
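For what it's worth, the two approaches differ only in which Client resource the Job points at, so you can benchmark both by running the same Job twice. A minimal director-side sketch (all names, addresses, passwords, and paths below are hypothetical placeholders, not taken from the original thread):

    # bacula-dir.conf fragment -- Option 1: FD runs on the fileserver itself
    Client {
      Name = fileserver-fd
      Address = fileserver.example.com    # hypothetical hostname
      FDPort = 9102
      Catalog = MyCatalog
      Password = "not-the-real-password"
    }

    # Option 2: FD runs locally on the Bacula server, reading the NFS mount
    Client {
      Name = baculaserver-fd
      Address = localhost
      FDPort = 9102
      Catalog = MyCatalog
      Password = "not-the-real-password"
    }

    FileSet {
      Name = "ExportSet"
      Include {
        Options {
          signature = MD5
          compression = GZIP  # software compression runs on whichever host runs the FD
          onefs = no          # needed in Option 2 so the FD descends into the NFS mount
        }
        File = /export/data   # hypothetical path: local in Option 1, NFS mount point in Option 2
      }
    }

    Job {
      Name = "BackupExport"
      Type = Backup
      Client = fileserver-fd  # switch to baculaserver-fd to test Option 2
      FileSet = "ExportSet"
      Storage = File
      Pool = Default
      Messages = Standard
      Schedule = "WeeklyCycle"
    }

Running the job once against each Client and comparing the bytes/elapsed-time figures in the job report gives a direct apples-to-apples benchmark of the two data paths.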