Michael Galloway wrote:

> happy new year all!
>
> my backups of network appliance nfs mounts have become intolerable. i have
> three FAS250s i'm working with: masspec, birch, and aspen. all are connected
> to the same switch via gigE, with single hops to the bacula server (2.2.6,
> patched). the first full i did was this one:
>
>   Job:                    mspec.2007-12-18_21.29.07
>   Backup Level:           Full
>   Client:                 "molbio-fd" 2.2.6 (10Nov07) x86_64-unknown-linux-gnu,redhat,
>   FileSet:                "Mspec Set" 2007-12-18 20:45:51
>   Pool:                   "Full" (From Job resource)
>   Storage:                "LTO4" (From Job resource)
>   Scheduled time:         18-Dec-2007 21:29:24
>   Start time:             18-Dec-2007 21:53:40
>   End time:               19-Dec-2007 15:55:36
>   Elapsed time:           18 hours 1 min 56 secs
>   Priority:               10
>   FD Files Written:       863,458
>   SD Files Written:       863,458
>   FD Bytes Written:       1,825,660,355,131 (1.825 TB)
>   SD Bytes Written:       1,825,879,267,061 (1.825 TB)
>   Rate:                   28123.4 KB/s
>   Software Compression:   None
>   VSS:                    no
>   Encryption:             no
>   Volume name(s):         002045L4|002042L4
>   Volume Session Id:      3
>   Volume Session Time:    1198028560
>   Last Volume Bytes:      960,677,286,912 (960.6 GB)
>   Non-fatal FD errors:    0
>   SD Errors:              0
>   FD termination status:  OK
>   SD termination status:  OK
>   Termination:            Backup OK
>
> an adequate backup rate of 28 MB/s.
> the next filer that got a full was aspen:
>
>   Job:                    aspen.2007-12-20_23.25.27
>   Backup Level:           Full (upgraded from Incremental)
>   Client:                 "molbio-fd" 2.2.6 (10Nov07) x86_64-unknown-linux-gnu,redhat,
>   FileSet:                "Aspen Set" 2007-12-20 23:25:00
>   Pool:                   "Inc" (From Run pool override)
>   Storage:                "LTO4" (From Job resource)
>   Scheduled time:         20-Dec-2007 23:25:00
>   Start time:             20-Dec-2007 23:25:02
>   End time:               21-Dec-2007 23:02:33
>   Elapsed time:           23 hours 37 mins 31 secs
>   Priority:               10
>   FD Files Written:       8,069,999
>   SD Files Written:       8,069,999
>   FD Bytes Written:       990,743,048,680 (990.7 GB)
>   SD Bytes Written:       992,311,396,359 (992.3 GB)
>   Rate:                   11648.8 KB/s
>   Software Compression:   None
>   VSS:                    no
>   Encryption:             no
>   Volume name(s):         002040L4|002049L4
>   Volume Session Id:      16
>   Volume Session Time:    1198028560
>   Last Volume Bytes:      15,757,378,560 (15.75 GB)
>   Non-fatal FD errors:    0
>   SD Errors:              0
>   FD termination status:  OK
>   SD termination status:  OK
>   Termination:            Backup OK
>
> slower at 12 MB/s, but still tolerable. the last full started on christmas
> day, on birch:
>
>   Job:                    birch.2007-12-25_16.47.08
>   Backup Level:           Full
>   Client:                 "molbio-fd" 2.2.6 (10Nov07) x86_64-unknown-linux-gnu,redhat,
>   FileSet:                "Birch Set" 2007-12-22 09:56:56
>   Pool:                   "Full" (From Job resource)
>   Storage:                "LTO4" (From Job resource)
>   Scheduled time:         25-Dec-2007 16:47:11
>   Start time:             25-Dec-2007 16:47:25
>   End time:               31-Dec-2007 23:09:01
>   Elapsed time:           6 days 6 hours 21 mins 36 secs
>   Priority:               10
>   FD Files Written:       16,679,881
>   SD Files Written:       16,679,881
>   FD Bytes Written:       1,105,891,427,122 (1.105 TB)
>   SD Bytes Written:       1,108,951,797,447 (1.108 TB)
>   Rate:                   2043.0 KB/s
>   Software Compression:   None
>   VSS:                    no
>   Encryption:             no
>   Volume name(s):         000299L4
>   Volume Session Id:      5
>   Volume Session Time:    1198587778
>   Last Volume Bytes:      1,504,677,113,856 (1.504 TB)
>   Non-fatal FD errors:    0
>   SD Errors:              0
>   FD termination status:  OK
>   SD termination status:  OK
>   Termination:            Backup OK
>
> not acceptable at 2 MB/s.
> i cannot find any real difference in the network config or nfs mount config
> on these filesystems. i suspect it has to do with the nature of the
> filesystems themselves: masspec has less than a million files, aspen has
> around 8 million, and birch has nearly 17 million.
>
> has anyone had similar experience with nfs backups of this nature? is there
> anything i can do to improve performance and get the filer backed up in a
> reasonable time window?
With the purpose of gathering facts: are these results repeatable?

--
Dan Langille
BSDCan - The Technical BSD Conference : http://www.bsdcan.org/
PGCon - The PostgreSQL Conference: http://www.pgcon.org/

_______________________________________________
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users
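[Editor's note] A quick back-of-the-envelope check of the figures quoted above supports the file-count theory. Dividing FD Bytes Written by FD Files Written gives the average file size on each filer; the numbers below are taken directly from the three job reports:

```shell
#!/bin/sh
# Average file size per filer, computed from the FD Bytes Written and
# FD Files Written figures in the three job reports above.
awk 'BEGIN {
    printf "masspec: %.0f KB/file\n", 1825660355131 /   863458 / 1024
    printf "aspen:   %.0f KB/file\n",  990743048680 /  8069999 / 1024
    printf "birch:   %.0f KB/file\n", 1105891427122 / 16679881 / 1024
}'
```

masspec averages roughly 2 MB per file while birch averages about 65 KB, so for a comparable number of bytes the birch job pays the per-file NFS overhead (lookup, getattr, open/close round trips) roughly thirty times more often. That is consistent with the throughput falling as the file count rises.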
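[Editor's note] One rough way to separate metadata overhead from raw bandwidth on the slow filer is to time a pure directory walk against a pure sequential read on the same mount. This is only a sketch, and the mount-point path below is a placeholder, not one of the actual mounts from the thread:

```shell
#!/bin/sh
# Rough NFS diagnostic sketch. If (files found / walk seconds) comes out
# close to the files-per-second rate implied by the job report, the backup
# is metadata-bound rather than bandwidth-bound.
walk_rate() {
    dir=$1
    start=$(date +%s)
    count=$(find "$dir" -type f | wc -l)
    end=$(date +%s)
    secs=$(( end - start ))
    [ "$secs" -eq 0 ] && secs=1      # avoid divide-by-zero on tiny trees
    echo "$dir: $count files in ${secs}s (~$(( count / secs )) files/s)"
}

# Example invocation (replace with an actual mount point, e.g. the birch mount):
# walk_rate /path/to/nfs/mount

# For the bandwidth side, stream one known large file from the same mount:
# dd if=/path/to/nfs/mount/somebigfile of=/dev/null bs=1M count=1024
```

If the walk rate alone is only a few tens of files per second, no amount of network tuning will help; the fix would have to reduce per-file round trips (mount options such as larger attribute caching, or splitting the job) rather than increase raw throughput.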