A couple of days ago I did my first production full backups. While trying to do a test restore I ran into problems.
The director ran into FreeBSD's per-process memory limit of 512 MB, and I had to increase the per-process limit to 1.5 GB to get the list of files to select from. While monitoring the process I saw the director use 1.2 GB of memory to provide the list.
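In case it helps anyone, raising the limit on FreeBSD can be done with the loader tunables in /boot/loader.conf (values in bytes; this is a sketch of what I believe I set, so double-check before copying it):

    # raise the per-process data segment limits to 1.5 GB
    kern.maxdsiz="1610612736"
    kern.dfldsiz="1610612736"

These only take effect after a reboot; the datasize entries in /etc/login.conf would be the per-login-class alternative.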
Does that mean the director must load into memory the entire list of files that were backed up? This particular backup was 3.2 million files.
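Dividing the 1.2 GB by 3.2 million files comes out to roughly 375 bytes per file entry, so I assume the memory use grows more or less linearly with the number of files in the job.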
Is anyone dealing with a large number of files who could shed some light on the memory requirements? I may need to move the director off the server I installed it on if the memory requirement goes any higher than 1.5 GB.
Couldn't the director be changed to read just the list of directories first, and then load the files under a particular directory only when the user does 'cd' into it? It seems that approach would be faster and use significantly less memory.
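I haven't read the director code, but from my reading of the standard catalog tables (Path, Filename, File), pulling a single directory's entries looks like a cheap query, roughly along these lines (the JobId and path are made up, and the column names may be off):

    -- files recorded under one directory for one job
    SELECT fn.Name
      FROM File f
      JOIN Path p      ON p.PathId = f.PathId
      JOIN Filename fn ON fn.FilenameId = f.FilenameId
     WHERE f.JobId = 12345
       AND p.Path  = '/usr/local/etc/';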
Also, it took over an hour for the prompt to appear before I could traverse the list of backed-up directories. Which is more likely to be the bottleneck, the director or the database? Right now I have them on the same machine, but I could split them and put the most demanding process on a better machine.
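Unless someone has a better idea, my plan is to watch top while the tree is being built and to turn on slow-query logging in postgresql.conf at the same time, something like:

    # log any statement that takes longer than 1 second (value in ms)
    log_min_duration_statement = 1000

I'm assuming that if long-running catalog queries show up in the log the database is the bottleneck, and otherwise it's the director building the tree in memory.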
My setup is Bacula 1.38.8, PostgreSQL 8.1.3, and FreeBSD 5.4, with the Director, Storage daemon, and PostgreSQL database all on the same machine. The machine has 2 GB of RAM and its IDE disks are in RAID 5.
I have two faster machines coming on board soon, so splitting those processes is doable.