> On my little Windows TSM server (with 2 GB of RAM), I am being killed by
> 12 to 19 gigabyte files.
>
> There are some that 'naturally' come from our Exchange servers (4 per day).
> But there seem to be too many of them for that to be the only source.
>
> Knowing my user data, thinking that these are really aggregates seems
> reasonable to me.
>
> Is there a way to control the maximum size of an aggregate?
>
> If the group has a better idea, I am always open to suggestions!
I have never seen an aggregate reach the kind of size you describe. At our site we have two major sources of multi-gigabyte backup files. One is database backups. The other is TSM backups of output files from other backup products: people run daily backups of systems without TSM coverage and send the output over the network to disks on systems that do have TSM coverage.

If I see a TSM process moving one of these monster files, I can usually run a 'query content' command on the input or output volume to obtain the node, directory path, and name of the file. That usually gives me a pretty good idea of who to ask about the file.
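As a rough sketch of what that session looks like from dsmadmc (the volume name below is just a placeholder; substitute the volume shown by the running process):

    query process
    query content VOL001 count=10

The first command shows the active process and its input/output volumes; the second lists the first few objects stored on the named volume, including the owning node, filespace, and file name, which is usually enough to identify whom to ask.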