I don't really understand what the problem is; maybe I am missing something. :-) Just use the MAXSCRATCH value on the storage pool and calculate it so you have enough space free for reclamation and DB backups. You can keep the reclaim threshold at 30-40%, or at least run it at that level for a few hours every day (that is what I did when I still used file pools), to make sure you don't waste too much space on full volumes that never get reclaimed. I think reclaiming at 65% is way too high for disk.
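In case it helps, here is a rough sketch of what I mean by running reclamation only a few hours a day. The pool and schedule names (FILEPOOL, RECLAIM_ON, RECLAIM_OFF) and the times are made up for the example, so adapt them to your environment:

    /* keep reclamation effectively off during the day */
    update stgpool filepool reclaim=100

    /* drop the threshold to 40% each evening and raise it again a few hours later */
    define schedule reclaim_on type=administrative cmd="update stgpool filepool reclaim=40" active=yes starttime=20:00 period=1 perunits=days
    define schedule reclaim_off type=administrative cmd="update stgpool filepool reclaim=100" active=yes starttime=01:00 period=1 perunits=days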
What I did was create a device class with a 20 GB maximum volume size and set MAXSCRATCH on the pool to the total capacity minus roughly 25%; that should leave enough space for DB backups. If the storage pool does hit 100% full you can add storage, and until that is sorted out you can raise the MAXSCRATCH value, so it works as a sort of soft quota (rough numbers are in the P.S. at the bottom of this message). Every replication session needs a volume to replicate to, I think, so with many-to-one replication and many sessions the filling volumes add up quickly; that in itself is nothing to worry about, I think.

On Tue, Apr 25, 2017 at 5:02 PM, Zoltan Forray <zfor...@vcu.edu> wrote:

> I do not think collocation works for a replication target server. After
> spending many hours removing over 300 filling volumes by hand, as soon as
> replication started from 2 source servers, over 100 new filling volumes
> appeared!
>
> On Mon, Apr 24, 2017 at 2:29 PM, Sasa Drnjevic <sasa.drnje...@srce.hr> wrote:
>
> > On 2017-04-24 19:24, Zoltan Forray wrote:
> > > Collocation is also not a good choice. Since this is the replication
> > > target and there are over 700 nodes, that would cause 700 filling
> > > volumes at all times.
> >
> > No, not if you collocate by group. If, for example, you have 8 nodes in a
> > group, they would all fill a single volume.
> >
> > But, of course, it all depends on the size of the nodes, the size of the
> > volumes, the retention period, the total capacity, etc.
> >
> > And maybe you should consider converting your file pools to directory
> > container pools since you are using dedupe... But you had better upgrade
> > all servers to v7.1.7.x or v8.1 first...
> >
> > Regards.
> >
> > --
> > Sasa Drnjevic
> > www.srce.unizg.hr
> >
> > > On Mon, Apr 24, 2017 at 9:41 AM, Sasa Drnjevic <sasa.drnje...@srce.hr>
> > > wrote:
> > >
> > >> On 24.4.2017. 15:29, Zoltan Forray wrote:
> > >>> On Mon, Apr 24, 2017 at 9:02 AM, Sasa Drnjevic <sasa.drnje...@srce.hr>
> > >>> wrote:
> > >>>
> > >>>> - are those volumes R/W? If not, check the ACTLOG.
> > >>>> - check MOUNTLimit for the affected devclass(es)
> > >>>> - check MAXSIze for the affected stg pool(s)
> > >>>
> > >>> Hi Sasa,
> > >>>
> > >>> Thank you for the hints.
> > >>>
> > >>> Yes, all are R/W.
> > >>>
> > >>> How do MOUNTLimit and MAXSize (set to NOLIMIT) affect the filling volumes?
> > >>
> > >> In the case of disk that migrates to tape: if a disk volume is too
> > >> small to hold a big file, the data store process will mount and directly
> > >> use tape instead of disk...
> > >>
> > >> Not sure what happens when only sequential disk is used...
> > >>
> > >> MOUNTLimit could cause trouble if it is set too low, but that does not
> > >> seem to be the case here...
> > >>
> > >> The question is why it is not reusing the 0.5% filling volumes... Can you
> > >> try collocation on a small group of nodes?
> > >>
> > >> Regards.
> > >>
> > >> --
> > >> Sasa Drnjevic
> > >> www.srce.unizg.hr
>
> --
> *Zoltan Forray*
> Spectrum Protect (p.k.a. TSM) Software & Hardware Administrator
> Xymon Monitor Administrator
> VMware Administrator
> Virginia Commonwealth University
> UCC/Office of Technology Services
> www.ucc.vcu.edu
> zfor...@vcu.edu - 804-828-4807
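P.S. For anyone following along, here is roughly what the volume sizing and the collocation-by-group suggestion above could look like in practice. The names (FILEDEV, FILEPOOL, RTGROUP1) and the capacity numbers are made up for illustration only:

    /* 20 GB file volumes on, say, a 100 TB filesystem gives roughly 5120 volumes;
       setting MAXSCRATCH about 25% below that (~3840) acts as the soft quota */
    define devclass filedev devtype=file maxcapacity=20G mountlimit=64 directory=/tsm/filevols
    define stgpool filepool filedev maxscratch=3840 reclaim=100

    /* collocate by group so a handful of nodes share one filling volume */
    update stgpool filepool collocate=group
    define collocgroup rtgroup1
    define collocmember rtgroup1 node1,node2,node3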