Hi Zoltan,

The large number of objects is normal for system state file spaces. System state backup uses grouping, with each backed-up object being a member of a group. If the same object is included in multiple groups, it is counted once per group. Each system state backup creates a new group, so as the number of retained backup versions grows, so does the number of groups, and thus the total object count can become very large.
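The multiplication Andy describes can be sketched with a quick back-of-the-envelope calculation. The retention count below is a hypothetical assumption, not taken from Zoltan's actual configuration; it is chosen only to show how per-group counting can inflate the deleted-object total far beyond what Q OCCUPANCY reports:

```python
# Hypothetical illustration of system state object counting.
# Assumption: each system state backup creates a new group, and every
# backed-up object is counted once per group it belongs to.

objects_per_backup = 5_000_000   # roughly what Q OCCUPANCY reported for one node
retained_versions = 21           # hypothetical number of retained backup groups

# Deleting the filespace removes every group member reference, so the
# total object count is roughly objects-per-backup times retained groups:
total_objects = objects_per_backup * retained_versions

print(f"{total_objects:,}")  # -> 105,000,000
```

With these assumed numbers the result lands in the same range as the *105,511,859 objects deleted* that Zoltan observed against a ~5-million-object occupancy, which is consistent with the grouping explanation above.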
Best regards,

Andy

____________________________________________________________________________
Andrew Raibeck | IBM Spectrum Protect Level 3 | stor...@us.ibm.com

"ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU> wrote on 2019-02-26 10:30:04:

> From: Zoltan Forray <zfor...@vcu.edu>
> To: ADSM-L@VM.MARIST.EDU
> Date: 2019-02-26 10:30
> Subject: Re: Bottomless pit
> Sent by: "ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU>
>
> Just found another node with a similar issue on a different ISP server
> with different software levels (client=7.1.4.4 and OS=Windows 2012R2).
> The node name is the same so I think the application is, as well.
>
> 2019-02-26 08:57:56 Deleting file space ORIONADDWEB\SystemState\NULL\System
> State\SystemState (fsId=1) (backup data) for node ORIONADDWEB: *129,785,134
> objects deleted*.
>
> On Tue, Feb 26, 2019 at 9:15 AM Sasa Drnjevic <sasa.drnje...@srce.hr> wrote:
>
> > On 26.2.2019. 15:01, Zoltan Forray wrote:
> > > Since all of these systemstate deletes crashed/failed, I restarted
> > > them and 2-of the 3 are already up to 5M objects after running for
> > > 30-minutes. Will this ever end successfully?
> >
> > All of mine did finish successfully...
> >
> > But, none of them had more than 25 mil files deleted.
> >
> > Wish you luck ;-)
> >
> > Rgds,
> >
> > --
> > Sasa Drnjevic
> > www.srce.unizg.hr/en/
> >
> > > On Mon, Feb 25, 2019 at 4:25 PM Sasa Drnjevic <sasa.drnje...@srce.hr>
> > > wrote:
> > >
> > >> FYI,
> > >> same here...but my range/ratio was:
> > >>
> > >> ~2 mil occ to 25 mil deleted objects...
> > >>
> > >> Never solved the mystery... gave up :->
> > >>
> > >> --
> > >> Sasa Drnjevic
> > >> www.srce.unizg.hr/en/
> > >>
> > >> On 2019-02-25 20:05, Zoltan Forray wrote:
> > >>> Here is a new one.......
> > >>>
> > >>> We turned off backing up SystemState last week. Now I am going
> > >>> through and deleted the Systemstate filesystems.
> > >>>
> > >>> Since I wanted to see how many objects would be deleted, I did a
> > >>> "Q OCCUPANCY" and preserved the file count numbers for all Windows
> > >>> nodes on this server.
> > >>>
> > >>> For 4 nodes, the deletes of their systemstate filespaces have been
> > >>> running for 5 hours. A "Q PROC" shows:
> > >>>
> > >>> 2019-02-25 08:52:05 Deleting file space
> > >>> ORION-POLL-WEST\SystemState\NULL\System State\SystemState (fsId=1)
> > >>> (backup data) for node ORION-POLL-WEST: *105,511,859 objects deleted*.
> > >>>
> > >>> Considering the occupancy for this node was *~5 million objects*, how
> > >>> has it deleted *105 million* objects (and counting)? The other 3 nodes
> > >>> in question are also up to *>100 million objects deleted*, and none of
> > >>> them had more than *6M objects* in occupancy.
> > >>>
> > >>> At this rate, the deleted-object count for these 4 nodes' systemstate
> > >>> will exceed 50% of the total occupancy objects on this server, which
> > >>> houses the backups for *263 nodes*.
> > >>>
> > >>> I vaguely remember some bug/APAR about systemstate backups being
> > >>> large/slow/causing performance problems with expiration, but these
> > >>> nodes' client levels are fairly current (8.1.0.2 - staying below the
> > >>> 8.1.2 SSL/TLS enforcement levels) and the ISP server is 7.1.7.400.
> > >>> All of these are Windows 2016, if that matters.
> > >>>
> > >>> --
> > >>> *Zoltan Forray*
> > >>> Spectrum Protect (p.k.a. TSM) Software & Hardware Administrator
> > >>> Xymon Monitor Administrator
> > >>> VMware Administrator
> > >>> Virginia Commonwealth University
> > >>> UCC/Office of Technology Services
> > >>> www.ucc.vcu.edu
> > >>> zfor...@vcu.edu - 804-828-4807
> > >>> Don't be a phishing victim - VCU and other reputable organizations
> > >>> will never use email to request that you reply with your password,
> > >>> social security number or confidential personal information.
> > >>> For more details visit http://phishing.vcu.edu/