Re: deleting data from a containerpool

2016-08-16 Thread Loon, Eric van (ITOPT3) - KLM
Hi Stefan!
Our database is on SSD in an IBM V3700, but the time needed for a delete
filespace can still be significant. That said, I totally agree: everyone who is
using file device classes or expensive back-end deduplication appliances (like
Data Domain or ProtecTIER) should seriously consider switching to container
pools. We are working on a design for our next TSM servers and we are able to
lower our cost per TB by 75% compared to the old design based on the Data
Domain!
Kind regards,
Eric van Loon
Air France/KLM Storage Engineering

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Stefan 
Folkerts
Sent: dinsdag 16 augustus 2016 8:33
To: ADSM-L@VM.MARIST.EDU
Subject: Re: deleting data from a containerpool

Yes, I too have noticed this and it is something to keep in mind.
At the same time, I think almost everybody using this pool type will be running
the database on SSDs, so the impact should be manageable.
But the directory container pool is still the best thing to happen to Spectrum
Protect since replication came along, if you ask me. A great performance
increase over file-class restores, no more stopping reclaims during the day to
improve restore performance, no more juggling NUMOPENVOLSALLOWED, reclaim
values and the number of processes to balance daily operations against restore
speed... oh, and compression that easily saves 30-50% of storage and license
cost on top of the deduplication!




On Mon, Aug 15, 2016 at 11:20 AM, Loon, Eric van (ITOPT3) - KLM < 
eric-van.l...@klm.com> wrote:

> Hi all!
> After doing some extensive testing with a directory container 
> storagepool I noticed a significant change compared to the old 
> traditional storage pools.
> In a traditional storage pool TSM stores a file as an object. In
> most cases one file is one object, as far as I could see. Deleting this
> data is very fast: a delete filespace runs quickly because TSM
> only has to delete the objects. So deleting a large database client
> with multiple TBs takes a few seconds, or maybe a few minutes.
> When you are using a container storage pool everything changes. Files
> are still stored as objects, but objects are split into chunks. The
> average size of a chunk is approx. 100 KB and TSM performs dedup at
> this chunk level. So if you now delete a large file, TSM has to
> inspect every chunk to see whether it is unique or not. Only if it is
> unique will it be deleted. If you delete a file which is, for
> instance, 40 GB in size, TSM has to inspect around 420,000 chunks
> before the object can be deleted. I noticed that this takes several
> seconds to complete, so one has to take into consideration that the
> deletion of large clients requires significantly more time to complete
> than one is used to.
> Deleting a client with little more than 1 TB of Oracle data was running
> for more than 20 minutes, so a delete filespace for really large
> database clients can run for hours! Has anyone else noticed this behavior too?
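For anyone who wants to sanity-check those figures, here is a back-of-the-envelope sketch. The ~100 KB average chunk size and the timings come from the post above; everything else (the ceiling-division model, the implied per-chunk rate) is an assumption for illustration, not an official Spectrum Protect formula.

```python
# Rough estimate of chunk-inspection work when deleting data from a
# deduplicated container pool. The ~100 KB average chunk size is an
# assumption taken from the observations above.

AVG_CHUNK_BYTES = 100 * 1024          # assumed average chunk size (~100 KB)

def chunks_for(size_bytes: int) -> int:
    """Approximate number of chunks a file of the given size splits into."""
    return -(-size_bytes // AVG_CHUNK_BYTES)   # ceiling division

def est_delete_seconds(size_bytes: int, chunks_per_second: float) -> float:
    """Estimated deletion time, given an assumed chunk-inspection rate."""
    return chunks_for(size_bytes) / chunks_per_second

GB = 1024 ** 3
TB = 1024 ** 4

# A 40 GB file splits into roughly 420,000 chunks, matching the post:
print(chunks_for(40 * GB))            # 419431

# If ~1 TB of Oracle data took ~20 minutes, the implied inspection rate:
rate = chunks_for(1 * TB) / (20 * 60)  # chunks per second
print(round(rate))                     # ~8948 chunks/s implied
```

The point of the model is that deletion cost scales with chunk count rather than object count, which is why large filespaces suddenly take so long.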
> Kind regards,
> Eric van Loon
> Air France/KLM Storage Engineering
> 
> For information, services and offers, please visit our web site:
> http://www.klm.com. This e-mail and any attachment may contain 
> confidential and privileged material intended for the addressee only. 
> If you are not the addressee, you are notified that no part of the 
> e-mail or any attachment may be disclosed, copied or distributed, and 
> that any other action related to this e-mail or attachment is strictly 
> prohibited, and may be unlawful. If you have received this e-mail by 
> error, please notify the sender immediately by return e-mail, and delete this 
> message.
>
> Koninklijke Luchtvaart Maatschappij NV (KLM), its subsidiaries and/or 
> its employees shall not be liable for the incorrect or incomplete 
> transmission of this e-mail or any attachments, nor responsible for any delay 
> in receipt.
> Koninklijke Luchtvaart Maatschappij N.V. (also known as KLM Royal 
> Dutch
> Airlines) is registered in Amstelveen, The Netherlands, with 
> registered number 33014286
> 
>


Tape predictive failure data: What do you use to read the LTO CM?

2016-08-16 Thread Michaud, Luc [Analyste principal - environnement AIX]
Hello gents,
I'm new to the backup operations business, so bear with me, please.
I'm looking for a way to determine the health status of each of my 750 LTO4
cartridges (700 in the vault and 50 in the library).
The TS3310 library is controlled directly (primary) by my TSM 7.1.1.300 (AIX)
server, and has no partitioning nor secondary servers.
I know that all LTO drives can read the LTO CM chip, and that the ITDT command
< U > can be used for that purpose.
So here's where I need your experience with the 3 scenarios I'm looking at:

1.   Scenario 1: query TSM for health information for all the LTO4 volumes -
but the PMR reply was that this does not exist

a.   QUESTION: Shouldn't this type of media health management be part of
any modern backup software?

2.   Scenario 2: disable 1 drive in TSM, use a tool to mount tapes into
that drive, and use ITDT to read off the data

a.   QUESTION: Does anyone have a sample command-line tool I can use to
mount a given LTO4 cartridge from a TS3310 into a given drive?

3.   Scenario 3: go around with an Android NFC-enabled device to read off
the LTO CM (MIFARE DESFire)

a.   QUESTION: Can anyone share a success story with this approach?
I'm also quite interested in how you guys go about this, as I am about to go
into an RFP for a replacement LTO7 library.
Regards,
Luc
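A possible starting sketch for Scenario 2: a small wrapper that cycles each cartridge through one drive that has been taken offline in TSM and runs a diagnostic command against it. The device names, element addresses, slot range, and the ITDT subcommands shown ("move" for the changer, and "cmdump" as a stand-in for the cartridge-memory read) are assumptions/placeholders, not verified ITDT syntax; check itdt's own help and your library's element-address report before running anything.

```python
# Sketch: cycle every cartridge through one freed drive for diagnostics.
# All device names, element addresses and ITDT subcommands below are
# ASSUMED placeholders -- verify against your environment first.
import subprocess

CHANGER_DEV = "/dev/smc0"    # assumed medium-changer device on AIX
DRIVE_DEV = "/dev/rmt1"      # assumed device of the drive disabled in TSM
DRIVE_ELEMENT = 257          # assumed element address of that drive

def commands_for(slot_elements):
    """Build the (assumed) ITDT command lines to move each slot's cartridge
    into the diagnostic drive, read it, and move it back."""
    cmds = []
    for slot in slot_elements:
        cmds.append(["itdt", "-f", CHANGER_DEV, "move",
                     str(slot), str(DRIVE_ELEMENT)])
        cmds.append(["itdt", "-f", DRIVE_DEV, "cmdump"])  # placeholder command
        cmds.append(["itdt", "-f", CHANGER_DEV, "move",
                     str(DRIVE_ELEMENT), str(slot)])
    return cmds

if __name__ == "__main__":
    for cmd in commands_for(range(4096, 4146)):  # assumed slot element range
        print(" ".join(cmd))
        # subprocess.run(cmd, check=True)        # uncomment to actually run
```

Printing the commands first (a dry run) before uncommenting the subprocess call is deliberate, since a wrong element address would physically move the wrong cartridge.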


Re: Very weird design change SAP HANA client

2016-08-16 Thread Del Hoobler
Hi Eric,

The primary reason the ERP clients store backups as archive objects is the 
requirement to be able to group multiple independent objects into a 
logical backup set.  The ERP clients were written before the Spectrum 
Protect server implemented the grouping constructs for backups and so it 
was architected to use the archive description string as a mechanism to 
logically group multiple objects.

Del
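A toy illustration of the grouping mechanism Del describes: independent archive objects bound into one logical backup set by a shared description string. The object names and the description format are invented purely to show the idea; they are not actual ERP client output.

```python
# Toy model: group independent archive objects into logical backup sets
# keyed by a shared description string, as the ERP clients do. All names
# and the description format are invented for illustration only.
from collections import defaultdict

# (object_name, archive_description) pairs as they might sit on the server
archived = [
    ("/oracle/SID/data1.dbf", "SAP_SID_20160816120000"),
    ("/oracle/SID/data2.dbf", "SAP_SID_20160816120000"),
    ("/oracle/SID/ctrl.ctl",  "SAP_SID_20160816120000"),
    ("/oracle/SID/data1.dbf", "SAP_SID_20160815120000"),
]

def backup_sets(objects):
    """Group objects into logical backup sets keyed by description string."""
    sets = defaultdict(list)
    for name, desc in objects:
        sets[desc].append(name)
    return dict(sets)

sets = backup_sets(archived)
print(len(sets))                                 # 2 logical backup sets
print(len(sets["SAP_SID_20160816120000"]))       # 3 objects in the newest set
```

The description string effectively plays the role that native backup-group constructs play for the other TDP clients.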




"ADSM: Dist Stor Manager"  wrote on 08/02/2016 
05:45:52 AM:

> From: "Loon, EJ van (ITOPT3) - KLM" 
> To: ADSM-L@VM.MARIST.EDU
> Date: 08/02/2016 05:46 AM
> Subject: Re: Very weird design change SAP HANA client
> Sent by: "ADSM: Dist Stor Manager" 
> 
> Hi Del!
> Thanks again for the explanation. I will plan a meeting with our SAP
> guys and discuss with them what to do.
> Just out of curiosity: why is the DP for SAP HANA and the DP for ERP
> client creating archive files where the other TDP clients are all 
> creating backup files?
> Kind regards,
> Eric van Loon
> Air France/KLM Storage Engineering
> 
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On 
> Behalf Of Del Hoobler
> Sent: maandag 1 augustus 2016 17:19
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: Very weird design change SAP HANA client
> 
> Hi Eric,
> 
> I don't have much else to offer other than monitoring the SAP HANA 
> backups to ensure any failures are caught and corrected promptly. If
> you have a two week retention period, you should be able to detect 
> failed backups and react well before all backups have expired. 
> 
> It may also help if you placed a requirement against SAP directly to
> provide the enhancement as well. If they see more heat on this, it 
> could motivate them to release this sooner and specify a target date. 
> 
> Thank you,
> 
> Del
> 
> 
> 
> "ADSM: Dist Stor Manager"  wrote on 08/01/2016
> 05:03:29 AM:
> 
> > From: "Loon, EJ van (ITOPT3) - KLM" 
> > To: ADSM-L@VM.MARIST.EDU
> > Date: 08/01/2016 05:04 AM
> > Subject: Re: Very weird design change SAP HANA client
> > Sent by: "ADSM: Dist Stor Manager" 
> > 
> > Hi Del!
> > Thank you very much for your explanation. And sorry for blaming IBM 
> > for this. I'm really puzzled over what to do next. Like I said, 
> > implementing policy based expiration introduces the risk of losing all
> > your backups when a client stops backing up for a certain amount of 
> > time. The option of deleting backup data through SAP HANA Studio is 
> > also not very attractive: I know the customer will do this in the 
> > beginning, but over time they will become sloppy or just forget...
> > Kind regards,
> > Eric van Loon
> > Air France/KLM Storage Engineering
> > 
> > -Original Message-
> > From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf 
> > Of Del Hoobler
> > Sent: vrijdag 29 juli 2016 19:05
> > To: ADSM-L@VM.MARIST.EDU
> > Subject: Re: Very weird design change SAP HANA client
> > 
> > Eric,
> > 
> > This "design change" is a "change" from the Data Protection for ERP 
> > perspective, but not from the Data Protection for ERP for SAP HANA 
> > perspective, which has always worked this way.
> > 
> > This design "change" is a result of a current limitation in the SAP 
> > HANA BACKINT API and is expected to be temporary.  This backup API 
> > streams the backup data to the DP for SAP HANA client via named pipes 
> > and today it gives no indication whether the data stream was complete 
> > or the pipe was closed prematurely due to some error.
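That named-pipe limitation can be demonstrated in miniature with a plain POSIX pipe (a generic illustration, not the BACKINT protocol itself): to the reader, a writer that finished cleanly and one that stopped halfway both just deliver end-of-file.

```python
# From the reading end of a pipe, a clean close and a premature close are
# indistinguishable: both simply deliver EOF. Without an in-band "done"
# marker, the consumer cannot tell a complete stream from a truncated one.
import os

def read_stream(payload: bytes, cut_at):
    """Write payload into a pipe, optionally stopping early, and read it back."""
    r, w = os.pipe()
    data = payload if cut_at is None else payload[:cut_at]
    os.write(w, data)
    os.close(w)                      # clean close and a "crash" look identical
    out = b""
    while True:
        chunk = os.read(r, 4096)
        if not chunk:                # EOF -- but complete or truncated?
            break
        out += chunk
    os.close(r)
    return out

full = read_stream(b"backup-data" * 3, None)
cut = read_stream(b"backup-data" * 3, 5)
print(full == b"backup-data" * 3)    # True: the reader got everything...
print(len(cut))                      # ...but a truncated stream also "ends" normally
```

This is exactly why, as described above, the client cannot safely expire an older version on the strength of an apparently finished stream alone.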
> > 
> > We don't want to expire a prior/older backup version unless we know we
> > have a successful new backup version so we do not currently offer 
> > expiration of backups based on version limit.  However, we do plan to 
> > provide that capability once SAP implements the enhancement to the 
> > backup API we have requested (to indicate whether or not all the data 
> > was streamed  successfully).  SAP did indicate they plan to provide 
> > that enhancement but we do not yet have a target date for that.
> > 
> > 
> > Thank you,
> > 
> > Del
> > 
> > 
> > 
> > "ADSM: Dist Stor Manager"  wrote on 07/29/2016
> > 07:23:32 AM:
> > 
> > > From: "Loon, EJ van (ITOPT3) - KLM" 
> > > To: ADSM-L@VM.MARIST.EDU
> > > Date: 07/29/2016 07:24 AM
> > > Subject: Very weird design change SAP HANA client
> > > Sent by: "ADSM: Dist Stor Manager" 
> > > 
> > > Hello all!
> > > Recently we started to use the Data Protection for SAP HANA client.
> > > I created a TSM node identical to the already existing Data Protection
> > > for SAP node, and now the customer reported that he received the
> > > following error message after a successful backup:
> > > 
> > > BKI8649E: The automatic deletion of backups is not supported. Change
> > > the value of the MAX_VERSIONS parameter to 0
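For reference, MAX_VERSIONS is a parameter in the Data Protection for SAP profile (the .utl file). A minimal fragment, with the file name and surrounding layout assumed:

```
# initSID.utl -- Data Protection for SAP profile fragment (file name assumed)
# 0 disables client-side version-based expiration, as BKI8649E requests:
MAX_VERSIONS 0
```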
> > > 
> > > I googled for the BKI86

Re: Very weird design change SAP HANA client

2016-08-16 Thread Loon, Eric van (ITOPT3) - KLM
Hi Del!
Thank you very much for the explanation!
Kind regards,
Eric van Loon
Air France/KLM Storage Engineering


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Del 
Hoobler
Sent: dinsdag 16 augustus 2016 15:59
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Very weird design change SAP HANA client

Hi Eric,

The primary reason the ERP clients store backups as archive objects is the 
requirement to be able to group multiple independent objects into a logical 
backup set.  The ERP clients were written before the Spectrum Protect server 
implemented the grouping constructs for backups and so it was architected to 
use the archive description string as a mechanism to logically group multiple 
objects.

Del




"ADSM: Dist Stor Manager"  wrote on 08/02/2016
05:45:52 AM:

> From: "Loon, EJ van (ITOPT3) - KLM" 
> To: ADSM-L@VM.MARIST.EDU
> Date: 08/02/2016 05:46 AM
> Subject: Re: Very weird design change SAP HANA client Sent by: "ADSM: 
> Dist Stor Manager" 
> 
> Hi Del!
> Thanks again for the explanation. I will plan a meeting with our SAP 
> guys and discuss with them what to do.
> Just out of curiosity: why is the DP for SAP HANA and the DP for ERP 
> client creating archive files where the other TDP clients are all 
> creating backup files?
> Kind regards,
> Eric van Loon
> Air France/KLM Storage Engineering
> 
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf 
> Of Del Hoobler
> Sent: maandag 1 augustus 2016 17:19
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: Very weird design change SAP HANA client
> 
> Hi Eric,
> 
> I don't have much else to offer other than monitoring the SAP HANA 
> backups to ensure any failures are caught and corrected promptly. If 
> you have a two week retention period, you should be able to detect 
> failed backups and react well before all backups have expired.
> 
> It may also help if you placed a requirement against SAP directly to 
> provide the enhancement as well. If they see more heat on this, it 
> could motivate them to release this sooner and specify a target date.
> 
> Thank you,
> 
> Del
> 
> 
> 
> "ADSM: Dist Stor Manager"  wrote on 08/01/2016
> 05:03:29 AM:
> 
> > From: "Loon, EJ van (ITOPT3) - KLM" 
> > To: ADSM-L@VM.MARIST.EDU
> > Date: 08/01/2016 05:04 AM
> > Subject: Re: Very weird design change SAP HANA client Sent by: "ADSM: 
> > Dist Stor Manager" 
> > 
> > Hi Del!
> > Thank you very much for your explanation. And sorry for blaming IBM 
> > for this. I'm really puzzled over what to do next. Like I said, 
> > implementing policy based expiration introduces the risk of losing 
> > all

> > your backups when a client stops backing up for a certain amount of 
> > time. The option of deleting backup data through SAP HANA Studio is 
> > also not very attractive: I know the customer will do this in the 
> > beginning, but over time they will become sloppy or just forget...
> > Kind regards,
> > Eric van Loon
> > Air France/KLM Storage Engineering
> > 
> > -Original Message-
> > From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On 
> > Behalf Of Del Hoobler
> > Sent: vrijdag 29 juli 2016 19:05
> > To: ADSM-L@VM.MARIST.EDU
> > Subject: Re: Very weird design change SAP HANA client
> > 
> > Eric,
> > 
> > This "design change" is a "change" from the Data Protection for ERP 
> > perspective, but not from the Data Protection for ERP for SAP HANA 
> > perspective, which has always worked this way.
> > 
> > This design "change" is a result of a current limitation in the SAP 
> > HANA BACKINT API and is expected to be temporary.  This backup API 
> > streams the backup data to the DP for SAP HANA client via named 
> > pipes and today it gives no indication whether the data stream was 
> > complete or the pipe was closed prematurely due to some error.
> > 
> > We don't want to expire a prior/older backup version unless we know 
> > we

> > have a successful new backup version so we do not currently offer 
> > expiration of backups based on version limit.  However, we do plan 
> > to provide that capability once SAP implements the enhancement to 
> > the backup API we have requested (to indicate whether or not all the 
> > data was streamed  successfully).  SAP did indicate they plan to 
> > provide that enhancement but we do not yet have a target date for that.
> > 
> > 
> > Thank you,
> > 
> > Del
> > 
> > 
> > 
> > "ADSM: Dist Stor Manager"  wrote on 07/29/2016
> > 07:23:32 AM:
> > 
> > > From: "Loon, EJ van (ITOPT3) - KLM" 
> > > To: ADSM-L@VM.MARIST.EDU
> > > Date: 07/29/2016 07:24 AM
> > > Subject: Very weird design change SAP HANA client Sent by: "ADSM: 
> > > Dist
> 
> > > Stor Manager" 
> > > 
> > > Hello all!
> > > Recently we started to use the Data Protection for SAP HANA client. 
> > > I created a TSM node i