Peter wrote:
>Maybe I am not clear.
>Once the data reaches the storage system, are there any
>deduplication and data compression applied at that point?
You are clear. To compress and/or deduplicate data, the data must be unencrypted. Encrypted data essentially resembles random noise, so deduplication algorithms cannot operate on it, and compression algorithms fare little better.

However, unencrypted data is far more exposed to costly data breaches and other security incidents, and sending unencrypted data to storage systems often violates regulatory and compliance requirements. Most customers don't want to take on more exposure than necessary, so they send only (or predominantly) already-encrypted data from their servers to their storage systems. z/OS Data Set Encryption is one notable example; Linux dm-crypt/LUKS2 encrypted volumes are another. These customers handle whatever deduplication and/or compression they need well before any data flows to the storage system, not in their storage device tiers. That's also why the CPACF and zEDC functions have been around for so long.

If you still want to assume these security risks, one possible approach is to use the IBM DS8K's Transparent Cloud Tiering feature and apply deduplication and compression in the cloud object storage tier. A variety of cloud object storage providers offer deduplication and compression options, including IBM's own TS7700 series of virtual tape libraries as a notable example.

—————
Timothy Sipples
Senior Architect
Digital Assets, Industry Solutions, and Cybersecurity
IBM Z/LinuxONE, Asia-Pacific
[email protected]

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO IBM-MAIN
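P.S. The point above, that encrypted data resembles random noise and therefore defeats compression, is easy to demonstrate. Here is a minimal sketch using Python's standard zlib module, with `os.urandom` bytes standing in for ciphertext (well-encrypted data is statistically indistinguishable from random bytes); the data sizes and record layout are illustrative assumptions, not anything from a real storage system:

```python
import os
import zlib

# Repetitive "plaintext" such as structured records compresses well.
plaintext = b"customer-record;" * 4096          # 64 KiB of repetitive data

# High-entropy random bytes stand in for ciphertext, which a good
# cipher makes statistically indistinguishable from random noise.
ciphertext_like = os.urandom(len(plaintext))    # 64 KiB of random bytes

compressed_plain = zlib.compress(plaintext, 6)
compressed_random = zlib.compress(ciphertext_like, 6)

print(f"plaintext:       {len(plaintext)} -> {len(compressed_plain)} bytes")
print(f"ciphertext-like: {len(ciphertext_like)} -> {len(compressed_random)} bytes")
```

Running this, the repetitive plaintext shrinks to a tiny fraction of its size, while the random (ciphertext-like) input actually grows slightly, because DEFLATE must fall back to stored blocks plus header overhead. Deduplication fails for the same reason: identical plaintexts encrypted with different keys or IVs produce completely different ciphertexts, so no duplicate chunks are ever found.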
