Hi all,

We're exploring solutions to offload large volumes of data (on the order of petabytes) from our NetApp all-flash storage to our more cost-effective, HDD-based Ceph storage cluster, based on criteria such as a last access time older than X years.
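
To make the selection criterion concrete, here is a minimal sketch (a hypothetical helper, not part of any existing tool) of how candidate files could be identified by last access time. Note that atime is only meaningful if the NetApp export/mount actually maintains it (relatime/noatime semantics can defeat this):

```python
import os
import time

def find_cold_files(root, years):
    """Yield paths under `root` whose last access time is older than
    `years` years. Illustrative only: production tooling would also need
    to verify that atime is tracked reliably on the source filesystem."""
    cutoff = time.time() - years * 365 * 24 * 3600
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if os.stat(path).st_atime < cutoff:
                    yield path
            except OSError:
                continue  # file vanished or is unreadable; skip it
```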

Ideally, we would like to leave behind a 'stub' or placeholder file on the NetApp side to preserve the original directory structure and potentially enable some sort of transparent access or recall if needed. This kind of setup is commonly supported by solutions like DataCore/FileFly, but as far as we can tell, FileFly doesn’t support Ceph as a backend and instead favors its own Swarm object store.
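
For illustration, the stub idea could be sketched roughly as below (a toy placeholder, assuming a JSON stub and a hypothetical `object_url` for the archived copy). Real HSM products instead use offline attributes or reparse points so that a client access can trigger a transparent recall, which a plain placeholder file cannot do:

```python
import json
import os

def replace_with_stub(path, object_url):
    """Replace `path` with a tiny JSON stub recording where the data went
    and the original metadata. Illustrative sketch only; `object_url` is
    an assumed identifier for the archived object in Ceph."""
    st = os.stat(path)
    stub = {
        "archived_to": object_url,
        "size": st.st_size,
        "mtime": st.st_mtime,
    }
    tmp = path + ".stub.tmp"
    with open(tmp, "w") as f:
        json.dump(stub, f)
    os.replace(tmp, path)  # atomically swap the stub into place
```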

Has anyone here implemented a similar tiering/archive/migration solution involving NetApp and Ceph?

We’re specifically looking for:

* Enterprise-grade tooling
* Stub file support or similar metadata-preserving offload
* Support and reliability (given the scale, we can't afford data loss or inconsistency)
* Either commercial or well-supported open source solutions

Any do’s/don’ts, war stories, or product recommendations would be greatly appreciated. We’re open to paying for software or services if it brings us the reliability and integration we need.

Thanks in advance!

MJ
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io