Not a full answer to what you’re looking for, but something adjacent:

https://ceph.io/en/news/blog/2025/rgw-tiering-enhancements-part1/

https://ceph.io/en/news/blog/2025/rgw-tiering-enhancements-part2/

I’m not sure if or when this will appear in a Squid release (it may already be
there), but I have to imagine it will be in Tentacle.
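
If those posts are, as I assume, enhancements to the existing cloud-tier
(cloud-s3) transition feature in RGW, the base setup today looks roughly like
the sketch below. This is a minimal example only: default zonegroup and
placement target assumed, and the endpoint, credentials and bucket name are
placeholders rather than anything from your environment.

  # Define a storage class with tier type cloud-s3 on the default placement
  radosgw-admin zonegroup placement add \
        --rgw-zonegroup=default \
        --placement-id=default-placement \
        --storage-class=CLOUDTIER \
        --tier-type=cloud-s3

  # Point that storage class at the remote S3 endpoint; retain_head_object=true
  # leaves a small head (stub) object behind in RGW after data is transitioned
  radosgw-admin zonegroup placement modify \
        --rgw-zonegroup=default \
        --placement-id=default-placement \
        --storage-class=CLOUDTIER \
        --tier-config=endpoint=https://remote-s3.example.com,access_key=ACCESSKEY,secret=SECRETKEY,target_path=archive-bucket,retain_head_object=true

Objects are then moved into that storage class by an ordinary bucket lifecycle
rule with a Transition action, the same as any other storage-class transition.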

Perhaps you could invert the usual tiering logic and point your clients
directly at the Ceph RGW endpoint, but I suspect that wouldn’t meet your
performance needs.

However, at https://www.netapp.com/data-services/tiering/ I see a NetApp
feature called BlueXP tiering that appears to do this sort of thing. The
description is a bit vague, but it speaks of cloud object storage and mentions
AWS, so it seems quite plausible that it could be pointed at an RGW endpoint
as the tiering target.

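If you do evaluate that, it might be worth first confirming that plain S3
tooling can talk to your RGW endpoint, since tiering to a non-AWS target
presumably only needs basic S3 API compatibility. A quick sanity check, where
the hostname and bucket are placeholders and credentials are assumed to be
set up via "aws configure":

  aws --endpoint-url https://rgw.example.com s3 mb s3://tier-test
  aws --endpoint-url https://rgw.example.com s3 cp ./somefile s3://tier-test/
  aws --endpoint-url https://rgw.example.com s3 ls s3://tier-test/
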
> 
> Hi all,
> 
> We're exploring solutions to offload large volumes of data (on the order of 
> petabytes) from our NetApp all-flash storage to our more cost-effective, 
> HDD-based Ceph storage cluster, based on criteria such as: last access time 
> older than X years.
> 
> Ideally, we would like to leave behind a 'stub' or placeholder file on the 
> NetApp side to preserve the original directory structure and potentially 
> enable some sort of transparent access or recall if needed. This kind of 
> setup is commonly supported by solutions like DataCore/FileFly, but as far as 
> we can tell, FileFly doesn’t support Ceph as a backend and instead favors its 
> own Swarm object store.
> 
> Has anyone here implemented a similar tiering/archive/migration solution 
> involving NetApp and Ceph?
> 
> We’re specifically looking for:
> 
> *    Enterprise-grade tooling
> 
> *    Stub file support or similar metadata-preserving offload
> 
> *    Support and reliability (given the scale, we can’t afford data loss or 
> inconsistency)
> 
> *    Either commercial or well-supported open source solutions
> 
> Any do’s/don’ts, war stories, or product recommendations would be greatly 
> appreciated. We’re open to paying for software or services if it brings us 
> the reliability and integration we need.
> 
> Thanks in advance!
> 
> MJ

_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
