Hello,
On Fri, 19 Sep 2014 18:29:02 -0700 Craig Lewis wrote:
> I'm personally interested in running Ceph on some RAID-Z2 volumes with
> ZILs. XFS feels really dated after using ZFS. I need to check the
> progress, but I'm thinking of reformatting one node once Giant comes out.
>
I'm looking f
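For anyone wanting to try that layout, here is a minimal sketch (Python driving
the zpool/zfs command-line tools) of what a RAID-Z2 pool with a separate log
(ZIL/SLOG) device backing a single OSD data directory could look like. The pool
name, device paths, mountpoint and the xattr=sa/atime=off settings are
placeholders and assumptions for illustration, not recommendations:

#!/usr/bin/env python
"""Sketch: RAID-Z2 pool plus a separate SLOG (ZIL) device, with one dataset
mounted where a Ceph OSD's data directory would live. All names are
placeholders."""
import subprocess

POOL = "osdpool"                                               # assumed pool name
DATA_DISKS = ["/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde"]  # placeholders
SLOG_DEV = "/dev/nvme0n1"                                      # placeholder fast log device

def run(*args):
    """Run a command, echoing it first so the steps are visible."""
    print("+ " + " ".join(args))
    subprocess.check_call(args)

# RAID-Z2 pool; ashift=12 assumes 4K-sector drives.
run("zpool", "create", "-o", "ashift=12", POOL, "raidz2", *DATA_DISKS)

# Dedicated SLOG so synchronous writes (e.g. the OSD journal) land on fast media.
run("zpool", "add", POOL, "log", SLOG_DEV)

# Settings commonly suggested for Ceph on ZFS-on-Linux: xattrs stored in
# inodes, no atime updates.
run("zfs", "set", "xattr=sa", POOL)
run("zfs", "set", "atime=off", POOL)

# One dataset per OSD, mounted where ceph-osd expects its data directory.
run("zfs", "create", "-o", "mountpoint=/var/lib/ceph/osd/ceph-0",
    POOL + "/ceph-0")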
On Sun, 21 Sep 2014 05:15:32, Robin H. Johnson wrote:

For a variety of reasons, none good anymore, we have two separate Ceph
clusters.

I would like to merge them onto the newer hardware, with as little
downtime and data loss as possible; then discard the old hardware.

Cluster A (2 hosts):
- 3TB of S3 content, >100k files, file mtimes important
- <50
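One possible way to move the S3 side with the mtimes intact would be to copy
bucket contents between the two radosgw endpoints and stash the source
Last-Modified in user metadata, since S3 does not let a client set
Last-Modified on upload. A rough boto sketch; the endpoints, credentials,
bucket name and metadata key are made up for illustration:

#!/usr/bin/env python
"""Sketch: copy every object in one bucket from cluster A's radosgw to
cluster B's, preserving the original Last-Modified as user metadata."""
import boto
import boto.s3.connection

def connect(host, access, secret):
    return boto.connect_s3(
        aws_access_key_id=access,
        aws_secret_access_key=secret,
        host=host,
        is_secure=False,
        calling_format=boto.s3.connection.OrdinaryCallingFormat())

src = connect("rgw.cluster-a.example", "SRC_ACCESS", "SRC_SECRET")
dst = connect("rgw.cluster-b.example", "DST_ACCESS", "DST_SECRET")

src_bucket = src.get_bucket("mybucket")
dst_bucket = dst.lookup("mybucket") or dst.create_bucket("mybucket")

for listed in src_bucket.list():
    # Skip objects already copied with the same size (crude resume support).
    existing = dst_bucket.get_key(listed.name)
    if existing is not None and existing.size == listed.size:
        continue

    # Fine for small objects; stream large ones instead of buffering in memory.
    data = listed.get_contents_as_string()

    new_key = dst_bucket.new_key(listed.name)
    # Preserve the source mtime as user metadata (x-amz-meta-src-mtime).
    new_key.set_metadata("src-mtime", listed.last_modified)
    new_key.set_contents_from_string(data)
    print("copied " + listed.name)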
Excellent, thank you for your response.
Sage Weil writes:
>
> Eventually, yes, but right now only 2 levels are supported.
>
> There is a blueprint, see
>
>
> http://wiki.ceph.com/Planning/Blueprints/Emperor/osd%3A_tiering%3A_object_redirects
>
> sage
>
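For reference, the two supported levels correspond to one cache pool layered
over one base pool. A rough sketch of wiring that up with the standard
"ceph osd tier" commands; the pool names, PG counts and eviction threshold
below are placeholders:

#!/usr/bin/env python
"""Sketch: the 2-level tiering that is supported today, i.e. one cache pool
in front of one base pool. Try on a test cluster first."""
import subprocess

def ceph(*args):
    cmd = ("ceph",) + args
    print("+ " + " ".join(cmd))
    subprocess.check_call(cmd)

BASE, CACHE = "cold-data", "hot-cache"        # hypothetical pool names

ceph("osd", "pool", "create", BASE, "128")    # base (backing) pool
ceph("osd", "pool", "create", CACHE, "128")   # cache pool, ideally on SSD OSDs

# Attach the cache pool in front of the base pool and route client I/O to it.
ceph("osd", "tier", "add", BASE, CACHE)
ceph("osd", "tier", "cache-mode", CACHE, "writeback")
ceph("osd", "tier", "set-overlay", BASE, CACHE)

# Minimal eviction tuning so the tiering agent knows when to flush/evict.
ceph("osd", "pool", "set", CACHE, "hit_set_type", "bloom")
ceph("osd", "pool", "set", CACHE, "target_max_bytes", str(100 * 2**30))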