On Wed, Sep 4, 2013 at 2:24 AM, Jens-Christian Fischer
<jens-christian.fisc...@switch.ch> wrote:
> Hi Greg
>
>> If you saw your existing data migrate that means you changed its
>> hierarchy somehow. It sounds like maybe you reorganized your existing
>> nodes slightly, and that would certainly do it (although simply adding
>> single-node higher levels would not). It's also possible that you
>> introduced your SSD devices/hosts in a way that your existing data
>> pool rules believed they should make use of them (if, for instance,
>> your data pool rule starts out at root and you added your SSDs
>> underneath there). What you'll want to do is add a whole new root for
>> your SSD nodes, and then make the SSD pool rule (and only that rule)
>> start out there.
>
> And that is the problem: The SSDs are in the same physical servers as the 
> SATA drives. Adding them to the hosts adds them into the hierarchy. Adding 
> them to "virtual hosts" (a host name that doesn't exist) breaks the startup 
> scripts.
>
> Can I add the SSD OSDs directly to a new root without having them in the host 
> hierarchy?
>
> If you have a solution that solves either of these problems, I'm all ears :)

"breaks the startup scripts?" They will generally place the OSD in a
bucket named after the host, but you can override that behavior by
specifying "osd crush update on start = false" (so it won't change
their crush location) and then placing them in the hierarchy wherever
you like
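As a rough sketch of what that could look like (bucket and host names
here are made up for illustration; adjust IDs and weights to your
cluster):

```
# ceph.conf — stop the startup scripts from relocating OSDs on boot
[osd]
osd crush update on start = false
```

```
# Create a separate root for the SSDs, with per-host "virtual" buckets
# that exist only in the CRUSH map, not as real hostnames:
ceph osd crush add-bucket ssd-root root
ceph osd crush add-bucket node1-ssd host
ceph osd crush move node1-ssd root=ssd-root

# Place an SSD OSD (e.g. osd.12, weight 1.0) under the new root:
ceph osd crush set osd.12 1.0 root=ssd-root host=node1-ssd
```

Then point only the SSD pool's CRUSH rule at ssd-root; the existing
SATA rules keep starting from the original root and never see the SSDs.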
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
