On Fri, Jun 16, 2017 at 7:23 PM, Alfredo Deza <ad...@redhat.com> wrote:
> On Fri, Jun 16, 2017 at 2:11 PM, Warren Wang - ISD
> <warren.w...@walmart.com> wrote:
>> I would prefer that this be something more generic, to possibly support
>> other backends one day, like ceph-volume. Creating one tool per backend
>> seems silly.
>>
>> Also, ceph-lvm seems to imply that ceph itself has something to do with lvm, 
>> which it really doesn’t. This is simply to deal with the underlying disk. If 
>> there’s resistance to something more generic like ceph-volume, then it 
>> should at least be called something like ceph-disk-lvm.
>
> Sage, you had mentioned the need for "composable" tools for this, and
> I think that if we go with `ceph-volume` we could allow plugins for
> each strategy. We are starting with `lvm` support so that would look
> like: `ceph-volume lvm`
>
> The `lvm` functionality could be implemented as a plugin itself, and
> when we start working on support for regular disks, `ceph-volume
> disk` can come along, etc.
>
> It would also open the door for anyone to write a plugin for
> `ceph-volume` implementing their own logic, while at the same time
> re-using most of what we are implementing today: logging, reporting,
> systemd support, OSD metadata, etc.
>
> If we were to separate these into single-purpose tools, all those
> would need to be re-done.

Couple of thoughts:
 - let's keep this in the Ceph repository unless there's a strong
reason not to -- it'll let the tool's branching happen automatically
in line with Ceph's.
 - I agree with others that a single entrypoint (i.e. executable) will
be more manageable than having conspicuously separate tools, but we
shouldn't worry too much about making things "plugins" as such -- they
can just be distinct code inside one tool, sharing as much or as
little as they need.
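
To make the "distinct code inside one tool" point concrete, here's a very
rough sketch -- purely illustrative Python, with made-up backend names, a
made-up "prepare" action and "--device" flag rather than anything from the
actual ceph-disk/ceph-volume proposal -- of a single entrypoint dispatching
to separate backend classes without any plugin framework:

    # Illustrative only: one entrypoint, separate backend code, no plugins.
    # The backend names, the "prepare" action and the --device flag are
    # invented for this sketch; they are not the proposed tool's API.
    import argparse


    class LVM(object):
        """LVM-backed OSD handling; would live in its own module."""
        def prepare(self, args):
            print("prepare OSD on logical volume %s" % args.device)


    class Disk(object):
        """Plain-disk handling; separate code, shared only where useful."""
        def prepare(self, args):
            print("prepare OSD on raw device %s" % args.device)


    BACKENDS = {"lvm": LVM(), "disk": Disk()}


    def main(argv=None):
        parser = argparse.ArgumentParser(prog="ceph-volume")
        parser.add_argument("backend", choices=BACKENDS)
        parser.add_argument("action", choices=["prepare"])
        parser.add_argument("--device", required=True)
        args = parser.parse_args(argv)
        getattr(BACKENDS[args.backend], args.action)(args)


    if __name__ == "__main__":
        main()  # e.g. ceph-volume lvm prepare --device vg/osd0

Each backend can share helpers (logging, systemd bits, OSD metadata) as
ordinary imports, without a plugin registration layer on top.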

What if we delivered this set of LVM functionality as "ceph-disk lvm
..." commands to minimise the impression that the tooling is changing,
even if internally it's all new/distinct code?

At the risk of being a bit picky about language, I don't like calling
this anything with "volume" in the name: as far as I know we've never
called OSDs or the drives they occupy "volumes", so we'd be
introducing a whole new noun, and one that is already widely used to
mean different things.

John

>
>
>>
>> 2 cents from one of the LVM for Ceph users,
>> Warren Wang
>> Walmart ✻
>>
>> On 6/16/17, 10:25 AM, "ceph-users on behalf of Alfredo Deza" 
>> <ceph-users-boun...@lists.ceph.com on behalf of ad...@redhat.com> wrote:
>>
>>     Hello,
>>
>>     At the last CDM [0] we talked about `ceph-lvm` and the ability to
>>     deploy OSDs from logical volumes. We now have an initial draft of the
>>     documentation [1] and would like some feedback.
>>
>>     The important features for this new tool are:
>>
>>     * parting ways with udev (new approach will rely on LVM functionality
>>     for discovery)
>>     * compatibility/migration for existing LVM volumes deployed as 
>> directories
>>     * dmcache support
>>
>>     By documenting the API and workflows first, we are making sure that
>>     they look right before starting actual development.
>>
>>     It would be great to get some feedback, especially if you are currently
>>     using LVM with Ceph (or planning to!).
>>
>>     Please note that the documentation is not complete and is missing
>>     content in some places.
>>
>>     [0] http://tracker.ceph.com/projects/ceph/wiki/CDM_06-JUN-2017
>>     [1] http://docs.ceph.com/ceph-lvm/
>>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
