Hello Michael,

1.  Perhaps I'm misunderstanding, but can Ceph present a SCSI interface?  I
don't understand how that would help reduce the size of the rbd.

4.  Heh. Tell me about it [3].  But based on that experience, it *seemed*
like I could read OK on the different nodes where the rbd was mounted.
Guess there's only one way to find out.
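
If it helps anyone else wanting to test the same thing, this is roughly
what I have in mind (a sketch only, untested; I'm assuming a reasonably
recent rbd that accepts --read-only on map, and "rbd/myimage" is a
placeholder pool/image name):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Map the rbd read-only on each node, then mount it ro.
    my ($pool, $image) = ('rbd', 'myimage');
    system('rbd', 'map', '--read-only', "$pool/$image") == 0
        or die "rbd map failed: $?";
    system('mount', '-o', 'ro', "/dev/rbd/$pool/$image", '/mnt') == 0
        or die "mount failed: $?";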

Thanks for your feedback!

Best Regards,
Jon A

[3] http://lists.ceph.com/pipermail/ceph-users-ceph.com/2013-May/001913.html


On Sun, Oct 20, 2013 at 10:26 PM, Michael Lowe <j.michael.l...@gmail.com> wrote:

> 1. How about enabling trim/discard support in virtio-SCSI and using
> fstrim?  That might work for you.
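>
> In case it helps, the host side of that is scriptable too; something
> like this (a rough sketch, untested, assuming qemu/libvirt new enough
> for discard='unmap' on a virtio-scsi disk, and that the qemu guest
> agent is running in the vm; "myvm" is a placeholder name):
>
>     #!/usr/bin/perl
>     use strict;
>     use warnings;
>
>     my $vm = 'myvm';    # placeholder domain name
>
>     # Ask the guest agent to fstrim mounted filesystems; the freed
>     # blocks are discarded, so the image's actual usage shrinks.
>     system('virsh', 'qemu-agent-command', $vm,
>            '{"execute":"guest-fstrim"}') == 0
>         or warn "guest-fstrim failed: $?";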
>
> 4.  Well, you can mount them rw in multiple VMs, with predictably bad
> results, so I don't see any reason why you couldn't specify ro as a mount
> option and do OK.
>
> Sent from my iPad
>
> On Oct 21, 2013, at 12:09 AM, Jon <three1...@gmail.com> wrote:
>
> Hello,
>
> Are there any current Perl modules for Ceph?  I found a thread [1] from
> 2011 with a version of Ceph::RADOS, but it only has functions to deal with
> pools, and the ->list_pools function causes a segfault.
>
> I'm interested in controlling Ceph via script / application, and I was
> wondering [hoping] if anyone else had a current module before I go
> reinventing the wheel.  (My wheel would likely leverage calls to system()
> and use the rbd/rados/ceph binaries directly, initially...  I'm not
> proficient with C/XS.)
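>
> To give an idea of the sort of thing I mean, the whole "module" might
> start out as thin as this (a sketch with made-up names, untested):
>
>     package Ceph::RBD::CLI;    # hypothetical name, nothing official
>     use strict;
>     use warnings;
>
>     # Thin wrapper around the rbd binary; every method just shells out.
>     sub new {
>         my ($class, %args) = @_;
>         return bless { pool => $args{pool} || 'rbd' }, $class;
>     }
>
>     # List images in the pool.
>     sub list {
>         my ($self) = @_;
>         my @images = qx(rbd ls $self->{pool});
>         chomp @images;
>         return @images;
>     }
>
>     # Clone a protected snapshot of a format 2 image to a new image.
>     sub clone {
>         my ($self, $parent_snap, $child) = @_;
>         return system('rbd', 'clone', "$self->{pool}/$parent_snap",
>                       "$self->{pool}/$child") == 0;
>     }
>
>     1;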
>
> I've been primarily using OpenNebula, though I've evaluated OpenStack,
> CloudStack, and even Eucalyptus, and they all seem to meet ($x-1)/$x of
> my criteria (one project seems to do one thing better than another, but
> they all are missing one feature that another project has--this is a
> generalization, but this isn't the OpenNebula mailing list).  What I'm
> looking to do at the moment is simplify my lab deployments.  My current
> workflow only takes 10 minutes or so to deploy a new vm (a scripted
> sketch of these steps follows the list):
>
> 1) dump xml of existing vm (usually the "base" vm that the template was
> created from; I actually have a "template" that I just copy and modify now)
> 2) clone rbd to new vm (usually using vmname)
> 3) edit vm template to reflect new values
>    -- change name of vm to the new vmname
>    -- remove specific identifiers (MAC, etc., unnecessary when copying a
> "template")
>    -- update disk to reflect new rbd
> 4) log in to console and "pre-provision" vm
>    -- update system
>    -- assign hostname
>    -- generate ssh keys (I remove the sshd host keys when "sysprepping" for
> cloning; Ubuntu, I know for sure, doesn't regenerate the keys on boot, and I
> _THINK_ RHEL might)
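>
> Scripted, steps 1-3 boil down to roughly this (a sketch, untested;
> "template" and the @gold snapshot are placeholder names, and the
> template snapshot is assumed to be a protected format 2 snapshot):
>
>     #!/usr/bin/perl
>     use strict;
>     use warnings;
>
>     my $template = 'template';          # existing "template" vm / rbd
>     my $newvm = shift or die "usage: $0 <vmname>\n";
>
>     # 1) dump the xml of the template vm
>     my $xml = qx(virsh dumpxml $template);
>
>     # 2) clone the protected template snapshot to a new rbd
>     system('rbd', 'clone', "rbd/$template\@gold", "rbd/$newvm") == 0
>         or die "rbd clone failed: $?";
>
>     # 3) edit the xml: new name, drop uuid/MAC so libvirt regenerates
>     #    them, and point the disk at the new rbd
>     $xml =~ s{<name>\Q$template\E</name>}{<name>$newvm</name>};
>     $xml =~ s{<uuid>[^<]*</uuid>\s*}{};
>     $xml =~ s{<mac address='[^']*'/>\s*}{}g;
>     $xml =~ s{rbd/\Q$template\E}{rbd/$newvm}g;
>
>     open my $fh, '>', "/tmp/$newvm.xml" or die $!;
>     print {$fh} $xml;
>     close $fh;
>     system('virsh', 'define', "/tmp/$newvm.xml") == 0
>         or die "virsh define failed: $?";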
>
> I actually already did this work on automating deployments [2], but that
> was back when I was primarily using qcow2 images.  It leverages guestfish
> to do all of the vm "management" (setting IP, hostname, generating ssh host
> keys, etc.).  But now I want to leverage my Ceph cluster for images.
>
> A couple of tangentially related questions that I don't think warrant a
> whole thread (a sketch touching on a few of these follows the list):
>
> 1) Is it possible to zero and compress rbds?  (I like to use virt-sysprep
> and virt-sparsify to prepare my images; then, when I was using qcow images,
> I would compress them before cloning)
> 2) Has anyone used virt-sysprep|virt-sparsify against rbd images?  I
> suppose if I'm creating a template image, I could create the qcow image
> then convert it to an rbd, but qemu-img creates format 1 images.
> 3) Anyone know of a way to create format 2 images with qemu-img?  When I
> specify -O rbd, qemu-img segfaults, and rbd2 is an invalid format.
> 4) Is it possible to mount an RBD to multiple VMs as readonly?  I'm
> thinking of something like readonly iso images converted to rbds.  (Is it
> even possible to convert an iso to an rbd image?)
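>
> For 1-3, the pipeline I'm imagining is roughly this (a sketch,
> untested; I'm assuming rbd import accepts --image-format on current
> versions, and the filenames are made up):
>
>     #!/usr/bin/perl
>     use strict;
>     use warnings;
>
>     my $qcow = 'template.qcow2';
>     my $raw  = 'template.raw';
>
>     # sysprep and sparsify the image while it's still a local file,
>     # then flatten to raw, since rbd images are raw data anyway
>     system('virt-sysprep', '-a', $qcow) == 0 or die "sysprep: $?";
>     system('virt-sparsify', $qcow, "$qcow.sparse") == 0
>         or die "sparsify: $?";
>     system('qemu-img', 'convert', '-O', 'raw', "$qcow.sparse", $raw) == 0
>         or die "convert: $?";
>
>     # import as a format 2 image so it can be cloned later; an iso
>     # could presumably be imported the same way (it's just raw data)
>     system('rbd', 'import', '--image-format', '2', $raw, 'rbd/template') == 0
>         or die "import: $?";
>
> For 4, my guess is that attaching the disk with <readonly/> in the
> libvirt domain XML is the thing to try.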
>
>
> Thanks for your help.
>
> Best Regards,
> Jon A
>
> [1]  http://www.spinics.net/lists/ceph-devel/msg04147.html
> [2]  https://github.com/three18ti/PrepVM-App
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
