Re: [ceph-users] Failed to clone ceph

2013-06-07 Thread Sage Weil
On Sat, 8 Jun 2013, Da Chun wrote:
> Failed to clone ceph. Do you have the same problem?
>
> root@ceph-node7:~/workspace# git clone --recursive https://github.com/ceph/ceph.git
> Cloning into 'ceph'...
> remote: Counting objects: 192874, done.
> remote: Compressing objects: 100% (41154/41154), done. …
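
For anyone hitting the same failure, a common recovery path (a generic git workflow, not quoted from the reply) is to clone without --recursive and fetch the submodules in a separate step that can simply be re-run if it breaks:

    # Clone the main repository first, without submodules
    git clone https://github.com/ceph/ceph.git
    cd ceph
    # Fetch and check out the submodules separately; re-run this step on network failure
    git submodule update --init --recursive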

[ceph-users] Failed to clone ceph

2013-06-07 Thread Da Chun
Failed to clone ceph. Do you have the same problem?

root@ceph-node7:~/workspace# git clone --recursive https://github.com/ceph/ceph.git
Cloning into 'ceph'...
remote: Counting objects: 192874, done.
remote: Compressing objects: 100% (41154/41154), done.
remote: Total 192874 (delta 155848), reuse…

Re: [ceph-users] core dump: qemu-img info -f rbd

2013-06-07 Thread Da Chun
Yes, it works with "-f raw". “qemu-img convert” has the same problem:

qemu-img convert -f qcow2 -O rbd cirros-0.3.0-x86_64-disk.img rbd:vm_disks/test_disk2
  core dump
qemu-img convert -f qcow2 -O raw cirros-0.3.0-x86_64-disk.img rbd:vm_disks/test_disk2
  working
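
As an illustrative workaround (not taken from the thread itself), the qcow2 image can also be converted to a plain raw file first and then pushed into the pool with rbd import; the image and pool names below are just the ones from the example commands above:

    # Convert the qcow2 image to a local raw file
    qemu-img convert -f qcow2 -O raw cirros-0.3.0-x86_64-disk.img cirros.raw
    # Import the raw file as an RBD image in the vm_disks pool
    rbd import cirros.raw vm_disks/test_disk2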

Re: [ceph-users] Setting RBD cache parameters for libvirt+qemu

2013-06-07 Thread Josh Durgin
On 06/07/2013 04:18 PM, John Nielsen wrote:
> On Jun 7, 2013, at 5:01 PM, Josh Durgin wrote:
>> On 06/07/2013 02:41 PM, John Nielsen wrote:
>>> I am running some qemu-kvm virtual machines via libvirt using Ceph RBD as the back-end storage. Today I was testing an update to libvirt-1.0.6 on one of my …

Re: [ceph-users] Setting RBD cache parameters for libvirt+qemu

2013-06-07 Thread John Nielsen
On Jun 7, 2013, at 5:01 PM, Josh Durgin wrote:
> On 06/07/2013 02:41 PM, John Nielsen wrote:
>> I am running some qemu-kvm virtual machines via libvirt using Ceph RBD as
>> the back-end storage. Today I was testing an update to libvirt-1.0.6 on one
>> of my hosts and discovered that it includes …

Re: [ceph-users] Setting RBD cache parameters for libvirt+qemu

2013-06-07 Thread Josh Durgin
On 06/07/2013 02:41 PM, John Nielsen wrote:
> I am running some qemu-kvm virtual machines via libvirt using Ceph RBD as the back-end storage. Today I was testing an update to libvirt-1.0.6 on one of my hosts and discovered that it includes this change: [libvirt] [PATCH] Forbid use of ':' …

[ceph-users] Setting RBD cache parameters for libvirt+qemu

2013-06-07 Thread John Nielsen
I am running some qemu-kvm virtual machines via libvirt using Ceph RBD as the back-end storage. Today I was testing an update to libvirt-1.0.6 on one of my hosts and discovered that it includes this change: [libvirt] [PATCH] Forbid use of ':' in RBD pool names ...People ar
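
A commonly used alternative to packing options after ':' into the source name (sketched here as an assumption, not quoted from the replies) is to put the RBD cache settings in ceph.conf on the hypervisor and let libvirt's cache attribute pick the qemu cache mode. Host, pool, and image names below are placeholders, and cephx authentication is omitted:

    # /etc/ceph/ceph.conf on the hypervisor (client side)
    [client]
        rbd cache = true
        rbd cache writethrough until flush = true

    <!-- libvirt disk definition; cache='writeback' selects the qemu cache mode -->
    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source protocol='rbd' name='rbd_pool/vm_image'>
        <host name='ceph-mon1' port='6789'/>
      </source>
      <target dev='vda' bus='virtio'/>
    </disk>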

[ceph-users] subscribe

2013-06-07 Thread Simone Spinelli

Re: [ceph-users] XFS or btrfs for production systems with modern Kernel?

2013-06-07 Thread Stefan Priebe
On 07.06.2013 16:31, Sage Weil wrote:
> On Fri, 7 Jun 2013, Oliver Schulz wrote:
> Btrfs is the longer-term plan, but we haven't done as much testing there yet, and in particular, there is a bug in 3.9 that is triggered by a power-cycle and the fixes aren't yet backported to 3.9 stable. Until w…

Re: [ceph-users] XFS or btrfs for production systems with modern Kernel?

2013-06-07 Thread Sage Weil
On Fri, 7 Jun 2013, Oliver Schulz wrote:
> Hello,
>
> the Ceph "Hard disk and file system recommendations" page states
> that XFS is the recommended OSD file system for production systems.
>
> Does that still hold true for the latest kernel versions
> (e.g. Ubuntu 12.04 with the lts-raring kernel 3.8.5) …

[ceph-users] XFS or btrfs for production systems with modern Kernel?

2013-06-07 Thread Oliver Schulz
Hello,

the Ceph "Hard disk and file system recommendations" page states that XFS is the recommended OSD file system for production systems.

Does that still hold true for the latest kernel versions (e.g. Ubuntu 12.04 with the lts-raring kernel 3.8.5)? Would btrfs provide a significant performance incre…
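
For those staying with the XFS recommendation, a minimal sketch of preparing an OSD data disk follows; the device and mount point are placeholders, and the mount options are common suggestions rather than anything prescribed in this thread:

    # Create an XFS file system on the OSD data partition (hypothetical device)
    mkfs.xfs -f /dev/sdb1
    # Mount it where this OSD expects its data directory
    mkdir -p /var/lib/ceph/osd/ceph-0
    mount -o noatime,inode64 /dev/sdb1 /var/lib/ceph/osd/ceph-0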