On Sat, 8 Jun 2013, Da Chun wrote:
> Failed to clone ceph. Do you have the same problem?
>
> root@ceph-node7:~/workspace# git clone --recursive https://github.com/ceph/ceph.git
> Cloning into 'ceph'...
> remote: Counting objects: 192874, done.
> remote: Compressing objects: 100% (41154/41154), done.
> remote: Total 192874 (delta 155848), reuse
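For what it's worth, when a recursive clone of a large repository dies partway through, you don't have to start from scratch: clone without --recursive first, then fetch the submodules as a separate, retryable step. A minimal sketch of that workflow — a throwaway local repository stands in for the real https://github.com/ceph/ceph.git, so the names here are illustrative only:

```shell
# Sketch of the usual workaround: do the clone and the submodule fetch as
# two separate steps, so whichever one fails can be rerun on its own.
# A throwaway local repo stands in for https://github.com/ceph/ceph.git.
set -e
work=$(mktemp -d)

# Stand-in "upstream" repository with a single commit.
git init -q "$work/upstream"
git -C "$work/upstream" -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"

# Step 1: plain clone, no submodules yet.
git clone -q "$work/upstream" "$work/ceph"

# Step 2: fetch submodules separately; rerun just this step if it fails.
git -C "$work/ceph" submodule update --init --recursive
echo "clone complete"
```

Rerunning step 2 is cheap and idempotent, which is exactly what you want when the failure is a flaky network rather than the repository itself.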
Yes, it works with "-f raw".

"qemu-img convert" has the same problem:

qemu-img convert -f qcow2 -O rbd cirros-0.3.0-x86_64-disk.img rbd:vm_disks/test_disk2
  -> core dump

qemu-img convert -f qcow2 -O raw cirros-0.3.0-x86_64-disk.img rbd:vm_disks/test_disk2
  -> working
-- Original --

On 06/07/2013 04:18 PM, John Nielsen wrote:
> On Jun 7, 2013, at 5:01 PM, Josh Durgin wrote:
>> On 06/07/2013 02:41 PM, John Nielsen wrote:
>>> I am running some qemu-kvm virtual machines via libvirt using Ceph RBD as
>>> the back-end storage. Today I was testing an update to libvirt-1.0.6 on one
>>> of my hosts and discovered that it includes this change:
>>>
>>> [libvirt] [PATCH] Forbid use of ':' in RBD pool names
>>>
>>> ...People ar
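For context on why libvirt went with an outright ban: in qemu's "rbd:" filename syntax, ':' introduces the option list, so a literal ':' inside a pool or image name is ambiguous unless it is backslash-escaped. A rough sketch of the escaping involved — illustrative only, not qemu's actual parser, and the helper name is made up:

```shell
# Sketch only -- not qemu's actual parser.  In qemu's "rbd:" filename
# syntax, ':' starts the option list, so a literal ':' inside a pool or
# image name has to be backslash-escaped (e.g. rbd:my\:pool/disk).
# libvirt 1.0.6 forbids ':' in pool names rather than escaping it.
escape_rbd_component() {
    # Backslash-escape the characters that are meaningful in rbd filenames.
    printf '%s' "$1" | sed -e 's|[\\:@/]|\\&|g'
}

dest="rbd:$(escape_rbd_component 'my:pool')/$(escape_rbd_component 'disk')"
echo "$dest"    # prints rbd:my\:pool/disk
```

Since libvirt builds these strings on the user's behalf, refusing ':' in pool names sidesteps the ambiguity entirely instead of trusting every code path to escape correctly.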
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
On 07.06.2013 16:31, Sage Weil wrote:
> On Fri, 7 Jun 2013, Oliver Schulz wrote:
>> Hello,
>>
>> the Ceph "Hard disk and file system recommendations" page states
>> that XFS is the recommended OSD file system for production systems.
>>
>> Does that still hold true for the latest kernel versions
>> (e.g. Ubuntu 12.04 with the lts-raring kernel 3.8.5)?
>>
>> Would btrfs provide a significant performance incre
>
> Btrfs is the longer-term plan, but we haven't done as much testing there
> yet, and in particular, there is a bug in 3.9 that is triggered by a
> power-cycle and the fixes aren't yet backported to 3.9 stable. Until w