Hi Stuart,
If this helps, these three lines will do it for you. I'm sure you could
rustle up a script to go through all of your images and do this for each
one (a rough sketch follows the commands below).
rbd export libvirt-pool/my-server - | rbd import --image-format 2 - libvirt-pool/my-server2
rbd rm libvirt-pool/my-server
rbd mv libvirt-pool/m
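Something like the rough sketch below could serve as that script. It's
untested and only a sketch: it assumes the pool is called libvirt-pool,
that nothing has the images open while they're converted, and the ".fmt2"
suffix is just an illustrative temporary name.

#!/bin/sh
# Rough sketch: re-create every image in a pool as format 2 via export/import.
POOL=libvirt-pool
for IMG in $(rbd ls "$POOL"); do
    # Stream the old image straight into a new format-2 image
    rbd export "$POOL/$IMG" - | rbd import --image-format 2 - "$POOL/$IMG.fmt2"
    # Once you're happy the new image is OK, drop the original and
    # rename the new one into its place
    rbd rm "$POOL/$IMG"
    rbd mv "$POOL/$IMG.fmt2" "$POOL/$IMG"
done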
Hi,
I don't suppose anyone ever managed to look at or fix this issue with
rbd-fuse? Or does anyone know what I might be doing wrong?
Best regards
Graeme
On 07/02/14 12:20, Graeme Lambert wrote:
Hi,
Does anyone know what the issue is with this?
Thanks
*Graeme*
On 06/02/14 13:21, Graeme Lambert wrote:
Hi all,
Can anyone advise what the problem below is with rbd-fuse? From
http://mail.blameitonlove.com/lists/ceph-devel/msg14723.html it looks
like this has happened before but should've been fixed way before now?
rbd-fuse -d -p libvirt-pool -c /etc/ceph/ceph.conf ceph
FUSE library version: 2
"rbd cache = true" in [global] enables
it, but doesn't elaborate on whether you need to restart any Ceph processes.
It's on the client side! (so no need to restart the Ceph daemons)
- Original Message -
From: "Graeme Lambert"
To: ceph-users@lists.ceph.com
Sent: Thursday
Hi,
I've got a few VMs in Ceph RBD that are running very slowly - presumably
down to a backfill after increasing the pg_num of a big pool.
Would RBD caching resolve that issue? If so, how do I enable it? The
documentation states that setting "rbd cache = true" in [global] enables
it, but doesn't elaborate on whether you need to restart any Ceph processes.
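For what it's worth, here is a minimal sketch of what enabling it can look
like on the client. It assumes a librbd/QEMU client that reads
/etc/ceph/ceph.conf; the [client] section and the append-to-file approach
are just one way to do it.

# Sketch only: enable RBD caching for librbd clients on this host.
# It's a client-side setting, so no Ceph daemons need restarting;
# newly started clients (e.g. restarted VMs) will pick it up.
cat >> /etc/ceph/ceph.conf <<'EOF'
[client]
rbd cache = true
EOF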
Hi,
I've got 6 OSDs and I want 3 replicas per object, so following the
formula that's 200 PGs per OSD, which is 1,200 overall.
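(Spelling out that arithmetic, just for reference, using the numbers above:)

# Back-of-the-envelope PG total (illustrative only)
OSDS=6
PGS_PER_OSD=200
echo $((OSDS * PGS_PER_OSD))   # prints 1200, the overall PG target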
I've got two RBD pools and the .rgw.buckets pool that hold considerably
more objects than the others (given
that the RADOS gateway needs
Can you advise on what the issues may be?
Yehuda Sadeh wrote:
> On Wed, Jan 22, 2014 at 8:55 AM, Graeme Lambert wrote:
>> Hi Yehuda,
>>
>> With regards to the health status of the cluster, it isn't healthy
>> but I haven't found any way of fixing t
n't be anything different between them but the level of disk
read across them does seem rather high?
Best regards
Graeme
On 22/01/14 16:55, Graeme Lambert wrote:
Hi Yehuda,
With regards to the health status of the cluster, it isn't healthy but
I haven't found any way of fixing the pl
27.1587 times cluster average (1374)
pool .rgw.buckets objects per pg (76219) is more than 55.4723 times cluster average (1374)
Ignore the cloudstack pool; I was using CloudStack but am not anymore, so
it's an inactive pool.
Best regards
Graeme
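(For context, warnings like the ones above come from "ceph health detail",
and the usual response is to raise pg_num and then pgp_num on the crowded
pool. A sketch only, with 1024 as a placeholder value rather than a
recommendation for this cluster:)

# Illustrative only: inspect the per-pool warnings, then raise the PG
# count on the pool that is far above the cluster average.
ceph health detail
ceph osd pool set .rgw.buckets pg_num 1024
ceph osd pool set .rgw.buckets pgp_num 1024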
On 22/01/14 16:38, Graeme Lambert wrote:
Hi,
Fol
Best regards
Graeme
On 22/01/14 16:28, Yehuda Sadeh wrote:
On Wed, Jan 22, 2014 at 8:05 AM, Graeme Lambert wrote:
Hi,
I'm using the aws-sdk-for-php classes with the Ceph RADOS gateway, but I'm
getting an intermittent issue when uploading files.
I'm attempting to upload an array of objects to Ceph one by one using
the create_object() function. It appears to stop randomly when
attempting to do them all