Thanks, Josh. I will give your suggestion a try with multiple cinder-volume
instances, though I am still not sure whether cinder-scheduler is smart enough
to know which instance an API request should be routed to when a volume-type
is specified.
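For what it is worth, the usual way to make that routing work is the
multi-backend setup in cinder.conf plus a volume_backend_name extra spec on
the volume type; a minimal sketch, with made-up backend and type names:

# cinder.conf -- two hypothetical Ceph-backed backends
[DEFAULT]
enabled_backends = rbd-fast,rbd-slow

[rbd-fast]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = fast
volume_backend_name = RBD_FAST

[rbd-slow]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = slow
volume_backend_name = RBD_SLOW

# tie a volume type to one backend; the filter scheduler matches on
# volume_backend_name, which is what routes the create request
cinder type-create fast
cinder type-key fast set volume_backend_name=RBD_FAST

With that in place, each backend is served by its own cinder-volume service,
and the scheduler picks the one whose backend name matches the requested type.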
--weiguo
I have some questions regarding Ceph running on CentOS since I'm
considering going that route rather than Ubuntu 12.04 for my Ceph cluster
rebuild (moving from Arch to CentOS 6.4 in-place).
First, I noticed that
http://ceph.com/docs/next/install/os-recommendations/ doesn't
flag CentOS 6.3 with note
On Mon, Jul 1, 2013 at 10:13 AM, Alex Bligh wrote:
>
> On 1 Jul 2013, at 17:37, Gregory Farnum wrote:
>
>> Oh, that's out of date! PG splitting is supported in Cuttlefish:
>> "ceph osd pool set pg_num "
>> http://ceph.com/docs/master/rados/operations/control/#osd-subsystem
>
> Ah, so:
> pg_num:
On 1 Jul 2013, at 17:37, Gregory Farnum wrote:
> Oh, that's out of date! PG splitting is supported in Cuttlefish:
> "ceph osd pool set pg_num "
> http://ceph.com/docs/master/rados/operations/control/#osd-subsystem
Ah, so:
pg_num: The placement group number.
means
pg_num: The number of placement groups
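For anyone following along, a quick sketch of what a split looks like in
practice; the pool name "rbd" and the target of 256 are only examples, and
pgp_num needs the same bump before the new placement groups are actually
rebalanced:

ceph osd pool get rbd pg_num        # check the current value
ceph osd pool set rbd pg_num 256    # split into more placement groups
ceph osd pool set rbd pgp_num 256   # let the new PGs be placed and rebalanced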
On Mon, Jul 1, 2013 at 9:15 AM, Alex Bligh wrote:
>
> On 1 Jul 2013, at 17:02, Gregory Farnum wrote:
>
>> It looks like probably your PG counts are too low for the number of
>> OSDs you have.
>> http://ceph.com/docs/master/rados/operations/placement-groups/
>
> The docs you referred Pierre to say:
On 1 Jul 2013, at 17:02, Gregory Farnum wrote:
> It looks like probably your PG counts are too low for the number of
> OSDs you have.
> http://ceph.com/docs/master/rados/operations/placement-groups/
The docs you referred Pierre to say:
"Important Increasing the number of placement groups in a p
On Mon, Jul 1, 2013 at 8:49 AM, Pierre BLONDEAU
wrote:
> Hi,
>
> I use Ceph 0.64.1 on Debian Wheezy for the servers and Ubuntu
> Precise (with the Raring kernel 3.8.0-25) as the client.
>
> My problem is the distribution of data on the cluster. I have 3 servers, each
> with 6 OSDs, but the dist
Hi,
I use Ceph 0.64.1 on Debian Wheezy for the servers and Ubuntu
Precise (with the Raring kernel 3.8.0-25) as the client.
My problem is the distribution of data on the cluster. I have 3 servers,
each with 6 OSDs, but the distribution is very heterogeneous:
86% /var/lib/ceph/osd/ceph-15
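A few commands that can help narrow down whether an imbalance like this comes
from uneven CRUSH weights or from too few placement groups; the OSD id and
weight in the last line are only examples:

ceph osd tree                # CRUSH weights per OSD -- uneven weights skew placement
ceph osd dump | grep pool    # pg_num / pgp_num per pool -- low counts cause hot spots
ceph osd reweight 15 0.9     # temporary override to drain an overfull OSD a bit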
Hello
Are there any known problems with using an rbd volume as swap? I would like to
create a swap file on an rbd volume with an ext4 filesystem; the volume has a
parent rbd volume:
# rbd info volume-f3e2ba2d-be66-4090-8acf-edcfbe87eada
rbd image 'volume-f3e2ba2d-be66-4090-8acf-edcfbe87eada':
size 8192 MB
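For reference, a minimal sketch of the setup being asked about, assuming the
kernel rbd client on the host; the pool name, device path, mount point and
swap file size are assumptions:

rbd map volumes/volume-f3e2ba2d-be66-4090-8acf-edcfbe87eada   # maps to e.g. /dev/rbd0
mount /dev/rbd0 /mnt/swapvol                                  # the clone already carries ext4
dd if=/dev/zero of=/mnt/swapvol/swapfile bs=1M count=4096     # preallocate; swapon rejects files with holes
mkswap /mnt/swapvol/swapfile
swapon /mnt/swapvol/swapfile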
On Mon, Jul 1, 2013 at 8:10 AM, Sage Weil wrote:
> On Sun, 30 Jun 2013, Andrey Korolyov wrote:
>> Recently I had an issue with an OSD process with a dying disk under it -
>> the disk suddenly started doing cluster remapping, so the OSD was stale for a
>> couple of minutes. Unfortunately flapping prevention was
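For context, the usual manual way to keep a sick OSD from flapping while the
disk is dealt with is the cluster-wide noup/nodown flags (shown here as an
illustration, not as a fix for the behaviour being reported):

ceph osd set nodown     # stop the monitors from marking OSDs down
ceph osd set noup       # optionally also stop them from coming back up
# ... replace or remove the failing disk/OSD ...
ceph osd unset nodown
ceph osd unset noup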