You've either modified the crushmap, or changed the pool size to 1.  The
defaults create 3 replicas on different hosts.

What does `ceph osd dump | grep ^pool` output?  If the size param is 1,
then you reduced the replica count.  If the size param is > 1, you must've
adjusted the crushmap.
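
For reference, the output should look roughly like this (pool name, IDs, and
the other fields will of course differ on your cluster; this is just an
illustration):

    pool 0 'rbd' replicated size 3 min_size 2 crush_ruleset 0 ...

The "size" field is the replica count, and "crush_ruleset" tells you which
CRUSH rule the pool is using.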

Either way, right after you add the second node is the ideal time to change
that back to the default.
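
If it's the pool size, putting it back is a couple of pool settings (the pool
name here is just an example, and you'd repeat it for each pool):

    ceph osd pool set rbd size 3
    ceph osd pool set rbd min_size 2

If it's the crushmap, the usual route is to pull it out, flip the chooseleaf
step back from "type osd" to "type host", and inject it again, roughly:

    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt
    # edit crushmap.txt: step chooseleaf firstn 0 type osd -> type host
    crushtool -c crushmap.txt -o crushmap.new
    ceph osd setcrushmap -i crushmap.new

Either change will trigger data movement, so do it once the new node's OSDs
are in and the cluster is healthy.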


Given that you only have 40 GB of data in the cluster, you shouldn't have a
problem adding the second node.


On Fri, Jan 23, 2015 at 3:58 PM, Georgios Dimitrakakis <gior...@acmac.uoc.gr>
wrote:

> Hi Craig!
>
>
> For the moment I have only one node with 10 OSDs.
> I want to add a second one with 10 more OSDs.
>
> Each OSD in every node is a 4TB SATA drive. No SSD disks!
>
> The data are approximately 40 GB, and I will do my best to have zero
> or at least very low load during the expansion process.
>
> To be honest I haven't touched the crushmap. I wasn't aware that I
> should have changed it, so it is still the default one. Is that OK?
> Where can I read about host-level replication in the CRUSH map, to make
> sure that it's applied, and how can I find out whether it is already
> enabled?
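
A quick way to check, just as a sketch: dump the CRUSH rules and look at the
chooseleaf step of the rule your pools use:

    ceph osd crush rule dump

If that step says "type": "host", the failure domain is the host, which is
the default; if it says "type": "osd", replication is per OSD.  You can also
run `ceph osd tree` to see the host/OSD hierarchy the rule works against,
and the CRUSH map chapter of the Ceph documentation covers the details.
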
>
> Any other things that I should be aware of?
>
> All the best,
>
>
> George
>
>
>> It depends.  There are a lot of variables: how many nodes and disks you
>> currently have, whether you are using journals on SSDs, how much data is
>> already in the cluster, and what the client load on the cluster is.
>>
>> Since you only have 40 GB in the cluster, it shouldn't take long to
>> backfill.  You may find that it finishes backfilling faster than you
>> can format the new disks.
>>
>> Since you only have a single OSD node, you must've changed the crushmap
>> to allow replication over OSDs instead of hosts.  After you get the new
>> node in would be the best time to switch back to host-level replication.
>> The more data you have, the more painful that change will become.
>>
>> On Sun, Jan 18, 2015 at 10:09 AM, Georgios Dimitrakakis  wrote:
>>
>>> Hi Jiri,
>>>
>>> thanks for the feedback.
>>>
>>> My main concern is whether it's better to add each OSD one by one and
>>> wait for the cluster to rebalance every time, or to add them all at
>>> once.
>>>
>>> Furthermore, an estimate of the time to rebalance would be great!
>>>
>>> Regards,
>>>
>
> --
>
