Hi.

I am just curious. This is just a lab environment and we are short on hardware :). We will have more hardware later, but right now this is all I have. The monitors are VMs.

Anyway, we will have to survive with this somehow :).

Thanks
Jiri

On 20/01/2015 15:33, Lindsay Mathieson wrote:


On 20 January 2015 at 14:10, Jiri Kanicky <j...@ganomi.com> wrote:

    Hi,

    BTW, is there a way to achieve redundancy over multiple OSDs
    in one box by changing the CRUSH map?



I asked that same question myself a few weeks back :)

The answer was yes - but it's fiddly, and why would you do that?

It's kind of defeating the purpose of Ceph, which is large amounts of data stored redundantly over multiple nodes.
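
If you really wanted to go down that path, the rough idea (from memory, untested - treat it as a sketch, and the rule name and file names are just placeholders) is to decompile the CRUSH map, add a rule that chooses leaves of type osd rather than host, and point your pool at it:

ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt

# then add something like this to crushmap.txt:
rule replicate-by-osd {
        ruleset 1
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type osd
        step emit
}

crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new
ceph osd pool set <poolname> crush_ruleset 1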

Perhaps you should re-examine your requirements. If what you want is data stored redundantly on the hard disks of one node, you would probably be better served by a ZFS raid setup. With just one node it would be simpler and more flexible, with better performance as well.
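
Something like this, for example (pool name, disks and raidz level are just placeholders - adjust to what you have):

zpool create tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde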

Alternatively, could you put some OSDs on your monitor nodes? What spec are they?



    Thank you
    Jiri


    On 20/01/2015 13:37, Jiri Kanicky wrote:
    Hi,

    Thanks for the reply. That clarifies it. I thought that
    redundancy could be achieved with multiple OSDs (like multiple
    disks in RAID) when you don't have more nodes. Obviously the
    single point of failure would be the box.

    My current setting is:
    osd_pool_default_size = 2

    Thank you
    Jiri


    On 20/01/2015 13:13, Lindsay Mathieson wrote:
    You only have one OSD node (ceph4). The default replication
    requirement for your pools (size = 3) needs OSDs spread over
    three nodes, so the data can be replicated on three different
    nodes. That is why your PGs are degraded.

    You need to either add more OSD nodes or reduce your size
    setting down to the number of OSD nodes you have.

    Setting your size to 1 would be a bad idea; there would be no
    redundancy in your data at all. Losing one disk would destroy
    all your data.

    The command to see your pool size is:

    sudo ceph osd pool get <poolname> size

    Assuming the default setup:

    ceph osd pool get rbd size
    returns: 3
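
    and the matching command to change it (substitute your own pool
    name and size):

    ceph osd pool set <poolname> size <n>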

    On 20 January 2015 at 10:51, Jiri Kanicky <j...@ganomi.com> wrote:

        Hi,

        I would just like to clarify whether I should expect degraded
        PGs with 11 OSDs in one node. I am not sure if a setup with
        3 MON nodes and 1 OSD node (11 disks) allows me to have a
        healthy cluster.

        $ sudo ceph osd pool create test 512
        pool 'test' created

        $ sudo ceph status
            cluster 4e77327a-118d-450d-ab69-455df6458cd4
             health HEALTH_WARN 512 pgs degraded; 512 pgs stuck
        unclean; 512 pgs undersized
             monmap e1: 3 mons at {ceph1=172.16.41.31:6789/0,ceph2=172.16.41.32:6789/0,ceph3=172.16.41.33:6789/0}, election epoch 36, quorum 0,1,2 ceph1,ceph2,ceph3
             osdmap e190: 11 osds: 11 up, 11 in
              pgmap v342: 512 pgs, 1 pools, 0 bytes data, 0 objects
                    53724 kB used, 9709 GB / 9720 GB avail
                         512 active+undersized+degraded

        $ sudo ceph osd tree
        # id    weight  type name       up/down reweight
        -1      9.45    root default
        -2      9.45            host ceph4
        0       0.45                    osd.0  up      1
        1       0.9                     osd.1  up      1
        2       0.9                     osd.2  up      1
        3       0.9                     osd.3  up      1
        4       0.9                     osd.4  up      1
        5       0.9                     osd.5  up      1
        6       0.9                     osd.6  up      1
        7       0.9                     osd.7  up      1
        8       0.9                     osd.8  up      1
        9       0.9                     osd.9  up      1
        10      0.9                     osd.10 up      1


        Thank you,
        Jiri




-- Lindsay





--
Lindsay

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
