Hi all,


I would like to know the best configuration for the journal devices. For example, I have:


ceph-node01:


OSDs:


73GB (x10):


/dev/sda
/dev/sdb
/dev/sdc
/dev/sdd
/dev/sde
/dev/sdf
/dev/sdg
/dev/sdh
/dev/sdi
/dev/sdj


500GB (x2):


/dev/sdk
/dev/sdl


Journals (x2):


73GB:


/dev/sdm
/dev/sdn




What would be the best configuration?


Option 1) Use each journal disk whole, without creating a partition per OSD:


   ceph-deploy osd prepare ceph-node01:sda:/dev/sdm
   ceph-deploy osd prepare ceph-node01:sdb:/dev/sdm
   ceph-deploy osd prepare ceph-node01:sdc:/dev/sdm
   ceph-deploy osd prepare ceph-node01:sdd:/dev/sdm
   ceph-deploy osd prepare ceph-node01:sde:/dev/sdm
   ceph-deploy osd prepare ceph-node01:sdk:/dev/sdm


   ceph-deploy osd prepare ceph-node01:sdf:/dev/sdn
   ceph-deploy osd prepare ceph-node01:sdg:/dev/sdn
   ceph-deploy osd prepare ceph-node01:sdh:/dev/sdn
   ceph-deploy osd prepare ceph-node01:sdi:/dev/sdn
   ceph-deploy osd prepare ceph-node01:sdj:/dev/sdn
   ceph-deploy osd prepare ceph-node01:sdl:/dev/sdn


Option 2) Create 6 partitions of 12GB on each journal disk (see the partitioning sketch after the command list below):



   ceph-deploy osd prepare ceph-node01:sda:/dev/sdm1
   ceph-deploy osd prepare ceph-node01:sdb:/dev/sdm2
   ceph-deploy osd prepare ceph-node01:sdc:/dev/sdm3
   ceph-deploy osd prepare ceph-node01:sdd:/dev/sdm4
   ceph-deploy osd prepare ceph-node01:sde:/dev/sdm5
   ceph-deploy osd prepare ceph-node01:sdk:/dev/sdm6


   ceph-deploy osd prepare ceph-node01:sdf:/dev/sdn1
   ceph-deploy osd prepare ceph-node01:sdg:/dev/sdn2
   ceph-deploy osd prepare ceph-node01:sdh:/dev/sdn3
   ceph-deploy osd prepare ceph-node01:sdi:/dev/sdn4
   ceph-deploy osd prepare ceph-node01:sdj:/dev/sdn5
   ceph-deploy osd prepare ceph-node01:sdl:/dev/sdn6
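

For reference, the partitioning I have in mind for Option 2 would be roughly the following (just a sketch using sgdisk; the 12GB size per partition and wiping the disks with a fresh GPT label are my own assumptions, not something I have tested yet):

   # Sketch only: carve six 12GB journal partitions on each 73GB journal disk.
   # Assumes sgdisk (from the gdisk package) and that both disks can be wiped.
   for disk in /dev/sdm /dev/sdn; do
       sgdisk --zap-all "$disk"              # wipe any existing partition table
       for i in 1 2 3 4 5 6; do
           sgdisk --new="$i":0:+12G "$disk"  # partition $i, 12 GiB, default start
       done
   done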


Option 3) Any other ideas or recommendations? Also, is it better to partition the journal disks and format them with XFS?


Thanks in advance,


Best regards,



German Anders


_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
