On Wednesday, May 7, 2014 at 20:28, *sm1Ly wrote:
>
> [sm1ly@salt1 ceph]$ sudo ceph -s
> cluster 0b2c9c20-985a-4a39-af8e-ef2325234744
> health HEALTH_WARN 19 pgs degraded; 192 pgs stuck unclean; recovery
> 21/42 objects degraded (50.000%); too few pgs per osd (16 < min 20)
>
You might want to increase the number of placement groups on your pools: the warning shows only 16 PGs per OSD, below the minimum of 20, so raising pg_num (and pgp_num) should clear that part of the HEALTH_WARN.
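As a rough sketch of where a target pg_num comes from, the commonly cited guideline of the time was about 100 PGs per OSD divided by the replica count, rounded up to a power of two. The OSD count and replica size below are assumptions inferred from the quoted status (192 PGs at 16 per OSD implies 12 OSDs; size 2 was a common default), not values confirmed in the thread:

```shell
# Assumed values -- check "ceph osd ls | wc -l" and your pool size setting.
num_osds=12     # 192 PGs / 16 PGs-per-OSD from the warning above
replicas=2      # assumed pool replication size
target=$(( num_osds * 100 / replicas ))

# Round up to the next power of two, as the PG guideline suggests.
pg_num=1
while [ "$pg_num" -lt "$target" ]; do
    pg_num=$(( pg_num * 2 ))
done
echo "$pg_num"
```

You would then apply the result per pool with `ceph osd pool set <pool> pg_num <n>` followed by the matching `pgp_num` change.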
On 2014.05.07 20:28, *sm1Ly wrote:
> I deployed my cluster with these commands:
>
> mkdir "clustername"
>
> cd "clustername"
>
> ceph-deploy install mon1 mon2 mon3 mds1 mds2 mds3 osd200
>
> ceph-deploy new mon1 mon2 mon3
>
> ceph-deploy mon create mon1 mon2 mon3
>
> ceph-deploy gatherkeys mon1 mon2 mon3
>
> ceph-deploy osd prepare --fs-type ext4 osd20
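One thing the command list above stops short of: with the ceph-deploy of that era, an OSD prepared with `osd prepare` still has to be activated before it joins the cluster. A hedged sketch of the remaining step follows; the `host:path` argument is a hypothetical example, since the thread does not show the actual disk or directory used:

```shell
# Sketch only: /var/local/osd0 is a made-up example path -- substitute
# the same host:path you passed (or meant to pass) to "osd prepare".
ceph-deploy osd activate osd200:/var/local/osd0
```

With only a single OSD host and a replication size of 2, PGs will also stay degraded regardless of PG count, since the second copy of each object has nowhere to go.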