Hi Eric,

You can increase the number of placement groups (PGs) in your pool in two steps:
Step 1: ceph osd pool set <poolname> pg_num <newvalue>
Step 2: ceph osd pool set <poolname> pgp_num <newvalue>
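
For example, to bring the Ray pool up to 256 PGs (256 is only an illustrative target; pick a value based on your OSD count and replica size, ideally a power of two), you would run:

  ceph osd pool set Ray pg_num 256
  ceph osd pool set Ray pgp_num 256

pg_num has to be raised first, since pgp_num can never exceed pg_num; raising pgp_num to match is what actually triggers rebalancing of data onto the new PGs.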

You can check the number of PGs in your pool with ceph osd dump | grep ^pool
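
The relevant line of that output looks roughly like the following (the values here are illustrative, not taken from your cluster):

  pool 17 'Ray' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1200 flags hashpspool stripe_width 0

The pg_num and pgp_num fields are what you want to compare before and after the change.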

See the documentation: http://ceph.com/docs/master/rados/operations/pools/

JC



On Jun 11, 2014, at 12:59, Eric Eastman <eri...@aol.com> wrote:

> Hi,
> 
> I am seeing the following warning on one of my test clusters:
> 
> # ceph health detail
> HEALTH_WARN pool Ray has too few pgs
> pool Ray objects per pg (24) is more than 12 times cluster average (2)
> 
> This is a reported issue and is set to "Won't Fix" at:
> http://tracker.ceph.com/issues/8103
> 
> My test cluster has a mix of test data, and the pool showing the warning is 
> used for RBD Images.
> 
> 
> # ceph df detail
> GLOBAL:
>     SIZE      AVAIL     RAW USED     %RAW USED     OBJECTS
>     1009G     513G      496G         49.14         33396
> POOLS:
>     NAME                   ID     CATEGORY     USED       %USED     OBJECTS     DIRTY     READ       WRITE
>     data                   0      -            0          0         0           0         0          0
>     metadata               1      -            0          0         0           0         0          0
>     rbd                    2      -            0          0         0           0         0          0
>     iscsi                  3      -            847M       0.08      241         211       11839k     10655k
>     cinder                 4      -            305M       0.03      53          2         51579      31584
>     glance                 5      -            65653M     6.35      8222        7         512k       10405
>     .users.swift           7      -            0          0         0           0         0          4
>     .rgw.root              8      -            1045       0         4           4         23         5
>     .rgw.control           9      -            0          0         8           8         0          0
>     .rgw                   10     -            252        0         2           2         3          11
>     .rgw.gc                11     -            0          0         32          32        4958       3328
>     .users.uid             12     -            575        0         3           3         70         23
>     .users                 13     -            9          0         1           1         0          9
>     .users.email           14     -            0          0         0           0         0          0
>     .rgw.buckets           15     -            0          0         0           0         0          0
>     .rgw.buckets.index     16     -            0          0         1           1         1          1
>     Ray                    17     -            99290M     9.61      24829       24829     0          0
> 
> 
> It would be nice if we could turn off this message.
> 
> Eric
> 
