Ceph has per-PG and per-OSD metadata overhead. You currently have 26000
PGs, suitable for a cluster on the order of 260 OSDs. You have placed
almost 7GB of data into it (21GB replicated) and have about 7GB of
additional overhead.
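
For reference, here is the back-of-the-envelope arithmetic as a small
Python sketch. The ~100 PGs per OSD target and the replication factor of 3
are assumptions (the usual rule of thumb), not values read from your
cluster; the MB figures are the ones quoted further down in this thread:

    # Rough sizing arithmetic behind the numbers above (assumptions noted inline).
    PGS_PER_OSD_TARGET = 100        # assumed rule-of-thumb PGs per OSD

    pg_total = 26000
    osd_scale = pg_total / PGS_PER_OSD_TARGET
    print(osd_scale)                # -> 260.0, the OSD count this PG count suits

    # Space accounting: replicated object data vs. raw space consumed.
    data_mb = 6841                  # logical data stored (from the reply below)
    used_mb = 25814                 # raw space used across all OSDs
    replicas = 3                    # assumed replication factor
    expected_mb = data_mb * replicas        # ~20523 MB for the object replicas
    overhead_mb = used_mb - expected_mb     # remainder is per-PG/per-OSD metadata
    print(expected_mb, overhead_mb)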

You might try putting a suitable amount of data into the cluster before
worrying about the ratio of space used to data stored. :)
-Greg
On Fri, Mar 27, 2015 at 3:26 AM Saverio Proto <ziopr...@gmail.com> wrote:

> > I will start now to push a lot of data into the cluster to see if the
> > "metadata" grows a lot or stays constant.
> >
> > Is there a way to clean up old metadata?
>
> I pushed a lot more data into the cluster, then let the cluster sit
> idle overnight.
>
> This morning I found these values:
>
> 6841 MB data
> 25814 MB used
>
> which is a bit more than 1 to 3.
>
> It looks like the extra space is in these folders (for N from 1 to 36):
>
> /var/lib/ceph/osd/ceph-N/current/meta/
>
> This "meta" folders have a lot of data in it. I would really be happy
> to have pointers to understand what is in there and how to clean that
> up eventually.
>
> The problem is that googling for "ceph meta" or "ceph metadata" only
> produces results about the Ceph MDS, which is completely unrelated :(
>
> thanks
>
> Saverio
>
