>
> I'm just trying to debug a situation which filled my cluster/osds tonight.
>
> We are currently running a small test cluster:
>
> 3 mons
> 2 mds (active + standby)
> 2 nodes = 2x12x410G HDD/OSDs
>
> A user created a 500G rbd-volume. First I thought the 500G rbd may have
> caused the OSDs to fill, but after reading your explanations this seems
> impossible.
> I just found another 500G file created by this user in CephFS; could this
> have caused the trouble?
>
What is the current issue? Cluster near-full? Cluster too-full? Can you send
the output of ceph -s?
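
A quick way to check (run from any node with an admin keyring; the
thresholds below are the Ceph defaults):

    ceph -s
    ceph health detail

If the cluster itself is the problem, the health output should list
"near full" (85% by default) or "full" (95% by default) OSDs.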

If this is the case, you can look at the output of ceph df detail to figure
out which pool is using the disk space, and check how many PGs those pools
have. Can you send the output of ceph df detail and ceph osd dump | grep pool?
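
The exact output format varies between Ceph versions, but roughly (the pool
name below is only an example):

    # per-pool usage: the USED / %USED columns show where the space went
    ceph df detail

    # one line per pool; look for the pg_num field, e.g.
    # pool 2 'rbd' ... pg_num 64 pgp_num 64 ...
    ceph osd dump | grep pool
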
Is there anything else on these nodes taking up disk space? Like the
journals...
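
If the journals are files colocated on the OSD data disks (the FileStore
default unless you pointed them at separate partitions), something like this
on each OSD node will show them, assuming the default data path:

    # actual usage of each OSD's data filesystem
    df -h /var/lib/ceph/osd/*

    # journal size, if the journal is a file rather than a separate partition
    ls -lh /var/lib/ceph/osd/*/journal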

With that setup (and 3x replication) you should be able to store around
1-1.2T without any warnings, but that will depend on the PG distribution,
which is hard to predict...
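
The back-of-envelope math, assuming all 24 OSDs back 3x pools and the
default 0.85 near-full ratio:

    24 OSDs x 410G            ~ 9.8T raw
    / 3 (replication)         ~ 3.3T of user data
    x 0.85 (near-full ratio)  ~ 2.8T before the first warning

That ~2.8T is the perfectly balanced ideal; the warning actually fires when
the single fullest OSD crosses 85%, so with uneven PG distribution on a
small cluster the practical figure lands much lower.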


> Thanks a lot for your fast support!
>
> Fabian
>
>