Hi,
Two days ago I added a new OSD to one of my Ceph machines, because one
of the existing OSDs got rather full. There was quite a difference in
disk space usage between the OSDs, but I understand this is just how
Ceph works: it spreads data over the OSDs, but not perfectly evenly.
Now check out
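For anyone watching the same imbalance, a rough sketch of how to eyeball per-OSD usage from Python, assuming "ceph osd df --format json" is available (Hammer and later) and that its JSON carries a "nodes" list with "name" and "utilization" fields; adjust the field names if your version formats the output differently:

    #!/usr/bin/env python
    # Sketch: print per-OSD utilization so imbalances are easy to spot.
    # Assumes the ceph CLI is on PATH and the client keyring is readable.
    import json
    import subprocess

    out = subprocess.check_output(["ceph", "osd", "df", "--format", "json"])
    report = json.loads(out)

    for node in report.get("nodes", []):
        print("%-12s %6.2f%% used" % (node["name"], node["utilization"]))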
Hi all,
I want to know if someone has deployed a New Relic (Python) plugin for
Ceph.
Thanks a lot,
Best regards,
*Ger*
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
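I don't know of an existing plugin, but a minimal sketch of what the collection side could look like, using the python-rados bindings; the "Component/..." metric names are only placeholders, and the actual POST to New Relic's plugin API is left out since that depends on their agent SDK:

    # Sketch: gather cluster-wide stats that a New Relic plugin could forward.
    # Assumes /etc/ceph/ceph.conf and a readable client keyring.
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    try:
        stats = cluster.get_cluster_stats()  # kb, kb_used, kb_avail, num_objects
        metrics = {
            "Component/Ceph/Used[kb]": stats["kb_used"],        # placeholder names
            "Component/Ceph/Available[kb]": stats["kb_avail"],
            "Component/Ceph/Objects[count]": stats["num_objects"],
        }
        print(metrics)  # a real plugin would send these to the New Relic API
    finally:
        cluster.shutdown()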
Hi,
Sorry for my late answer.
Gregory Farnum wrote:
>> 1. Is this kind of freeze normal? Can I avoid these freezes with a
>> more recent version of the kernel in the client?
>
> Yes, it's normal. Although you should have been able to do a lazy
> and/or force umount. :)
Ah, I hadn't tried that.
John Spray wrote:
> Greg's response is pretty comprehensive, but for completeness I'll add that
> the specific case of shutdown blocking is http://tracker.ceph.com/issues/9477
Yes indeed, during the freeze, "INFO: task sync:3132 blocked for more than 120
seconds..." was exactly the message I had.
Hi,
Wido den Hollander wrote:
> Aren't snapshots something that should protect you against removal? IF
> snapshots work properly in CephFS you could create a snapshot every hour.
Are you talking about the .snap/ directory inside a CephFS directory?
If yes, does it work well? Because, with Hammer, i
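For reference, the hourly-snapshot idea boils down to creating a directory under the hidden .snap directory; removing it drops the snapshot again. A small sketch, assuming CephFS is mounted at /mnt/cephfs (hypothetical path) and that snapshots are enabled on your MDS (they were still considered experimental around Hammer):

    # Sketch: take a timestamped CephFS snapshot of the whole mount.
    import os
    import time

    MOUNT = "/mnt/cephfs"  # assumed mount point
    name = time.strftime("hourly-%Y%m%d-%H%M")
    os.mkdir(os.path.join(MOUNT, ".snap", name))  # mkdir in .snap == snapshot
    print("created snapshot", name)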
Hello everyone. I’m seeing something odd here.
I have a PG that has been doing a deep scrub for 3 days.
Other PGs start scrubbing and finish within a minute or two, but this PG just
will not finish scrubbing at all. Any ideas as to how I can kick the scrub or
nudge it into finishing?
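Not an answer, but the usual first nudges are to look at the PG's own view of its scrub state and then re-issue the deep scrub (restarting the PG's primary OSD is the heavier option). A sketch of those two steps from Python; the PG id 3.1a is purely a placeholder:

    # Sketch: inspect a PG's state, then ask its primary to deep-scrub again.
    import json
    import subprocess

    PGID = "3.1a"  # placeholder PG id

    query = json.loads(subprocess.check_output(["ceph", "pg", PGID, "query"]))
    print(query.get("state"))

    subprocess.check_call(["ceph", "pg", "deep-scrub", PGID])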