On 3/23/19 12:20 AM, Mazzystr wrote:
inline...

On Fri, Mar 22, 2019 at 1:08 PM Konstantin Shalygin <k0...@k0ste.ru> wrote:

    On 3/22/19 11:57 PM, Mazzystr wrote:
    > I am also seeing BlueFS spill since updating to Nautilus. I also see
    > high slow_used_bytes and slow_total_bytes metrics. It sure looks to
    > me that the only solution is to zap and rebuild the osd. I had to
    > manually check 36 osds, some of them traditional processes and some
    > containerized. The lack of tooling here is underwhelming... As soon
    > as I rebuilt the osd the "BlueFS spill..." warning went away.
    >
    > I use 50 GB db partitions on an nvme with 3 or 6 TB spinning disks. I
    > don't understand the spillover.
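
(An aside on the manual per-OSD check described above: it can be scripted. Below is a minimal sketch, not the poster's actual procedure, assuming the ceph CLI is installed on the OSD host and the OSD admin sockets are reachable locally; the osd_ids list is a hypothetical example. The counters come from `ceph daemon osd.N perf dump bluefs`.)

    #!/usr/bin/env python3
    # Minimal sketch: report BlueFS slow-device usage for OSDs on this host.
    # Assumes the ceph CLI is installed and the OSD admin sockets are
    # reachable locally; the osd_ids list is a hypothetical example.
    import json
    import subprocess

    osd_ids = [0, 1, 2]  # replace with the OSD ids running on this host

    for osd in osd_ids:
        raw = subprocess.check_output(
            ["ceph", "daemon", "osd.%d" % osd, "perf", "dump", "bluefs"])
        bluefs = json.loads(raw)["bluefs"]
        used = bluefs["slow_used_bytes"]
        total = bluefs["slow_total_bytes"]
        if used > 0:
            print("osd.%d: BlueFS spillover, %d of %d bytes on slow device"
                  % (osd, used, total))
        else:
            print("osd.%d: no spillover" % osd)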

    Wow, that's something new. What is your upgrade path?


I keep current with the community releases. All OSDs have been rebuilt as of Luminous.

    Also, do you record cluster metrics, e.g. via prometheus? To see the diff between upgrades.

Unfortunately not. I've only had prometheus running for about two weeks, aaaand I had it turned off for a couple of days for some unknown reason... :/

That's a shame, because it would be good to see how the metrics behave on a graph.
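
(Once prometheus is collecting again, something like the sketch below can pull the BlueFS spillover gauges straight from the mgr exporter. It assumes the mgr prometheus module is enabled, i.e. `ceph mgr module enable prometheus`, on the default port 9283, and that the bluefs perf counters are exported; the host name and the metric name ceph_bluefs_slow_used_bytes are assumptions and may vary by release.)

    #!/usr/bin/env python3
    # Minimal sketch: grep the BlueFS spillover gauges from the ceph-mgr
    # prometheus endpoint (default port 9283). The host and the metric name
    # ceph_bluefs_slow_used_bytes are assumptions.
    import urllib.request

    MGR_URL = "http://mgr-host:9283/metrics"  # hypothetical mgr address

    body = urllib.request.urlopen(MGR_URL).read().decode()
    for line in body.splitlines():
        if line.startswith("ceph_bluefs_slow_used_bytes"):
            print(line)  # one sample per OSD daemon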



k
