Thanks Greg,
Yes, I'm using CephFS and RGW (mainly CephFS).
The files are still accessible and users haven't reported any problems.
Here are the outputs of ceph -s and ceph versions:

ceph -s
  cluster:
    id:     <cluster-id>
    health: HEALTH_OK

  services:
    mon: 5 daemons, quorum ceph-mon01,ceph-mon02,ceph-mon03,ceph-mon04,ceph-mon05
    mgr: ceph-mon04(active), standbys: ceph-mon02, ceph-mon05, ceph-mon03, ceph-mon01
    mds: cephfs01-1/1/1 up  {0=ceph-mds03=up:active}, 3 up:standby
    osd: 4 osds: 4 up, 4 in
    rgw: 4 daemons active

  data:
    pools:   15 pools, 224 pgs
    objects: 1.54M objects, 4.01TiB
    usage:   8.03TiB used, 64.7TiB / 72.8TiB avail
    pgs:     224 active+clean
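
By the way, to reproduce the errors on demand instead of waiting for the
daily scrub, I was thinking of triggering a monitor scrub by hand. If I'm
not mistaken, on Luminous the command is still "ceph scrub" (newer releases
rename it to "ceph mon scrub"):

ceph scrub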

ceph versions
{
    "mon": {
        "ceph version 12.2.8 (ae699615bac534ea496ee965ac6192cb7e0e07c0) luminous (stable)": 5
    },
    "mgr": {
        "ceph version 12.2.8 (ae699615bac534ea496ee965ac6192cb7e0e07c0) luminous (stable)": 5
    },
    "osd": {
        "ceph version 12.2.8 (ae699615bac534ea496ee965ac6192cb7e0e07c0) luminous (stable)": 4
    },
    "mds": {
        "ceph version 12.2.8 (ae699615bac534ea496ee965ac6192cb7e0e07c0) luminous (stable)": 1
    },
    "rgw": {
        "ceph version 12.2.8 (ae699615bac534ea496ee965ac6192cb7e0e07c0) luminous (stable)": 4
    },
    "overall": {
        "ceph version 12.2.8 (ae699615bac534ea496ee965ac6192cb7e0e07c0) luminous (stable)": 19
    }
}
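
If it helps, I can also dump the FSMap as seen by each monitor and diff the
copies, to spot which mon disagrees. Just a rough sketch (it assumes the
hostnames above resolve; -m points the client at a single monitor):

for m in ceph-mon01 ceph-mon02 ceph-mon03 ceph-mon04 ceph-mon05; do
    ceph -m "$m" fs dump > /tmp/fsmap.$m   # FSMap/MDSMap as served by this mon
done
diff /tmp/fsmap.ceph-mon01 /tmp/fsmap.ceph-mon04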

Thanks for looking into it.
Marco

On Thu, Dec 6, 2018 at 11:18 PM Gregory Farnum <gfar...@redhat.com> wrote:

> Well, it looks like you have different data in the MDSMap across your
> monitors. That's not good on its face, but maybe there are extenuating
> circumstances. Do you actually use CephFS, or just RBD/RGW? What's the
> full output of "ceph -s"?
> -Greg
>
> On Thu, Dec 6, 2018 at 1:39 PM Marco Aroldi <marco.aro...@gmail.com> wrote:
> >
> > Sorry about this, I hate to bump a thread, but...
> > Has anyone faced this situation?
> > Is there a procedure to follow?
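> >
> > For example, would a full resync of the odd monitor out be the right way
> > to recover? A hypothetical sketch, not tested (it assumes mon.4 on
> > ceph-mon05 turns out to be the divergent one, the default data path under
> > /var/lib/ceph/mon/, and that ceph.conf already lists this mon's address):
> >
> > systemctl stop ceph-mon@ceph-mon05
> > ceph mon remove ceph-mon05               # drop it from the monmap
> > mv /var/lib/ceph/mon/ceph-ceph-mon05 /root/mon-ceph-mon05.bak   # backup
> > ceph mon getmap -o /tmp/monmap           # fresh monmap from the quorum
> > ceph auth get mon. -o /tmp/mon.keyring   # mon. keyring
> > ceph-mon -i ceph-mon05 --mkfs --monmap /tmp/monmap --keyring /tmp/mon.keyring
> > chown -R ceph:ceph /var/lib/ceph/mon/ceph-ceph-mon05
> > systemctl start ceph-mon@ceph-mon05      # rejoins and resyncs the store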
> >
> > Thanks
> > Marco
> >
> > On Thu, Nov 8, 2018 at 10:54 AM Marco Aroldi <marco.aro...@gmail.com> wrote:
> >>
> >> Hello,
> >> Since upgrading from Jewel to Luminous 12.2.8, the logs have reported some errors related to "scrub mismatch", every day at the same time.
> >> I have 5 mons (mon.0 to mon.4) and I need help to identify and recover from this problem.
> >>
> >> This is the log:
> >> 2018-11-07 15:13:53.808128 [ERR]  mon.4 ScrubResult(keys {logm=46,mds_health=29,mds_metadata=1,mdsmap=24} crc {logm=1239992787,mds_health=3182263811,mds_metadata=3704185590,mdsmap=1114086003})
> >> 2018-11-07 15:13:53.808095 [ERR]  mon.0 ScrubResult(keys {logm=46,mds_health=30,mds_metadata=1,mdsmap=23} crc {logm=1239992787,mds_health=1194056063,mds_metadata=3704185590,mdsmap=3259702002})
> >> 2018-11-07 15:13:53.808061 [ERR]  scrub mismatch
> >> 2018-11-07 15:13:53.808026 [ERR]  mon.3 ScrubResult(keys {logm=46,mds_health=31,mds_metadata=1,mdsmap=22} crc {logm=1239992787,mds_health=807938287,mds_metadata=3704185590,mdsmap=662277977})
> >> 2018-11-07 15:13:53.807970 [ERR]  mon.0 ScrubResult(keys {logm=46,mds_health=30,mds_metadata=1,mdsmap=23} crc {logm=1239992787,mds_health=1194056063,mds_metadata=3704185590,mdsmap=3259702002})
> >> 2018-11-07 15:13:53.807939 [ERR]  scrub mismatch
> >> 2018-11-07 15:13:53.807916 [ERR]  mon.2 ScrubResult(keys {logm=46,mds_health=31,mds_metadata=1,mdsmap=22} crc {logm=1239992787,mds_health=807938287,mds_metadata=3704185590,mdsmap=662277977})
> >> 2018-11-07 15:13:53.807882 [ERR]  mon.0 ScrubResult(keys {logm=46,mds_health=30,mds_metadata=1,mdsmap=23} crc {logm=1239992787,mds_health=1194056063,mds_metadata=3704185590,mdsmap=3259702002})
> >> 2018-11-07 15:13:53.807844 [ERR]  scrub mismatch
> >>
> >> Any help would be appreciated.
> >> Thanks
> >> Marco
> >
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
