On Mon, Aug 13, 2018 at 5:57 PM Nikola Ciprich wrote:
Hi Ilya,

hmm, OK, I'm not sure now whether this is the bug I'm experiencing..
I've had the read_partial_message / bad crc/signature problem occur on
the second cluster within a short period, even though we've been on the
same ceph version (12.2.5) for quite a long time (almost since its
release), so it'
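To rule out a simple version mismatch first, it is worth confirming what both sides are actually running; a quick sketch, assuming a luminous or newer cluster (where ceph versions is available) and that the errors come from a kernel rbd client:

# on any node with an admin keyring: per-daemon versions across the cluster
ceph versions

# on the client that logs the read_partial_message / bad crc/signature errors
uname -r
rbd showmapped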
On Mon, Aug 13, 2018 at 2:49 PM Nikola Ciprich wrote:
Hi Paul,
thanks, I'll give it a try.. do you think this might head upstream soon?
For some reason I can't view the review comments for this patch on github..
Is a new version of this patch on the way, or can I try to apply this one
to latest luminous?
thanks a lot!
nik
I've built a work-around here:
https://github.com/ceph/ceph/pull/23273
Paul
2018-08-10 12:51 GMT+02:00 Nikola Ciprich :
> Hi,
>
> did this ever come to a conclusion? I've recently started seeing
> those messages on one luminous cluster and am not sure whether
> those are dangerous or not..
On Thu, Oct 5, 2017 at 6:05 PM, Olivier Bonvalet wrote:
> On Thursday, 5 October 2017 at 17:03 +0200, Ilya Dryomov wrote:
>> When did you start seeing these errors? Can you correlate that to
>> a ceph or kernel upgrade? If not, and if you don't see other issues,
>> I'd write it off as faulty hardware.
On Thu, Oct 5, 2017 at 12:01 PM, Olivier Bonvalet wrote:
> On Thursday, 5 October 2017 at 11:47 +0200, Ilya Dryomov wrote:
>> The stable pages bug manifests as multiple sporadic connection resets,
>> because in that case CRCs computed by the kernel don't always match
>> the data that gets sent
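A quick way to check whether stable pages are even in play for a given mapped image is to look at the backing device info in sysfs; a rough sketch, assuming the image is mapped as rbd0 and that the running kernel exposes the stable_pages_required attribute:

# which images are mapped, and under which device names
rbd showmapped

# 1 means the block layer asks for stable pages, i.e. page contents must
# not change between the CRC being computed and the write going out
cat /sys/block/rbd0/bdi/stable_pages_required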
On Thu, Oct 5, 2017 at 9:03 AM, Olivier Bonvalet wrote:
> I also see that, but on 4.9.52 and 4.13.3 kernels.
>
> I also have some kernel panics, but I don't know if they're related (the
> RBDs are mapped on Xen hosts).
Do you have that panic message?
Do you use rbd devices for something other than Xen? If
On Thursday, 5 October 2017 at 5:45 AM, Jason Dillaman wrote:
Perhaps this is related to a known issue on some 4.4 and later kernels
[1] where the stable write flag was not preserved by the kernel?
[1] http://tracker.ceph.com/issues/19275
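To see whether a client falls into that affected range, checking the kernel version and how the errors show up in its log is usually enough; a sketch only, since the exact log wording can differ between kernel versions:

# kernel on the host doing the rbd map (the issue was reported against 4.4+)
uname -r

# the kernel client logs the CRC mismatches via libceph
dmesg -T | grep -i 'bad crc'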
On Wed, Oct 4, 2017 at 2:36 PM, Gregory Farnum wrote:
That message indicates that the checksums of messages between your kernel
client and OSD are incorrect. It could be actual physical transmission
errors, but if you don't see other issues then this isn't fatal; they can
recover from it.
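So before treating the messages as more than noise, it helps to confirm that the cluster stays healthy and to get a feel for how often the resets actually happen; a small sketch, run on a monitor node and on the affected client respectively:

# cluster side: should still report HEALTH_OK if the messages are benign
ceph -s

# client side: rough count plus the most recent occurrences
dmesg -T | grep -ic 'bad crc'
dmesg -T | grep -i 'bad crc' | tail -n 5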
On Wed, Oct 4, 2017 at 8:52 AM Josy wrote:
Hi,
We have set up a cluster with 8 OSD servers (31 disks).
Ceph health is Ok.
--
[root@las1-1-44 ~]# ceph -s
  cluster:
    id:     de296604-d85c-46ab-a3af-add3367f0e6d
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph-las-mon-a1,ceph-las-mon-a2,ceph-las-mon-a3
    mg