Hi,
I'm running ceph version 15.2.16
(a6b69e817d6c9e6f02d0a7ac3043ba9cdbda1bdf) octopus (stable), which would
mean I am not running the fix.
Glad to know that an upgrade will solve the issue!
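(For reference, a quick way to confirm what every daemon is actually running before and after the upgrade is the plain version commands below; nothing here is specific to this thread.)

# Release reported per daemon type (mon, mgr, osd, mds, rbd-mirror)
ceph versions

# Version of the locally installed binary only
ceph -v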
Kind regards
Josef Johansson
On 8/16/23 12:05, Konstantin Shalygin wrote:
Hi,
of. I restarted all ceph-mgr beforehand
as well.
Kind regards
Josef Johansson
On 10/5/21 15:37, Konstantin Shalygin wrote:
As a last resort we changed the ipaddr of this host, and the mon successfully joined the
quorum. When we reverted the ipaddr back, the mon couldn't join; we think there is something on
Thu, Dec 8, 2022 at 11:15 AM Josef Johansson wrote:
>
> Hi,
>
> Running a simple
> `echo 1>a;sync;rm a;sync;fstrim --all`
> Triggers the problem. No need to have the mount point mounted with discard.
>
> On Thu, Dec 8, 2022 at 12:33 AM Josef Johansson wrote:
> >
&& continue
[ $r -eq 0 ] && break
done
dd if=/dev/random of=/dev/nbd0 bs=4096 count=1024 oflag=sync &
sleep 0.1
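(The loop above is only a fragment; a minimal self-contained sketch of the same idea, retrying the map until it succeeds and then pushing synchronous writes at the nbd device, could look like the following. The pool/image name and /dev/nbd0 are assumptions, not taken from the original script.)

#!/bin/bash
# Hypothetical reconstruction: retry rbd-nbd map, then write synchronously.
for i in $(seq 1 10); do
    rbd-nbd map testpool/testimg
    r=$?
    [ $r -ne 0 ] && continue
    [ $r -eq 0 ] && break
done

# Synchronous writes straight at the nbd device, backgrounded as in the fragment
dd if=/dev/random of=/dev/nbd0 bs=4096 count=1024 oflag=sync &
sleep 0.1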
On Wed, Dec 21, 2022 at 10:33 PM Josef Johansson wrote:
>
> That should obviously be
> unmap()
> {
> rbd-nbd unmap
> }
> trap unmap EXIT
>
That should obviously be
unmap()
{
rbd-nbd unmap
}
trap unmap EXIT
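(The point of that correction is to guarantee the device gets unmapped whenever the test script exits; a minimal sketch of the pattern, with /dev/nbd0 and the pool/image name filled in as assumptions:)

#!/bin/bash
# Always unmap the nbd device on exit, even if the test fails part-way through.
unmap()
{
    rbd-nbd unmap /dev/nbd0
}
trap unmap EXIT

rbd-nbd map testpool/testimg
# ... test workload against /dev/nbd0 goes here ...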
On Wed, Dec 21, 2022 at 10:32 PM Josef Johansson wrote:
>
> Right, I actually ended up deadlocking rbd-nbd, that's why I switched
> over to rbd-replay.
> The flow was
>
> rbd-nbd map &
>
Sam Perman wrote:
>
> Thanks, i'll take a look at that. For reference, the deadlock we are seeing
> looks similar to the one described at the bottom of this issue:
> https://tracker.ceph.com/issues/52088
>
> thanks
> sam
>
> On Wed, Dec 21, 2022 at 4:04 PM Josef Johansson wrote:
Hi,
I made some progress with my testing on a similar issue. Maybe the test
will be easy to adapt to your case.
https://tracker.ceph.com/issues/57396
What I can say though is that I don't see the deadlock problem in my
testing.
Cheers
-Josef
On Wed, 21 Dec 2022 at 22:00, Sam Perman wrote:
>
Hi,
Running a simple
`echo 1>a;sync;rm a;sync;fstrim --all`
Triggers the problem. No need to have the mount point mounted with discard.
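(Spelled out, the reproduction is just writing and deleting a small file on a filesystem backed by the rbd-nbd device and then trimming. Everything below except the one-liner itself, i.e. the device, filesystem and mount point, is an assumption about the setup.)

# Assumed setup: a filesystem on the rbd-nbd device, mounted without discard
mkfs.xfs /dev/nbd0
mount /dev/nbd0 /mnt/test
cd /mnt/test

# The actual trigger from the report
echo 1>a;sync;rm a;sync;fstrim --all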
On Thu, Dec 8, 2022 at 12:33 AM Josef Johansson wrote:
>
> Hi,
>
> I've updated https://tracker.ceph.com/issues/57396 with some more
configs, one works, the other doesn't.
Not sure what to make of it; it seems that kernels around 4.18+ are
sending a weird discard?
On Tue, Aug 30, 2022 at 8:43 AM Josef Johansson wrote:
>
> Hi,
>
> There's nothing special in the cluster when it stops replaying. It
> seems
Hi,
No, you must stop the image on the primary site (A) and make the image on
the non-primary site (B) primary. It's possible to clone a snapshot though.
See
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/P6BHPUZEMSCK4NJY5BZSYOB5XBWVT424/
https://lists.ceph.io/hyperkitty/list/c
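(For the switchover itself the usual sequence, hedged here and with made-up pool/image names, is to demote the image on site A and promote it on site B; cloning works from a snapshot without touching the primary.)

# On the primary site (A): stop clients, then demote the image
rbd mirror image demote mypool/myimage

# On the non-primary site (B): promote it so it becomes writable there
rbd mirror image promote mypool/myimage

# Alternatively, clone from a snapshot (older clusters may require
# protecting the snapshot first with: rbd snap protect mypool/myimage@snap1)
rbd clone mypool/myimage@snap1 mypool/myclone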
Hi,
You also want to check disk_io_weighted via some kind of metrics system.
That will detect which SSDs are hogging the system, if there are any
specific ones. Also check their error levels and endurance.
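(disk_io_weighted maps to the weighted IO time counters; if no metrics system is in place, the same signal can be checked ad hoc on the host. Device names below are examples, not from the thread.)

# Per-device utilisation and wait times, sampled every 5 seconds
iostat -x 5

# Error log and endurance/wear indicators for one SSD
smartctl -a /dev/sda | grep -Ei 'error|wear|percent'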
On Fri, 7 Oct 2022 at 17:05, Stefan Kooman wrote:
> On 10/7/22 16:56, Tino Todino wrote:
Hi,
I've added as much logging as I can, still shows nothing.
On Fri, 16 Sep 2022 at 21:35, Arthur Outhenin-Chalandre <
arthur.outhenin-chalan...@cern.ch> wrote:
> Hi Josef,
>
> > On 16/09/2022 14:15 Josef Johansson wrote:
> > Are you guys affected by
> > https://tracker.ceph.com/issues/57396 ?
Hi,
Are you guys affected by
https://tracker.ceph.com/issues/57396 ?
On Fri, 16 Sep 2022 at 09:40, ronny.lippold wrote:
> hi and thanks a lot.
> good to know we're not alone and that we understood some of it right :)
>
> i will also tell you if there is something new.
>
>
> so from my point of view, the only consista
ng useful (maybe also in debug mode)?
>
> Regards,
> Eugen
>
> Zitat von Josef Johansson :
>
> > Hi,
> >
> > I'm running ceph octopus 15.2.16 and I'm trying out two-way mirroring.
> >
> > Everything seems to be running fine except sometimes when
On Mon, Aug 1, 2022 at 2:33 PM Josef Johansson wrote:
>
>
>
>
> Forwarded Message
> Subject: rbd-mirror stops replaying journal on primary cluster
> Date: Thu, 2 Jun 2022 16:29:33 +0200 (CEST)
> From: Josef Johansson
> To: ceph-users