Hi all,
As far as I know, BlueStore has a label which can be shown by
"ceph-bluestore-tool show-label --dev /dev/ol/osd0"
{
"/dev/ol/osd0": {
"osd_uuid": "e8a4422f-4f01-4d7a-9b3c-65ff25101d8b",
"size": 5368709120,
"btime": "2022-12-22T13:29:52.351106+0800",
"d
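For scripting, individual fields of that label JSON can be pulled out, for example with jq (a minimal sketch; it assumes jq is installed and reuses the device path from the example above):

# Print the osd_uuid recorded in the BlueStore label (sketch, assumes jq).
DEV=/dev/ol/osd0
ceph-bluestore-tool show-label --dev "$DEV" | jq -r '.[].osd_uuid'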
This was enough.
root 156091 0.2 0.2 619356 100816 ? S+ 22:39 0:00
rbd-nbd --device /dev/nbd0 map deadlock
root 156103 0.1 0.2 1463720 89992 ? Dl+ 22:39 0:00
rbd-nbd --device /dev/nbd0 map deadlock
root 156121 0.3 0.2 116528 97020 ? D+ 22:39 0:00
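To spot lingering rbd-nbd processes stuck like this, a one-liner along these lines can help (a sketch; it simply filters the ps output for rbd-nbd processes whose state starts with D, i.e. uninterruptible sleep):

# List rbd-nbd processes currently in uninterruptible sleep (D state).
ps -eo pid,stat,args | awk '$2 ~ /^D/ && /rbd-nbd/'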
None of the Ceph deploy methods seem to support Rocky 9.
What is the official word? Is it worth spending time to get it working, or should we just
fall back to the supported platforms shown on the Ceph page?
OS Recommendations — Ceph Documentation
Thanks, Fred.
That should obviously be
unmap()
{
rbd-nbd unmap
}
trap unmap EXIT
On Wed, Dec 21, 2022 at 10:32 PM Josef Johansson wrote:
>
> Right, I actually ended up deadlocking rbd-nbd, that's why I switched
> over to rbd-replay.
> The flow was
>
> rbd-nbd map &
> unmap()
> {
> rbd-nbd unmap
> }
> while
Right, I actually ended up deadlocking rbd-nbd, that's why I switched
over to rbd-replay.
The flow was
rbd-nbd map &
unmap()
{
rbd-nbd unmap
}
while true; do
lsblk --noempty /dev/nbd0
r=$?
[ $r -eq 32 ] && continue
[ $r -eq 0 ] && break
done
dd if=/dev/random of=/dev/nbd0 bs=4096 count=1
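Folding in the correction above (trap unmap EXIT), the whole test flow could look roughly like this (a sketch; the image name "deadlock" and the device /dev/nbd0 are taken from the ps output earlier in the thread and may differ in other setups):

#!/bin/bash
# Map the image in the background; device and image name are assumptions here.
rbd-nbd --device /dev/nbd0 map deadlock &

unmap()
{
    # Unmapping by device path; unmapping by image spec also works.
    rbd-nbd unmap /dev/nbd0
}
# Per the correction: unmap no matter how the script exits.
trap unmap EXIT

# Wait until the nbd device is actually usable: in this flow lsblk
# returns 32 while the device is not ready yet and 0 once it is.
while true; do
    lsblk --noempty /dev/nbd0
    r=$?
    [ $r -eq 32 ] && continue
    [ $r -eq 0 ] && break
done

# Write a single 4 KiB block through the mapping.
dd if=/dev/random of=/dev/nbd0 bs=4096 count=1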
Thanks, I'll take a look at that. For reference, the deadlock we are
seeing looks similar to the one described at the bottom of this issue:
https://tracker.ceph.com/issues/52088
thanks
sam
On Wed, Dec 21, 2022 at 4:04 PM Josef Johansson wrote:
> Hi,
>
> I made some progress with my testing on
Hi,
I made some progress with my testing on a similar issue. Maybe the test
will be easy to adapt to your case.
https://tracker.ceph.com/issues/57396
What I can say though is that I don't see the deadlock problem in my
testing.
Cheers
-Josef
On Wed, 21 Dec 2022 at 22:00, Sam Perman wrote:
>
Hello!
I'm trying to chase down a deadlock we occasionally see on the client side
when using rbd-nbd and have a question about a lingering process we are
seeing.
I have a simple test script that will execute the following in order:
* use rbd to create a new image
* use rbd-nbd to map the image l
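For the first two steps, a minimal sketch might look like this (pool name, image name and size are placeholders, not the ones from the actual test):

# Create a small test image (names and size are illustrative).
rbd create --size 1G rbd/deadlock-test
# Map it with rbd-nbd; the command prints the nbd device it attached to.
rbd-nbd map rbd/deadlock-test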
Hi Eugen,
thanks! I think this explains our observation.
Thanks and merry Christmas!
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
From: Eugen Block
Sent: 21 December 2022 14:03:06
To: ceph-users@ceph.io
Subject: [ceph-users] Re:
Hey all!
I am trying to enable the diskprediction_local mgr module to start
getting predictive failure messages for drives in our Ceph cluster.
I am following ===> https://docs.ceph.com/en/octopus/mgr/diskprediction/
In our Ubuntu 20.04 deployed Octopus Cluster (Running Ceph 15.2.17), I
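For reference, the steps in that document boil down to roughly the following (a sketch based on the linked Octopus docs; <devid> is a placeholder for an entry from "ceph device ls"):

# Enable the local disk failure prediction module on the mgr.
ceph mgr module enable diskprediction_local
# Use the local predictor for failure prediction.
ceph config set global device_failure_prediction_mode local
# Make sure device health metrics are being collected.
ceph device monitoring on
# Inspect the collected metrics for one device.
ceph device get-health-metrics <devid>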
Hi Frank,
I asked the same question 4 years ago [1]. Basically, Greg's response was:
So, this is actually just noisy logging from the client processing
an OSDMap. That should probably be turned down, as it's not really
an indicator of...anything...as far as I can tell.
IIRC clients sometimes
Hi all,
on CephFS kernel clients we see a lot of messages like these in bursts:
...
[Mon Dec 19 09:43:15 2022] libceph: osd1258 weight 0x1 (in)
[Mon Dec 19 09:43:15 2022] libceph: osd1258 up
[Mon Dec 19 09:43:15 2022] libceph: osd1259 weight 0x1 (in)
[Mon Dec 19 09:43:15 2022] libceph
I tried another setup: I installed Ceph Quincy manually (via apt packages).
It works fine with ms_type = async+posix, but it does not work with the
RDMA setting, and I get the following error logs.
#Case1: ms_type=async+rdma in /etc/ceph/ceph.conf
A client error occurred
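For comparison, a minimal RDMA messenger section in /etc/ceph/ceph.conf typically looks something like this (a sketch; the device name mlx5_0 is an assumption and must match the local HCA, e.g. as reported by ibv_devices):

[global]
# Use the RDMA-backed async messenger.
ms_type = async+rdma
# RDMA device to bind to; mlx5_0 is an assumption, adjust to the local HCA.
ms_async_rdma_device_name = mlx5_0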