rados approved
On Thu, Aug 24, 2023 at 12:33 AM Laura Flores wrote:
> Rados summary is here:
> https://tracker.ceph.com/projects/rados/wiki/PACIFIC#Pacific-v16214-httpstrackercephcomissues62527note-1
>
> Most are known, except for two new trackers I raised:
>
>1. https://tracker.ceph.com/iss
Hello guys,
We are seeing an unexpected tag in one of our pools. Do you guys
know what "removed_snaps_queue" means? We see some notation such as
"d5~3" after this tag. What does it mean? We tried to look in the docs,
but could not find anything meaningful.
We are running Ceph Octopus.
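For reference, this is where we see it (the pool name below is just an example):

$ ceph osd pool ls detail
# pool 5 'mypool' replicated size 3 ... removed_snaps_queue [d5~3]

Our best guess is that the entries are snapshot-ID intervals written as
start~length in hex (so d5~3 would cover three snap IDs starting at 0xd5),
but we could not confirm this from the docs.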
> Thank you for the reply,
>
> I have created two device classes, SSD and NVMe, and assigned them in the CRUSH map.
You don't have enough drives to keep them separate. Set the NVMe drives back
to "ssd" and just make one pool.
>
> $ ceph osd crush rule ls
> replicated_rule
> ssd_pool
> nvme_pool
>
>
> Runni
Hi,
one thing that comes to mind is that the device names may have changed from
/dev/sdX to /dev/sdY. Something like that has been reported a couple
of times in recent months.
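If you want to check, compare what the OSD has recorded against what the
node currently shows, e.g. (the osd id is just an example):

$ ceph osd metadata 12 | grep -E '"devices"|dev_node'
$ ceph-volume lvm list    # on the affected host, inside the cephadm shell

If the device node stored in the metadata no longer matches the actual
device, that would point in this direction.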
Quoting Alison Peisker:
Hi all,
We rebooted all the nodes in our 17.2.5 cluster after performing
kernel updates,
Hi all,
We rebooted all the nodes in our 17.2.5 cluster after performing kernel
updates, but 2 of the OSDs on different nodes are not coming back up. This is a
production cluster using cephadm.
The error message from the OSD log is ceph-osd[87340]: ** ERROR: unable to
open OSD superblock on /
Thank you for the reply,
I have created two device classes, SSD and NVMe, and assigned them in the CRUSH map.
$ ceph osd crush rule ls
replicated_rule
ssd_pool
nvme_pool
Running benchmarks, the NVMe pool is the worst performing. SSD shows much better
results compared to NVMe. The NVMe model is Samsung_SSD_980_PRO_1TB.
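In case anyone wants to reproduce the comparison, a simple like-for-like test
per pool would be something along these lines (30 seconds of writes per pool;
rados bench defaults to 4 MB objects and 16 concurrent ops):

$ rados bench -p ssd_pool 30 write
$ rados bench -p nvme_pool 30 write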
On Fri, Aug 25, 2023 at 5:26 PM Laura Flores wrote:
>
> All known issues in pacific p2p and smoke. @Ilya Dryomov
> and @Casey Bodley may want to
> double-check that the two for pacific p2p are acceptable, but they are
> known.
>
> pacific p2p:
> - TestClsRbd.mirror_snapshot failure in pacific p2
Josh, Neha,
We are looking good but are missing some clarifications (see Laura's replies
below).
If you approve 16.2.14 as is, we can start building today.
PLMK
On Fri, Aug 25, 2023 at 8:37 AM Yuri Weinstein wrote:
> Thx Laura
>
> On the issue related to the smoke suite, pls see
> https://tracker.ce
Thx Laura
On the issue related to the smoke suite, pls see
https://tracker.ceph.com/issues/62508
@Venky Shankar ^
On Fri, Aug 25, 2023 at 8:25 AM Laura Flores wrote:
> All known issues in pacific p2p and smoke. @Ilya Dryomov
> and @Casey Bodley may want to
> double-check that the two for pac
All known issues in pacific p2p and smoke. @Ilya Dryomov
and @Casey Bodley may want to
double-check that the two for pacific p2p are acceptable, but they are
known.
pacific p2p:
- TestClsRbd.mirror_snapshot failure in pacific p2p - Ceph - RBD
https://tracker.ceph.com/issues/62586
- "[ FAILED
I wanted to bring this tracker to your attention:
https://tracker.ceph.com/issues/58946
Users have recently reported experiencing this bug in v16.2.13. There is a
fix available on main, but it's currently undergoing testing. @Adam King
what are your thoughts on getting this fix into 16.2.14?
On Fri, Aug 2
Approved for rook.
For future approvals, Blaine or I could approve, as Seb is on another
project now.
Thanks,
Travis
On Fri, Aug 25, 2023 at 7:06 AM Venky Shankar wrote:
> On Fri, Aug 25, 2023 at 7:17 AM Patrick Donnelly
> wrote:
> >
> > On Wed, Aug 23, 2023 at 10:41 AM Yuri Weinstein
> wrot
On Fri, Aug 25, 2023 at 7:17 AM Patrick Donnelly wrote:
>
> On Wed, Aug 23, 2023 at 10:41 AM Yuri Weinstein wrote:
> >
> > Details of this release are summarized here:
> >
> > https://tracker.ceph.com/issues/62527#note-1
> > Release Notes - TBD
> >
> > Seeking approvals for:
> >
> > smoke - Venky
On a Pacific cluster I have the same error message:
---snip---
2023-08-25T11:56:47.222407+02:00 ses7-host1 conmon[1383161]: debug
(LUN.add_dev_to_lio) Adding image 'iscsi-pool/image3' to LIO backstore
rbd
2023-08-25T11:56:47.714375+02:00 ses7-host1 kernel: [12861743.121824]
rbd: rbd1: capaci
Hi,
that's quite interesting; I tried to reproduce it with 18.2.0, but it
worked for me. The cluster runs on openSUSE Leap 15.4. There are two
things that seem to differ in my attempt.
1. I had to run 'modprobe iscsi_target_mod' to get rid of this error message:
(b'{"message":"iscsi target \'in
Hi,
according to [1], error 125 means there was a race condition:
failed to sync bucket instance: (125) Operation canceled
A race condition exists between writes to the same RADOS object.
Can you rewrite just the affected object? Not sure about the other
error; maybe try rewriting that one as well.
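Something along these lines should do it (bucket/object names are
placeholders; please double-check the subcommands against your release):

$ radosgw-admin object rewrite --bucket=<bucket> --object=<object-key>
$ radosgw-admin bucket sync status --bucket=<bucket>   # re-check sync afterwards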
Hi,
I'm still not sure if we're on the same page.
Looking at
https://docs.ceph.com/en/latest/man/8/ceph-volume/#cmdoption-ceph-volume-lvm-prepare-block.db it seems that ceph-volume wants an LV or a partition, so apparently it doesn't just take a VG by itself? Also, if there were multiple VGs / de
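To illustrate what I mean by handing it an LV (the VG/LV names and sizes
below are made up):

$ lvcreate -L 50G -n db-1 ceph-db
$ ceph-volume lvm prepare --data /dev/sdb --block.db ceph-db/db-1

i.e. you pass a specific LV (or a partition), not the VG as a whole.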