On Tue, May 24, 2022 at 5:20 PM Sopena Ballesteros Manuel
wrote:
>
> Hi Ilya,
>
>
> thank you very much for your prompt response,
>
>
> Any rbd command variation is affected (mapping device included)
>
> We are using a physical machine (no container involved)
>
>
> Below is the output of the runni
On Tue, May 24, 2022 at 8:14 PM Sopena Ballesteros Manuel
wrote:
>
> yes dmesg shows the following:
>
> ...
>
> [23661.367449] rbd: rbd12: failed to lock header: -13
> [23661.367968] rbd: rbd2: no lock owners detected
> [23661.369306] rbd: rbd11: no lock owners detected
> [23661.370068] rbd: rbd11
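For context, -13 is EACCES. With krbd and exclusive-lock images this usually points at client caps that are too narrow for lock and blocklist handling; a minimal sketch of the usual check and fix, assuming that is the cause here (client.noir is taken from the later message, the pool name is purely hypothetical):

ceph auth get client.noir                    # inspect the current caps
ceph auth caps client.noir mon 'profile rbd' osd 'profile rbd pool=mypool'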
On Wed, May 25, 2022 at 9:21 AM Sopena Ballesteros Manuel
wrote:
>
> attached,
>
>
> nid001388:~ # ceph auth get client.noir
> 2022-05-25T09:20:00.731+0200 7f81f63f3700 -1 auth: unable to find a keyring
> on
> /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph
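The CLI here is failing to find a keyring on the node, not failing to find the user. A minimal sketch of pointing it at a keyring explicitly, with the path hypothetical; alternatively, run the command from a node that already has the admin keyring:

ceph -n client.admin -k /path/to/ceph.client.admin.keyring auth get client.noir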
On Tue, Jun 14, 2022 at 7:21 PM Yuri Weinstein wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/55974
> Release Notes - https://github.com/ceph/ceph/pull/46576
>
> Seeking approvals for:
>
> rados - Neha, Travis, Ernesto, Adam
> rgw - Casey
> fs - Venky,
On Wed, Jun 15, 2022 at 3:21 PM Frank Schilder wrote:
>
> Hi Eugen,
>
> in essence I would like the property "thick provisioned" to be sticky after
> creation and apply to any other operation that would be affected.
>
> To answer the use-case question: this is a disk image on a pool designed for
On Sun, Jun 19, 2022 at 6:13 PM Yuri Weinstein wrote:
>
> rados, rgw, rbd and fs suites ran on the latest sha1
> (https://shaman.ceph.com/builds/ceph/quincy-release/eb0eac1a195f1d8e9e3c472c7b1ca1e9add581c2/)
>
> pls see the summary:
> https://tracker.ceph.com/issues/55974#note-1
>
> seeking final
On Tue, Jun 21, 2022 at 8:52 PM Peter Lieven wrote:
>
> Hi,
>
>
> we noticed that some of our long running VMs (1 year without migration) seem
> to have a very slow memory leak. Taking a dump of the leaked memory revealed
> that it seemed to contain osd and pool information so we concluded that
On Wed, Jun 22, 2022 at 11:14 AM Peter Lieven wrote:
>
>
>
> Sent from my iPhone
>
> > On 22.06.2022 at 10:35, Ilya Dryomov wrote:
> >
> > On Tue, Jun 21, 2022 at 8:52 PM Peter Lieven wrote:
> >>
> >> Hi,
> >>
> >&
On Thu, Jun 23, 2022 at 11:32 AM Peter Lieven wrote:
>
> On 22.06.22 at 15:46, Josh Baergen wrote:
> > Hey Peter,
> >
> >> I found relatively large allocations in the qemu smaps and checked the
> >> contents. It contained several hundred repetitions of osd and pool names.
> >> We use the defaul
On Fri, Jul 1, 2022 at 8:32 AM Ansgar Jazdzewski
wrote:
>
> Hi folks,
>
> I did a little testing with the persistent write-back cache (*1). We
> run ceph quincy 17.2.1 and qemu 6.2.0.
>
> rbd.fio works with the cache, but as soon as we start we get something like
>
> error: internal error: process exite
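The error output is cut off above, so what follows is only a hedged sketch of the client-side settings the persistent write-back cache relies on, with all values illustrative and no claim that they address the QEMU failure itself; the image also needs the exclusive-lock feature (on by default for new images):

rbd config global set global rbd_plugins pwl_cache
rbd config global set global rbd_persistent_cache_mode ssd
rbd config global set global rbd_persistent_cache_path /mnt/nvme/rbd-pwl   # hypothetical path
rbd config global set global rbd_persistent_cache_size 10G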
On Fri, Jul 1, 2022 at 5:48 PM Konstantin Shalygin wrote:
>
> Hi,
>
> Since Jun 28 04:05:58 postfix/smtpd[567382]: NOQUEUE: reject: RCPT from
> unknown[158.69.70.147]: 450 4.7.25 Client host rejected: cannot find your
> hostname, [158.69.70.147]; from=
> helo=
>
> ipaddr was changed from 158.69
On Fri, Jul 1, 2022 at 10:59 PM Yuri Weinstein wrote:
>
> We've been scraping for octopus PRs for a while now.
>
> I see only two PRs being on final stages of testing:
>
> https://github.com/ceph/ceph/pull/44731 - Venky is reviewing
> https://github.com/ceph/ceph/pull/46912 - Ilya is reviewing
>
>
On Mon, May 4, 2020 at 7:32 AM Void Star Nill wrote:
>
> Hello,
>
> I wanted to know if rbd will flush any writes in the page cache when a
> volume is "unmap"ed on the host, of if we need to flush explicitly using
> "sync" before unmap?
In effect, yes. rbd doesn't do it itself, but the block lay
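Whatever the block layer does on the final close, an explicit flush before unmapping is a cheap belt-and-braces step; a minimal sketch, with the device and mountpoint hypothetical:

umount /mnt/volume              # if a filesystem is mounted on the device
sync                            # flush dirty pages
blockdev --flushbufs /dev/rbd0  # flush the block device's buffers
rbd unmap /dev/rbd0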
Using "profile rbd-read-only" with krbd wouldn't work unless you are
on kernel 5.5 or later. Prior to 5.5, "rbd map" code in the kernel
did some things that are incompatible with "profile rbd-read-only",
such as establishing a watch on the image header and more.
This was overlooked because it is
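A hedged sketch of the usual workaround on pre-5.5 kernels: give the client the regular rbd profile and make the mapping itself read-only instead (client, pool and image names hypothetical):

ceph auth get-or-create client.viewer mon 'profile rbd' osd 'profile rbd pool=mypool'
rbd map mypool/myimage --id viewer --read-only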
On Fri, May 29, 2020 at 5:43 PM 李亚锋 wrote:
>
> hi:
>
> I deployed a ceph cluster with rdma; its version is "15.0.0-7282-g05d685d
> (05d685dd37b34f2a015e77124c537f3f8e663152) octopus (dev)".
>
> the cluster status is ok as follows:
>
> [root@node83 lyf]# ceph -s
> cluster:
> id: cd389d63
On Wed, Jun 17, 2020 at 8:51 PM Christoph Ackermann
wrote:
>
> Hi all,
>
> we have a cluster that started on jewel and runs octopus nowadays. We would like
> to enable Upmap but unfortunately there are some old Jewel clients
> active. We cannot force Upmap by: ceph osd
> set-require-min-compat-client lu
On Wed, Jul 15, 2020 at 1:41 PM Budai Laszlo wrote:
>
> Hi Bobby,
>
> Thank you for your answer. You are saying "Whenever there is a change in the
> map, the monitor will inform the client." Can you please give me some ceph
> documentation link where I could read these details? For me it is logi
On Fri, Aug 7, 2020 at 10:25 PM Void Star Nill wrote:
>
> Hi,
>
> I want to understand the format for `ceph osd blacklist`
> commands. The documentation just says it's the address. But I am not sure
> if it can just be the host IP address or anything else. What does
> ":0/3710147553" represent
On Mon, Aug 10, 2020 at 6:14 PM Void Star Nill wrote:
>
> Thanks Ilya.
>
> I assume :0/0 indicates all clients on a given host?
No, a blacklist entry always affects a single client instance.
For clients (as opposed to daemons, e.g. OSDs), the port is 0.
0 is a valid nonce.
Thanks,
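For reference, the entry format is addr:port/nonce, optionally followed by an expiry in seconds; a short sketch with the IP address illustrative and the nonce taken from the question above:

ceph osd blacklist add 192.168.0.10:0/3710147553 3600   # expire after an hour
ceph osd blacklist ls
ceph osd blacklist rm 192.168.0.10:0/3710147553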
On Mon, Aug 31, 2020 at 6:21 PM Shain Miley wrote:
>
> Hi,
> A few weeks ago several of our rbd images became unresponsive after a few of
> our OSDs reached a near full state.
>
> Another member of the team rebooted the server that the rbd images are
> mounted on in an attempt to resolve the iss
On Thu, Sep 17, 2020 at 1:56 PM Marc Boisis wrote:
>
>
> Hi,
>
> I had to map an rbd from an ubuntu Trusty luminous client on an octopus
> cluster.
>
> client dmesg :
> feature set mismatch, my 4a042a42 < server's 14a042a42, missing
> 1
>
> I downgraded my osd tunables to bobtail b
On Wed, Aug 14, 2019 at 2:49 PM Paul Emmerich wrote:
>
> On Wed, Aug 14, 2019 at 2:38 PM Olivier AUDRY wrote:
> > let's test random write
> > rbd -p kube bench kube/bench --io-type write --io-size 8192 --io-threads
> > 256 --io-total 10G --io-pattern rand
> > elapsed: 125 ops: 1310720 ops/s
On Mon, Aug 26, 2019 at 8:25 PM wrote:
>
> What will actually happen if an old client comes by, potential data damage -
> or just broken connections from the client?
The latter (with "libceph: ... feature set mismatch ..." errors).
Thanks,
Ilya
On Mon, Aug 26, 2019 at 9:37 PM Frank R wrote:
>
> will 4.13 also work for cephfs?
Upmap works the same for krbd and kcephfs. All upstream kernels
starting with 4.13 (and also RHEL/CentOS kernels starting with 7.5)
support it. If you have a choice which kernel to run, the newer the
better.
Tha
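A short sketch of the usual sequence for requiring upmap-capable clients and switching the balancer over, assuming all connected clients already report luminous or newer:

ceph features                                    # check what the connected clients report
ceph osd set-require-min-compat-client luminous  # refuses if older clients are still connected
ceph balancer mode upmap
ceph balancer on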
On Thu, Aug 29, 2019 at 11:20 PM Marc Roos wrote:
>
>
> I have this error. I have found the rbd image with the
> block_name_prefix:1f114174b0dc51, how can I identify what snapshot this
> is? (Is it a snapshot?)
>
> 2019-08-29 16:16:49.255183 7f9b3f061700 -1 log_channel(cluster) log
> [ERR] : deep-sc
On Tue, Sep 3, 2019 at 6:29 PM Florian Haas wrote:
>
> Hi,
>
> replying to my own message here in a shameless attempt to re-up this. I
> really hope that the list archive can be resurrected in one way or
> another...
Adding David, who managed the transition.
Thanks,
Ilya
On Mon, Sep 2, 2019 at 5:39 PM Toby Darling wrote:
>
> Hi
>
> We have a couple of RHEL 7.6 (3.10.0-957.21.3.el7.x86_64) clients that
> have a number of uninterruptible threads and I'm wondering if we're
> looking at the issue fixed by
> https://www.spinics.net/lists/ceph-devel/msg45467.html (the f
Hello,
Yesterday I copied dgallowa in "Heavily-linked lists.ceph.com
pipermail archive now appears to lead to 404s" on ceph-users, but one
of the subscribers reached out to me saying that they do not see him on
the CC.
This looks like a feature of mailman that has bitten others before:
https://l
On Mon, Sep 16, 2019 at 2:20 PM Thomas Schneider <74cmo...@gmail.com> wrote:
>
> Hello,
>
> the current kernel with SLES 12SP3 is:
> ld3195:~ # uname -r
> 4.4.176-94.88-default
>
>
> Assuming that this kernel does not support upmap, do you recommend using
> balancer mode crush-compat then?
Hi Tho
On Mon, Sep 16, 2019 at 2:24 PM 潘东元 wrote:
>
> hi,
> my ceph cluster version is Luminous, running kernel version Linux 3.10
>[root@node-1 ~]# ceph features
> {
> "mon": {
> "group": {
> "features": "0x3ffddff8eeacfffb",
> "release": "luminous",
>
On Mon, Sep 16, 2019 at 4:40 PM Thomas Schneider <74cmo...@gmail.com> wrote:
>
> Hi,
>
> thanks for your valuable input.
>
> Question:
> Can I get more information about the 6 clients (those with features
> 0x40106b84a842a42), e.g. IP, that would allow me to identify them easily?
Yes, although it's not inte
On Mon, Sep 16, 2019 at 5:10 PM Thomas Schneider <74cmo...@gmail.com> wrote:
>
> Wonderbra.
>
> I found some relevant sessions on 2 of 3 monitor nodes.
> And I found some others:
> root@ld5505:~# ceph daemon mon.ld5505 sessions | grep 0x40106b84a842a42
> root@ld5505:~# ceph daemon mon.ld5505 sessio
On Tue, Sep 17, 2019 at 8:54 AM 潘东元 wrote:
>
> Thank you for your reply.
> So, I would like to verify this problem. I created a new VM as a
> client; it is kernel version:
> [root@localhost ~]# uname -a
> Linux localhost.localdomain 5.2.9-200.fc30.x86_64 #1 SMP Fri Aug 16
> 21:37:45 UTC 2019 x86_64 x
On Tue, Sep 17, 2019 at 2:54 PM Eugen Block wrote:
>
> Hi,
>
> > I have checked the installed ceph version on each client and can confirm
> > that it is:
> > ceph version 12.2 luminous
> >
> > This would lead to the conclusion that the output of ceph daemon mon.
> > sessions is pointing incorrectly t
On Tue, Oct 1, 2019 at 4:14 PM wrote:
>
> Thanks. Happy to hear that el8 packages will soon be available.
> F.
>
>
> By the way, it seems that mount.ceph is called by mount.
> I already tried that:
> mount -t ceph 123.456.789.000:6789:/ /data -o
> name=xxx_user,secretfile=/etc/ceph/client.xxx_us
On Mon, Oct 21, 2019 at 5:09 PM Ranjan Ghosh wrote:
>
> Hi all,
>
> it seems Ceph on Ubuntu Disco (19.04) with the most recent kernel
> 5.0.0-32 is unstable. It crashes sometimes after a few hours, sometimes
> even after a few minutes. I found this bug here on CoreOS:
>
> https://github.com/coreos
On Mon, Oct 21, 2019 at 6:12 PM Ranjan Ghosh wrote:
>
> Hi Ilya,
>
> thanks for your answer - really helpful! We were so desperate today due
> to this bug that we downgraded to -23. But it's very good to know that
> -31 doesn't contain this bug and we could safely update back to this release.
>
> I
On Thu, Nov 14, 2019 at 8:09 PM Gregory Farnum wrote:
>
> On Thu, Nov 14, 2019 at 9:21 AM Bryan Stillwell
> wrote:
> >
> > There are some bad links to the mailing list subscribe/unsubscribe/archives
> > on this page that should get updated:
> >
> > https://ceph.io/resources/
> >
> > The subscri
On Fri, Nov 15, 2019 at 11:39 AM Thomas Schneider <74cmo...@gmail.com> wrote:
>
> Hi,
>
> when I execute this command
> rbd ls -l
> to list all RBDs I get spamming errors:
>
> 2019-11-15 11:29:19.428 7fd852678700 0 SIGN: MSG 1 Sender did not set
> CEPH_MSG_FOOTER_SIGNED.
> 2019-11-15 11:29:19.428
On Mon, Nov 25, 2019 at 1:57 PM Robert Sander
wrote:
>
> Hi,
>
> On 25.11.19 at 13:36, Rodrigo Severo - Fábrica wrote:
>
> > I would like to know the expected differences between a FUSE and a kernel
> > mount.
> >
> > Why the 2 options? When should I use one and when should I use the other?
>
>
On Mon, Dec 2, 2019 at 10:27 AM Marc Roos wrote:
>
>
> I have been asking before[1]. Since Nautilus upgrade I am having these,
> with a total node failure as a result(?). Was not expecting this in my
> 'low load' setup. Maybe now someone can help resolve this? I am also
> waiting quite some time
On Mon, Dec 2, 2019 at 12:48 PM Marc Roos wrote:
>
>
> Hi Ilya,
>
> >
> >
> >ISTR there were some anti-spam measures put in place. Is your account
> >waiting for manual approval? If so, David should be able to help.
>
> Yes if I remember correctly I get waiting approval when I try to log in.
On Mon, Dec 2, 2019 at 1:23 PM Marc Roos wrote:
>
>
>
> I guess this is related? kworker 100%
>
>
> [Mon Dec 2 13:05:27 2019] SysRq : Show backtrace of all active CPUs
> [Mon Dec 2 13:05:27 2019] sending NMI to all CPUs:
> [Mon Dec 2 13:05:27 2019] NMI backtrace for cpu 0 skipped: idling at pc
On Tue, Dec 10, 2019 at 10:45 AM Abhishek Lekshmanan wrote:
>
> This is the fifth release of the Ceph Nautilus release series. Among the many
> notable changes, this release fixes a critical BlueStore bug that was
> introduced
> in 14.2.3. All Nautilus users are advised to upgrade to this release
On Tue, Jan 14, 2020 at 10:31 AM Marc Roos wrote:
>
>
> I think this is new since I upgraded to 14.2.6. kworker/7:3 100%
>
> [@~]# echo l > /proc/sysrq-trigger
>
> [Tue Jan 14 10:05:08 2020] CPU: 7 PID: 2909400 Comm: kworker/7:0 Not
> tainted 3.10.0-1062.4.3.el7.x86_64 #1
>
> [Tue Jan 14 10:05:08
On Thu, Jan 23, 2020 at 2:36 PM Ilya Dryomov wrote:
>
> On Wed, Jan 22, 2020 at 6:18 PM Hayashida, Mami
> wrote:
> >
> > Thanks, Ilya.
> >
> > I just tried modifying the osd cap for client.testuser by getting rid of
> > "tag cephfs data=cephfs_test
On Wed, Jan 22, 2020 at 8:58 AM Yoann Moulin wrote:
>
> Hello,
>
> On a fresh install (Nautilus 14.2.6) deployed with the ceph-ansible playbook
> stable-4.0, I have an issue with cephfs. I can create a folder, I can
> create empty files, but cannot write data, as if I'm not allowed to write to
> the
On Thu, Jan 23, 2020 at 3:31 PM Hayashida, Mami wrote:
>
> Thanks, Ilya.
>
> First, I was not sure whether to post my question on @ceph.io or
> @lists.ceph.com (I subscribe to both) -- should I use @ceph.io in the future?
Yes. I got the following when I replied to your previous email:
As you
On Fri, Jan 24, 2020 at 2:10 PM Yoann Moulin wrote:
>
> On 23.01.20 at 15:51, Ilya Dryomov wrote:
> > On Wed, Jan 22, 2020 at 8:58 AM Yoann Moulin wrote:
> >>
> >> Hello,
> >>
> >> On a fresh install (Nautilus 14.2.6) deploy with ceph-ansible
On Sat, Jan 25, 2020 at 8:42 AM Ilya Dryomov wrote:
>
> On Fri, Jan 24, 2020 at 2:10 PM Yoann Moulin wrote:
> >
> > > On 23.01.20 at 15:51, Ilya Dryomov wrote:
> > > On Wed, Jan 22, 2020 at 8:58 AM Yoann Moulin wrote:
> > >>
> > >> Hello
On Fri, Jan 24, 2020 at 1:43 PM Frank Schilder wrote:
>
> Dear Ilya,
>
> I had exactly the same problem with authentication of cephfs clients on a
> mimic-13.2.2 cluster. The key created with "ceph fs authorize ..." did not
> grant access to the data pool. I ended up adding "rw" access to this p
On Fri, Jan 31, 2020 at 11:06 AM Dan van der Ster wrote:
>
> Hi all,
>
> We are quite regularly (a couple times per week) seeing:
>
> HEALTH_WARN 1 clients failing to respond to capability release; 1 MDSs
> report slow requests
> MDS_CLIENT_LATE_RELEASE 1 clients failing to respond to capability r
On Fri, Jan 31, 2020 at 4:57 PM Dan van der Ster wrote:
>
> Hi Ilya,
>
> On Fri, Jan 31, 2020 at 11:33 AM Ilya Dryomov wrote:
> >
> > On Fri, Jan 31, 2020 at 11:06 AM Dan van der Ster
> > wrote:
> > >
> > > Hi all,
> > >
>
On Mon, Feb 3, 2020 at 10:38 AM Dan van der Ster wrote:
>
> On Fri, Jan 31, 2020 at 6:32 PM Ilya Dryomov wrote:
> >
> > On Fri, Jan 31, 2020 at 4:57 PM Dan van der Ster
> > wrote:
> > >
> > > Hi Ilya,
> > >
> > > On Fri, Jan 31, 2020
On Fri, Feb 14, 2020 at 3:19 PM Marc Roos wrote:
>
>
> I have a default centos7 setup with nautilus. I have been asked to install
> 5.5 to check a 'bug'. Where should I get this from? I read that the
> elrepo kernel is not compiled like rhel.
Hi Marc,
I'm not sure what you mean by "not compiled li
On Fri, Feb 14, 2020 at 12:20 PM Stolte, Felix wrote:
>
> Hi guys,
>
> I am exporting cephfs with samba using the vfs acl_xattr which stores ntfs
> acls in the security extended attributes. This works fine using cephfs kernel
> mount with kernel version 4.15.
>
> Using kernel 5.3 I cannot acce
On Sun, Mar 8, 2020 at 5:13 PM M Ranga Swami Reddy wrote:
>
> I am using Luminous 12.2.11 with prometheus.
>
> On Sun, Mar 8, 2020 at 12:28 PM XuYun wrote:
>
> > You can enable prometheus module of mgr if you are running Nautilus.
> >
> > > On March 8, 2020 at 2:15 AM, M Ranga Swami Reddy wrote:
> >
On Tue, Apr 7, 2020 at 6:49 PM Void Star Nill wrote:
>
> Hello All,
>
> Is there a way to specify that a lock (shared or exclusive) on an rbd
> volume be released if the client machine becomes unreachable or
> unresponsive?
>
> In one of our clusters, we use rbd locks on volumes to make sure provi
A note of caution, though. "rbd status" just lists watches on the
image header object and a watch is not a reliable indicator of whether
the image is mapped somewhere or not.
It is true that all read-write mappings establish a watch, but it can
come and go due to network partitions, OSD crashes o
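A minimal sketch of the two commands usually consulted, with the pool and image names hypothetical; as noted above, neither output is a definitive "mapped here" indicator:

rbd status mypool/myimage    # watchers on the image header object
rbd lock ls mypool/myimage   # advisory locks, if the workflow uses them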
orchestration happens in a distributed manner across all our
>> compute nodes - so it is not easy to determine when we should kick out the
>> dead connections and claim the lock. We need to intervene manually and
>> resolve such issues as of now. So I am looking for a way to do
here a more deterministic way to know where the volumes are
> mapped to?
>
> Thanks,
> Shridhar
>
>
> On Wed, 8 Apr 2020 at 03:06, Ilya Dryomov wrote:
>>
>> A note of caution, though. "rbd status" just lists watches on the
>> image header object an
On Sat, Apr 18, 2020 at 6:53 AM Void Star Nill wrote:
>
> Hello,
>
> How frequently do RBD device names get reused? For instance, when I map a
> volume on a client and it gets mapped to /dev/rbd0 and when it is unmapped,
> does a subsequent map reuse this name right away?
Yes.
>
> I ask this que
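Because /dev/rbdN is reused as soon as it is free, scripts are usually better off addressing the stable udev symlinks; a small sketch, pool and image names hypothetical:

rbd map mypool/myimage
readlink -f /dev/rbd/mypool/myimage   # resolves to the current /dev/rbdN
rbd unmap /dev/rbd/mypool/myimage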
On Sat, Nov 2, 2024 at 4:21 PM Yuri Weinstein wrote:
>
> Ilya,
>
> rbd rerunning
>
> https://github.com/ceph/ceph/pull/60586/ merged and cherry-picked into
> quincy-release
rbd and krbd approved based on additional reruns:
https://pulpito.ceph.com/dis-2024-11-04_17:34:41-rbd-quincy-release-distr
On Fri, Nov 1, 2024 at 4:22 PM Yuri Weinstein wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/68643#note-2
> Release Notes - TBD
>
> Seeking approvals/reviews for:
>
> rados - Laura, Radek, Travis, Ernesto, Adam King
>
> rgw - Casey
> fs - Venky
> orch -
On Fri, Oct 25, 2024 at 11:03 AM Friedrich Weber wrote:
>
> Hi,
>
> Some of our Proxmox VE users have noticed that a large fstrim inside a
> QEMU/KVM guest does not free up as much space as expected on the backing
> RBD image -- if the image is mapped on the host via KRBD and passed to
> QEMU as a
On Fri, Dec 27, 2024 at 5:31 PM Yuri Weinstein wrote:
>
> Hello and Happy Holidays all!
>
> We have merged several PRs (mostly in rgw and rbd areas) and I built a
> new build 2 (rebase)
>
> https://tracker.ceph.com/issues/69234#note-1
>
> Please provide trackers for failures so we avoid duplicates
On Mon, Dec 16, 2024 at 6:27 PM Yuri Weinstein wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/69234#note-1
>
> Release Notes - TBD
> LRC upgrade - TBD
> Gibba upgrade -TBD
>
> Please provide trackers for failures so we avoid duplicates.
> Seeking approval
On Thu, Dec 12, 2024 at 5:37 PM Friedrich Weber wrote:
>
> Hi Ilya,
>
> some of our Proxmox VE users also report they need to enable rxbounce to
> avoid their Windows VMs triggering these errors, see e.g. [1]. With
> rxbounce, everything seems to work smoothly, so thanks for adding this
> option.
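For reference, rxbounce is a krbd map option, so it is passed at map time on a sufficiently recent kernel; a minimal sketch, pool and image names hypothetical:

rbd map -o rxbounce mypool/myimage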
On Mon, Mar 24, 2025 at 10:40 PM Yuri Weinstein wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/70563#note-1
> Release Notes - TBD
> LRC upgrade - TBD
>
> Seeking approvals/reviews for:
>
> smoke - Laura approved?
>
> rados - Radek, Laura approved? Travi
On Fri, May 2, 2025 at 2:03 AM Yuri Weinstein wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/71166
> Release Notes - https://github.com/ceph/ceph/pull/63090
> LRC upgrade - N/A
>
> Seeking approvals/reviews for:
>
> smoke - same as in 18.2.5
> rados - L
On Thu, Apr 24, 2025 at 11:29 PM Dominique Ramaekers
wrote:
>
> Hi,
>
> Weird thing happened with my ceph. I've got nice nightly scripts (bash
> scripts) for making backups, snapshots, cleaning up etc... Starting from my
> last upgrade to ceph v19.2.2 my scripts hang during execution. The rbd ma
On Fri, Apr 25, 2025 at 1:02 PM Dominique Ramaekers
wrote:
>
> Hi Ilya,
>
> Thanks for the tip! Although it seems less of a 'good practice' and I'm worried
> about stability, because ceph gives this output:
> rbd: mapping succeeded but /dev/rbd0 is not accessible, is host /dev mounted?
> In some cases u
On Thu, Jun 19, 2025 at 9:54 AM Julien Laurenceau
wrote:
>
> Thanks Ilya,
>
> Yeah maybe it should be made more explicit in the doc, but from the user
> point of view it's odd to have this limitation.
>
> Regarding the kubernetes-csi-addons that enable to do VolumeReplication ,
> promote, demote
On Tue, Jun 3, 2025 at 7:08 AM Sake Ceph wrote:
>
> This is also the case with us for Cephfs clients. We use the kernel mount,
> but you still need to install ceph-common, which isn't possible with the
> latest version of Reef, and thus those are stuck on 18.2.4.
> Or am I doing something wrong?
On Wed, Jun 18, 2025 at 7:29 PM Julien Laurenceau
wrote:
>
> Hi,
>
> I have 2 running ceph squid clusters (19.2.2).
> On each cluster there is an rbd pool named k8s-1 that I want to mirror
> using the snapshot based mode.
>
> if I configure both clusters in two way mirroring using the mode=image
On Thu, Jul 3, 2025 at 4:56 PM Yuri Weinstein wrote:
>
> Corrected the subject line
>
> On Thu, Jul 3, 2025 at 7:36 AM Yuri Weinstein wrote:
> >
> > Details of this release are summarized here:
> >
> > https://tracker.ceph.com/issues/71912#note-1
> >
> > Release Notes - TBD
> > LRC upgrade - TBD
when the volume was not usable, but
> nothing evident.
Are you saying that "rbd lock ls" on the image immediately after
powering on the hypervisor produces no output?
Thanks,
Ilya
>
> Cheers,
> Gary
>
>
>
> On 2025-06-26 9:33 a.m., Ilya Dryomo
On Wed, Jul 2, 2025 at 1:36 PM Gary Molenkamp wrote:
>
> I confirmed and can consistently replicate the failure event that forces
> the object-map rebuild.
>
> If the VM is terminated cleanly, such as a hypervisor reboot, then the
> VMs and their rbd volumes are all well.
> If the hypervisor goes
On Thu, Jul 3, 2025 at 6:37 PM Yuri Weinstein wrote:
>
> Hi Ilya
>
> Rerun scheduled
rbd approved.
Thanks,
Ilya
On Tue, Jun 24, 2025 at 11:19 PM Gary Molenkamp wrote:
>
> We use ceph rbd as a volume service for both an Openstack deployment and
> a series of Proxmox servers. This ceph deployment started as a Hammer
> release and has been upgraded over the years to where it is now running
> Quincy. It has be
On Tue, Jul 22, 2025 at 4:54 PM Dan O'Brien wrote:
>
> Ilya Dryomov wrote:
> > Have you tried loading the module wih "modprobe ceph"?
> >
> > Thanks,
> >
> > Ilya
>
> I had not! That did the trick, at least partly. The mod
On Mon, Jul 21, 2025 at 10:03 PM Dan O'Brien wrote:
>
> Malte Stroem wrote:
> > there is no need for ceph-common.
> >
> > You can mount the CephFS with the mount command because the Ceph kernel
> > client is part of the kernel for a long time now.
> >
> > mount -t cephfs...
> >
> > just works.
> >
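A minimal sketch of a kernel mount with no ceph-common installed, using the old-style syntax and passing the key inline because the mount.ceph helper (which normally reads secretfile) is absent; monitor address, mountpoint, user and key are placeholders, and the kernel filesystem type is "ceph":

mount -t ceph 192.168.0.1:6789:/ /mnt/cephfs -o name=myuser,secret=<base64-key>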
On Tue, Jul 29, 2025 at 10:25 PM Yuri Weinstein wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/72316#note-1
>
> Release Notes - TBD
> LRC upgrade - TBD
>
> Dev Leads, please review the list of suites for completeness, as this is a
> new release.
>
> Se