[ceph-users] Re: rbd command hangs

2022-05-24 Thread Ilya Dryomov
On Tue, May 24, 2022 at 5:20 PM Sopena Ballesteros Manuel wrote: > > Hi Ilya, > > > thank you very much for your prompt response, > > > Any rbd command variation is affected (mapping device included) > > We are using a physical machine (no container involved) > > > Below is the output of the runni

[ceph-users] Re: rbd command hangs

2022-05-24 Thread Ilya Dryomov
On Tue, May 24, 2022 at 8:14 PM Sopena Ballesteros Manuel wrote: > > yes dmesg shows the following: > > ... > > [23661.367449] rbd: rbd12: failed to lock header: -13 > [23661.367968] rbd: rbd2: no lock owners detected > [23661.369306] rbd: rbd11: no lock owners detected > [23661.370068] rbd: rbd11

[ceph-users] Re: rbd command hangs

2022-05-25 Thread Ilya Dryomov
On Wed, May 25, 2022 at 9:21 AM Sopena Ballesteros Manuel wrote: > > attached, > > > nid001388:~ # ceph auth get client.noir > 2022-05-25T09:20:00.731+0200 7f81f63f3700 -1 auth: unable to find a keyring > on > /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph
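For context, -13 is EACCES; a common culprit for "failed to lock header: -13" is a client keyring whose caps don't allow blocklisting a dead exclusive-lock owner, which the "profile rbd" caps provide. A minimal sketch, treating client.noir and the pool name as placeholders:

    ceph auth get client.noir                 # inspect the current caps
    ceph auth caps client.noir \
        mon 'profile rbd' \
        osd 'profile rbd pool=<pool>'         # profile rbd includes the blocklist permission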

[ceph-users] Re: quincy v17.2.1 QE Validation status

2022-06-15 Thread Ilya Dryomov
On Tue, Jun 14, 2022 at 7:21 PM Yuri Weinstein wrote: > > Details of this release are summarized here: > > https://tracker.ceph.com/issues/55974 > Release Notes - https://github.com/ceph/ceph/pull/46576 > > Seeking approvals for: > > rados - Neha, Travis, Ernesto, Adam > rgw - Casey > fs - Venky,

[ceph-users] Re: rbd resize thick provisioned image

2022-06-15 Thread Ilya Dryomov
On Wed, Jun 15, 2022 at 3:21 PM Frank Schilder wrote: > > Hi Eugen, > > in essence I would like the property "thick provisioned" to be sticky after > creation and apply to any other operation that would be affected. > > To answer the use-case question: this is a disk image on a pool designed for
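For reference, rbd only preallocates at creation time; a later resize leaves the added extent sparse, which is the behaviour being questioned here. A rough sketch, with pool/image names and sizes as placeholders:

    rbd create --size 10T --thick-provision rbd/thickimg   # fully allocated at create time
    rbd resize --size 12T rbd/thickimg                     # the added 2T is not preallocated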

[ceph-users] Re: quincy v17.2.1 QE Validation status

2022-06-20 Thread Ilya Dryomov
On Sun, Jun 19, 2022 at 6:13 PM Yuri Weinstein wrote: > > rados, rgw, rbd and fs suits ran on the latest sha1 > (https://shaman.ceph.com/builds/ceph/quincy-release/eb0eac1a195f1d8e9e3c472c7b1ca1e9add581c2/) > > pls see the summary: > https://tracker.ceph.com/issues/55974#note-1 > > seeking final

[ceph-users] Re: librbd leaks memory on crushmap updates

2022-06-22 Thread Ilya Dryomov
On Tue, Jun 21, 2022 at 8:52 PM Peter Lieven wrote: > > Hi, > > > we noticed that some of our long running VMs (1 year without migration) seem > to have a very slow memory leak. Taking a dump of the leaked memory revealed > that it seemed to contain osd and pool information so we concluded that

[ceph-users] Re: librbd leaks memory on crushmap updates

2022-06-22 Thread Ilya Dryomov
On Wed, Jun 22, 2022 at 11:14 AM Peter Lieven wrote: > > > > Sent from my iPhone > > > On 22.06.2022 at 10:35, Ilya Dryomov wrote : > > > > On Tue, Jun 21, 2022 at 8:52 PM Peter Lieven wrote: > >> > >> Hi, > >> > >&


[ceph-users] Re: librbd leaks memory on crushmap updates

2022-06-23 Thread Ilya Dryomov
On Thu, Jun 23, 2022 at 11:32 AM Peter Lieven wrote: > > Am 22.06.22 um 15:46 schrieb Josh Baergen: > > Hey Peter, > > > >> I found relatively large allocations in the qemu smaps and checked the > >> contents. It contained several hundred repetitions of osd and pool names. > >> We use the defaul

[ceph-users] Re: persistent write-back cache and quemu

2022-07-01 Thread Ilya Dryomov
On Fri, Jul 1, 2022 at 8:32 AM Ansgar Jazdzewski wrote: > > Hi folks, > > I did a little testing with the persistent write-back cache (*1) we > run ceph quincy 17.2.1 qemu 6.2.0 > > rbd.fio works with the cache, but as soon as we start we get something like > > error: internal error: process exite
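For reference, a sketch of the client-side settings for the persistent write-back (pwl) cache as documented for recent releases; the path, size and client scope here are illustrative, not taken from the thread:

    ceph config set client rbd_plugins pwl_cache
    ceph config set client rbd_persistent_cache_mode ssd          # or "rwl" on pmem
    ceph config set client rbd_persistent_cache_path /var/lib/rbd-pwl
    ceph config set client rbd_persistent_cache_size 1G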

[ceph-users] Re: Broken PTR record for new Ceph Redmine IP

2022-07-01 Thread Ilya Dryomov
On Fri, Jul 1, 2022 at 5:48 PM Konstantin Shalygin wrote: > > Hi, > > Since Jun 28 04:05:58 postfix/smtpd[567382]: NOQUEUE: reject: RCPT from > unknown[158.69.70.147]: 450 4.7.25 Client host rejected: cannot find your > hostname, [158.69.70.147]; from= > helo= > > ipaddr was changed from 158.69

[ceph-users] Re: Next (last) octopus point release

2022-07-04 Thread Ilya Dryomov
On Fri, Jul 1, 2022 at 10:59 PM Yuri Weinstein wrote: > > We've been scraping for octopus PRs for awhile now. > > I see only two PRs being on final stages of testing: > > https://github.com/ceph/ceph/pull/44731 - Venky is reviewing > https://github.com/ceph/ceph/pull/46912 - Ilya is reviewing > >

[ceph-users] Re: page cache flush before unmap?

2020-05-04 Thread Ilya Dryomov
On Mon, May 4, 2020 at 7:32 AM Void Star Nill wrote: > > Hello, > > I wanted to know if rbd will flush any writes in the page cache when a > volume is "unmap"ed on the host, of if we need to flush explicitly using > "sync" before unmap? In effect, yes. rbd doesn't do it itself, but the block lay
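In other words, the block layer takes care of dirty pages when the device is closed for the last time, so a plain unmap is safe; an explicit flush is harmless if you want it visible in a script. A small sketch, assuming /dev/rbd0 is the mapped device:

    sync                              # optional: flush the page cache explicitly
    blockdev --flushbufs /dev/rbd0    # optional: flush the block device's buffers
    rbd unmap /dev/rbd0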

[ceph-users] Re: mount issues with rbd running xfs - Structure needs cleaning

2020-05-04 Thread Ilya Dryomov
Using "profile rbd-read-only" with krbd wouldn't work unless you are on kernel 5.5 or later. Prior to 5.5, "rbd map" code in the kernel did some things that are incompatible with "profile rbd-read-only", such as establishing a watch on the image header and more. This was overlooked because it is

[ceph-users] Re: ceph with rdma can not mount with kernel

2020-05-29 Thread Ilya Dryomov
On Fri, May 29, 2020 at 5:43 PM 李亚锋 wrote: > > hi: > > I deployed ceph cluster with rdma, it's version is "15.0.0-7282-g05d685d > (05d685dd37b34f2a015e77124c537f3f8e663152) octopus (dev)". > > the cluster status is ok as follows: > > [root@node83 lyf]# ceph -s > cluster: > id: cd389d63

[ceph-users] Re: Jewel clients on recent cluster

2020-06-18 Thread Ilya Dryomov
On Wed, Jun 17, 2020 at 8:51 PM Christoph Ackermann wrote: > > Hi all, > > we have a cluster starting from jewel to octopus nowadays. We would like > to enable Upmap but unfortunately there are some old Jewel clients > active. We cannot force Upmap by: ceph osd > set-require-min-compat-client lu
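A quick way to see what is still connecting with old features before flipping the switch (a generic sketch, not specific to this cluster):

    ceph features                                      # groups mons/osds/clients by release and feature bits
    ceph osd set-require-min-compat-client luminous    # refused while pre-luminous clients are connected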

[ceph-users] Re: client - monitor communication.

2020-07-16 Thread Ilya Dryomov
On Wed, Jul 15, 2020 at 1:41 PM Budai Laszlo wrote: > > Hi Bobby, > > Thank you for your answer. You are saying "Whenever there is a change in the > map, the monitor will inform the client." Can you please give me some ceph > documentation link where I could read these details? For me it is logi

[ceph-users] Re: EntityAddress format in ceph ssd blacklist commands

2020-08-10 Thread Ilya Dryomov
On Fri, Aug 7, 2020 at 10:25 PM Void Star Nill wrote: > > Hi, > > I want to understand the format for `ceph osd blacklist` > commands. The documentation just says it's the address. But I am not sure > if it can just be the host IP address or anything else. What does *:0/* > *3710147553* represent

[ceph-users] Re: EntityAddress format in ceph ssd blacklist commands

2020-08-10 Thread Ilya Dryomov
On Mon, Aug 10, 2020 at 6:14 PM Void Star Nill wrote: > > Thanks Ilya. > > I assume :0/0 indicates all clients on a given host? No, a blacklist entry always affects a single client instance. For clients (as opposed to daemons, e.g. OSDs), the port is 0. 0 is a valid nonce. Thanks,
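So the entry is <ip>:<port>/<nonce>, with port 0 for client instances. A short sketch, the IP being a placeholder and the nonce taken from the example above:

    ceph osd blacklist add 10.0.0.5:0/3710147553    # blacklists that one client instance
    ceph osd blacklist ls                           # current entries and their expiry
    ceph osd blacklist rm 10.0.0.5:0/3710147553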

[ceph-users] Re: Xfs kernel panic during rbd mount

2020-08-31 Thread Ilya Dryomov
On Mon, Aug 31, 2020 at 6:21 PM Shain Miley wrote: > > Hi, > A few weeks ago several of our rdb images became unresponsive after a few of > our OSDs reached a near full state. > > Another member of the team rebooted the server that the rbd images are > mounted on in an attempt to resolve the iss

[ceph-users] Re: rbd map on octopus from luminous client

2020-09-17 Thread Ilya Dryomov
On Thu, Sep 17, 2020 at 1:56 PM Marc Boisis wrote: > > > Hi, > > I had to map a rbd from an ubuntu Trusty luminous client on an octopus > cluster. > > client dmesg : > feature set mismatch, my 4a042a42 < server's 14a042a42, missing > 1 > > I downgrade my osd tunable to bobtail b

[ceph-users] Re: Mapped rbd is very slow

2019-08-14 Thread Ilya Dryomov
On Wed, Aug 14, 2019 at 2:49 PM Paul Emmerich wrote: > > On Wed, Aug 14, 2019 at 2:38 PM Olivier AUDRY wrote: > > let's test random write > > rbd -p kube bench kube/bench --io-type write --io-size 8192 --io-threads > > 256 --io-total 10G --io-pattern rand > > elapsed: 125 ops: 1310720 ops/s

[ceph-users] Re: krdb upmap compatibility

2019-08-26 Thread Ilya Dryomov
On Mon, Aug 26, 2019 at 8:25 PM wrote: > > What will actually happen if an old client comes by, potential data damage - > or just broken connections from the client? The latter (with "libceph: ... feature set mismatch ..." errors). Thanks, Ilya _

[ceph-users] Re: krdb upmap compatibility

2019-08-26 Thread Ilya Dryomov
On Mon, Aug 26, 2019 at 9:37 PM Frank R wrote: > > will 4.13 also work for cephfs? Upmap works the same for krbd and kcephfs. All upstream kernels starting with 4.13 (and also RHEL/CentOS kernels starting with 7.5) support it. If you have a choice which kernel to run, the newer the better. Tha
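For reference, the usual enablement sequence once all clients are new enough (a generic sketch, not specific to this thread):

    ceph osd set-require-min-compat-client luminous
    ceph balancer mode upmap
    ceph balancer on
    ceph balancer status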

[ceph-users] Re: Identify rbd snapshot

2019-08-30 Thread Ilya Dryomov
On Thu, Aug 29, 2019 at 11:20 PM Marc Roos wrote: > > > I have this error. I have found the rbd image with the > block_name_prefix:1f114174b0dc51, how can identify what snapshot this > is? (Is it a snapshot?) > > 2019-08-29 16:16:49.255183 7f9b3f061700 -1 log_channel(cluster) log > [ERR] : deep-sc
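A rough way to tie a block_name_prefix and a snapid back to an image and snapshot (pool and image names are placeholders):

    prefix=1f114174b0dc51
    for img in $(rbd ls rbd); do
        rbd info rbd/"$img" | grep -q "$prefix" && echo "$img"
    done
    rbd snap ls rbd/<image>    # the SNAPID column matches the snapid embedded in the object name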

[ceph-users] Re: Heavily-linked lists.ceph.com pipermail archive now appears to lead to 404s

2019-09-03 Thread Ilya Dryomov
On Tue, Sep 3, 2019 at 6:29 PM Florian Haas wrote: > > Hi, > > replying to my own message here in a shameless attempt to re-up this. I > really hope that the list archive can be resurrected in one way or > another... Adding David, who managed the transition. Thanks, Ilya ___

[ceph-users] Re: TASK_UNINTERRUPTIBLE kernel client threads

2019-09-03 Thread Ilya Dryomov
On Mon, Sep 2, 2019 at 5:39 PM Toby Darling wrote: > > Hi > > We have a couple of RHEL 7.6 (3.10.0-957.21.3.el7.x86_64) clients that > have a number of uninterruptible threads and I'm wondering if we're > looking at the issue fixed by > https://www.spinics.net/lists/ceph-devel/msg45467.html (the f

[ceph-users] Proposal to disable "Avoid Duplicates" on all ceph.io lists

2019-09-04 Thread Ilya Dryomov
Hello, Yesterday I copied dgallowa in "Heavily-linked lists.ceph.com pipermail archive now appears to lead to 404s" on ceph-users, but one of the subscribers reached out to me saying that they do not see him on the CC. This looks like a feature of mailman that has bitten others before: https://l

[ceph-users] Re: upmap supported in SLES 12SPx

2019-09-16 Thread Ilya Dryomov
On Mon, Sep 16, 2019 at 2:20 PM Thomas Schneider <74cmo...@gmail.com> wrote: > > Hello, > > the current kernel with SLES 12SP3 is: > ld3195:~ # uname -r > 4.4.176-94.88-default > > > Assuming that this kernel is not supporting upmap, do you recommend to > use balance mode crush-compat then? Hi Tho

[ceph-users] Re: KRBD use Luminous upmap feature.Which version of the kernel should i ues?

2019-09-16 Thread Ilya Dryomov
On Mon, Sep 16, 2019 at 2:24 PM 潘东元 wrote: > > hi, >my ceph cluster version is Luminous run the kernel version Linux 3.10 >[root@node-1 ~]# ceph features > { > "mon": { > "group": { > "features": "0x3ffddff8eeacfffb", > "release": "luminous", >

[ceph-users] Re: upmap supported in SLES 12SPx

2019-09-16 Thread Ilya Dryomov
On Mon, Sep 16, 2019 at 4:40 PM Thomas Schneider <74cmo...@gmail.com> wrote: > > Hi, > > thanks for your valuable input. > > Question: > Can I get more information of the 6 clients (those with features > 0x40106b84a842a42), e.g. IP, that allows me to identify it easily? Yes, although it's not inte

[ceph-users] Re: upmap supported in SLES 12SPx

2019-09-16 Thread Ilya Dryomov
On Mon, Sep 16, 2019 at 5:10 PM Thomas Schneider <74cmo...@gmail.com> wrote: > > Wonderbra. > > I found some relevant sessions on 2 of 3 monitor nodes. > And I found some others: > root@ld5505:~# ceph daemon mon.ld5505 sessions | grep 0x40106b84a842a42 > root@ld5505:~# ceph daemon mon.ld5505 sessio

[ceph-users] Re: KRBD use Luminous upmap feature.Which version of the kernel should i ues?

2019-09-17 Thread Ilya Dryomov
On Tue, Sep 17, 2019 at 8:54 AM 潘东元 wrote: > > Thank you for your reply. > so,i would like to verify this problem. i create a new VM as a > client,it is kernel version: > [root@localhost ~]# uname -a > Linux localhost.localdomain 5.2.9-200.fc30.x86_64 #1 SMP Fri Aug 16 > 21:37:45 UTC 2019 x86_64 x

[ceph-users] Re: upmap supported in SLES 12SPx

2019-09-17 Thread Ilya Dryomov
On Tue, Sep 17, 2019 at 2:54 PM Eugen Block wrote: > > Hi, > > > I have checked the installed ceph version on each client and can confirm > > that it is: > > ceph version 12.2 luminous > > > > This would drive the conclusion that the ouput of ceph daemon mon. > > sessions is pointing incorrectly t

[ceph-users] Re: Ceph and centos 8

2019-10-01 Thread Ilya Dryomov
On Tue, Oct 1, 2019 at 4:14 PM wrote: > > Thanks. Happy to ear that el8 packages will soon be available. > F. > > > By the way, it seems that mount.ceph is called by mount. > I already tryed that : > mount -t ceph 123.456.789.000:6789:/ /data -o > name=xxx_user,secretfile=/etc/ceph/client.xxx_us

[ceph-users] Re: Ubuntu Disco with most recent Kernel 5.0.0-32 seems to be instable

2019-10-21 Thread Ilya Dryomov
On Mon, Oct 21, 2019 at 5:09 PM Ranjan Ghosh wrote: > > Hi all, > > it seems Ceph on Ubuntu Disco (19.04) with the most recent kernel > 5.0.0-32 is instable. It crashes sometimes after a few hours, sometimes > even after a few minutes. I found this bug here on CoreOS: > > https://github.com/coreos

[ceph-users] Re: Ubuntu Disco with most recent Kernel 5.0.0-32 seems to be instable

2019-10-21 Thread Ilya Dryomov
On Mon, Oct 21, 2019 at 6:12 PM Ranjan Ghosh wrote: > > Hi Ilya, > > thanks for your answer - really helpful! We were so desparate today due > to this bug that we downgraded to -23. But it's very good to know that > -31 doesnt contain this bug and we could safely update back to this release. > > I

[ceph-users] Re: Bad links on ceph.io for mailing lists

2019-11-14 Thread Ilya Dryomov
On Thu, Nov 14, 2019 at 8:09 PM Gregory Farnum wrote: > > On Thu, Nov 14, 2019 at 9:21 AM Bryan Stillwell > wrote: > > > > There are some bad links to the mailing list subscribe/unsubscribe/archives > > on this page that should get updated: > > > > https://ceph.io/resources/ > > > > The subscri

[ceph-users] Re: Cannot list RBDs in any pool / cannot mount any RBD

2019-11-15 Thread Ilya Dryomov
On Fri, Nov 15, 2019 at 11:39 AM Thomas Schneider <74cmo...@gmail.com> wrote: > > Hi, > > when I execute this command > rbd ls -l > to list all RBDs I get spamming errors: > > 2019-11-15 11:29:19.428 7fd852678700 0 SIGN: MSG 1 Sender did not set > CEPH_MSG_FOOTER_SIGNED. > 2019-11-15 11:29:19.428

[ceph-users] Re: FUSE X kernel mounts

2019-11-25 Thread Ilya Dryomov
On Mon, Nov 25, 2019 at 1:57 PM Robert Sander wrote: > > Hi, > > Am 25.11.19 um 13:36 schrieb Rodrigo Severo - Fábrica: > > > I would like to know the expected differences between a FUSE and a kernel > > mount. > > > > Why the 2 options? When should I use one and when should I use the other? > >
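A minimal sketch of the two mounts side by side, with the monitor address and client name as placeholders:

    # kernel client: in-kernel, generally faster, feature set tied to the running kernel
    mount -t ceph 10.0.0.1:6789:/ /mnt/cephfs -o name=foo,secretfile=/etc/ceph/foo.secret
    # FUSE client: userspace, follows the Ceph release, can be upgraded independently of the kernel
    ceph-fuse --id foo /mnt/cephfs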

[ceph-users] Re: ceph node crashed with these errors "kernel: ceph: build_snap_context" (maybe now it is urgent?)

2019-12-02 Thread Ilya Dryomov
On Mon, Dec 2, 2019 at 10:27 AM Marc Roos wrote: > > > I have been asking before[1]. Since Nautilus upgrade I am having these, > with a total node failure as a result(?). Was not expecting this in my > 'low load' setup. Maybe now someone can help resolving this? I am also > waiting quite some time

[ceph-users] Re: ceph node crashed with these errors "kernel: ceph: build_snap_context" (maybe now it is urgent?)

2019-12-02 Thread Ilya Dryomov
On Mon, Dec 2, 2019 at 12:48 PM Marc Roos wrote: > > > Hi Ilya, > > > > > > >ISTR there were some anti-spam measures put in place. Is your account > >waiting for manual approval? If so, David should be able to help. > > Yes if I remember correctly I get waiting approval when I try to log in.

[ceph-users] Re: ceph node crashed with these errors "kernel: ceph: build_snap_context" (maybe now it is urgent?)

2019-12-02 Thread Ilya Dryomov
On Mon, Dec 2, 2019 at 1:23 PM Marc Roos wrote: > > > > I guess this is related? kworker 100% > > > [Mon Dec 2 13:05:27 2019] SysRq : Show backtrace of all active CPUs > [Mon Dec 2 13:05:27 2019] sending NMI to all CPUs: > [Mon Dec 2 13:05:27 2019] NMI backtrace for cpu 0 skipped: idling at pc

[ceph-users] Re: v14.2.5 Nautilus released

2019-12-10 Thread Ilya Dryomov
On Tue, Dec 10, 2019 at 10:45 AM Abhishek Lekshmanan wrote: > > This is the fifth release of the Ceph Nautilus release series. Among the many > notable changes, this release fixes a critical BlueStore bug that was > introduced > in 14.2.3. All Nautilus users are advised to upgrade to this release

[ceph-users] Re: Kworker 100% with ceph-msgr (after upgrade to 14.2.6?)

2020-01-14 Thread Ilya Dryomov
On Tue, Jan 14, 2020 at 10:31 AM Marc Roos wrote: > > > I think this is new since I upgraded to 14.2.6. kworker/7:3 100% > > [@~]# echo l > /proc/sysrq-trigger > > [Tue Jan 14 10:05:08 2020] CPU: 7 PID: 2909400 Comm: kworker/7:0 Not > tainted 3.10.0-1062.4.3.el7.x86_64 #1 > > [Tue Jan 14 10:05:08
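Two generic ways to see what a spinning ceph-msgr kworker is doing (the PID is the one quoted above):

    echo l > /proc/sysrq-trigger    # backtraces of all active CPUs land in dmesg
    cat /proc/2909400/stack         # current kernel stack of the busy worker thread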

[ceph-users] Re: CephFS with cache-tier kernel-mount client unable to write (Nautilus)

2020-01-23 Thread Ilya Dryomov
On Thu, Jan 23, 2020 at 2:36 PM Ilya Dryomov wrote: > > On Wed, Jan 22, 2020 at 6:18 PM Hayashida, Mami > wrote: > > > > Thanks, Ilya. > > > > I just tried modifying the osd cap for client.testuser by getting rid of > > "tag cephfs data=cephfs_test

[ceph-users] Re: cephfs : write error: Operation not permitted

2020-01-23 Thread Ilya Dryomov
On Wed, Jan 22, 2020 at 8:58 AM Yoann Moulin wrote: > > Hello, > > On a fresh install (Nautilus 14.2.6) deploy with ceph-ansible playbook > stable-4.0, I have an issue with cephfs. I can create a folder, I can > create empty files, but cannot write data on like I'm not allowed to write to > the

[ceph-users] Re: CephFS with cache-tier kernel-mount client unable to write (Nautilus)

2020-01-23 Thread Ilya Dryomov
On Thu, Jan 23, 2020 at 3:31 PM Hayashida, Mami wrote: > > Thanks, Ilya. > > First, I was not sure whether to post my question on @ceph.io or > @lists.ceph.com (I subscribe to both) -- should I use @ceph.io in the future? Yes. I got the following when I replied to your previous email: As you

[ceph-users] Re: cephfs : write error: Operation not permitted

2020-01-24 Thread Ilya Dryomov
On Fri, Jan 24, 2020 at 2:10 PM Yoann Moulin wrote: > > On 23.01.20 at 15:51, Ilya Dryomov wrote : > > On Wed, Jan 22, 2020 at 8:58 AM Yoann Moulin wrote: > >> > >> Hello, > >> > >> On a fresh install (Nautilus 14.2.6) deploy with ceph-ansible

[ceph-users] Re: cephfs : write error: Operation not permitted

2020-01-24 Thread Ilya Dryomov
On Sat, Jan 25, 2020 at 8:42 AM Ilya Dryomov wrote: > > On Fri, Jan 24, 2020 at 2:10 PM Yoann Moulin wrote: > > > > On 23.01.20 at 15:51, Ilya Dryomov wrote : > > > On Wed, Jan 22, 2020 at 8:58 AM Yoann Moulin wrote: > > >> > > >> Hello

[ceph-users] Re: cephfs : write error: Operation not permitted

2020-01-25 Thread Ilya Dryomov
On Fri, Jan 24, 2020 at 1:43 PM Frank Schilder wrote: > > Dear Ilya, > > I had exactly the same problem with authentication of cephfs clients on a > mimic-13.2.2 cluster. The key created with "ceph fs authorize ..." did not > grant access to the data pool. I ended up adding "rw" access to this p
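For comparison, the cap set that "ceph fs authorize" is meant to produce, sketched with filesystem "cephfs" and client.foo as placeholders:

    ceph fs authorize cephfs client.foo / rw
    ceph auth get client.foo
    # expected osd cap: 'allow rw tag cephfs data=cephfs' -- the cephfs application tag on the
    # data pool is what actually grants write access there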

[ceph-users] Re: kernel client osdc ops stuck and mds slow reqs

2020-01-31 Thread Ilya Dryomov
On Fri, Jan 31, 2020 at 11:06 AM Dan van der Ster wrote: > > Hi all, > > We are quite regularly (a couple times per week) seeing: > > HEALTH_WARN 1 clients failing to respond to capability release; 1 MDSs > report slow requests > MDS_CLIENT_LATE_RELEASE 1 clients failing to respond to capability r
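The kernel client exposes its in-flight requests through debugfs, which is usually the quickest way to see which OSD an op is stuck on (paths assume debugfs is mounted):

    cat /sys/kernel/debug/ceph/*/osdc    # outstanding OSD ops: tid, target osd, object, flags
    cat /sys/kernel/debug/ceph/*/mdsc    # outstanding MDS requests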

[ceph-users] Re: kernel client osdc ops stuck and mds slow reqs

2020-01-31 Thread Ilya Dryomov
On Fri, Jan 31, 2020 at 4:57 PM Dan van der Ster wrote: > > Hi Ilya, > > On Fri, Jan 31, 2020 at 11:33 AM Ilya Dryomov wrote: > > > > On Fri, Jan 31, 2020 at 11:06 AM Dan van der Ster > > wrote: > > > > > > Hi all, > > > >

[ceph-users] Re: kernel client osdc ops stuck and mds slow reqs

2020-02-03 Thread Ilya Dryomov
On Mon, Feb 3, 2020 at 10:38 AM Dan van der Ster wrote: > > On Fri, Jan 31, 2020 at 6:32 PM Ilya Dryomov wrote: > > > > On Fri, Jan 31, 2020 at 4:57 PM Dan van der Ster > > wrote: > > > > > > Hi Ilya, > > > > > > On Fri, Jan 31, 2020

[ceph-users] Re: centos7 / nautilus where to get kernel 5.5 from?

2020-02-14 Thread Ilya Dryomov
On Fri, Feb 14, 2020 at 3:19 PM Marc Roos wrote: > > > I have default centos7 setup with nautilus. I have been asked to install > 5.5 to check a 'bug'. Where should I get this from? I read that the > elrepo kernel is not compiled like rhel. Hi Marc, I'm not sure what you mean by "not compiled li

[ceph-users] Re: Extended security attributes on cephfs (nautilus) not working with kernel 5.3

2020-02-14 Thread Ilya Dryomov
On Fri, Feb 14, 2020 at 12:20 PM Stolte, Felix wrote: > > Hi guys, > > I am exporting cephfs with samba using the vfs acl_xattr which stores ntfs > acls in the security extended attributes. This works fine using cephfs kernel > mount wither kernel version 4.15. > > Using kernel 5.3 I cannot acce

[ceph-users] Re: ceph rbd volumes/images IO details

2020-03-09 Thread Ilya Dryomov
On Sun, Mar 8, 2020 at 5:13 PM M Ranga Swami Reddy wrote: > > Iam using the Luminous 12.2.11 version with prometheus. > > On Sun, Mar 8, 2020 at 12:28 PM XuYun wrote: > > > You can enable prometheus module of mgr if you are running Nautilus. > > > > > 2020年3月8日 上午2:15,M Ranga Swami Reddy 写道: > >
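For newer releases the mgr prometheus module can export per-image RBD stats; a sketch (the per-pool rbd stats knob postdates Luminous, and the pool and host names are placeholders):

    ceph mgr module enable prometheus
    ceph config set mgr mgr/prometheus/rbd_stats_pools <pool>
    curl -s http://<mgr-host>:9283/metrics | grep '^ceph_rbd_'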

[ceph-users] Re: Fwd: question on rbd locks

2020-04-08 Thread Ilya Dryomov
On Tue, Apr 7, 2020 at 6:49 PM Void Star Nill wrote: > > Hello All, > > Is there a way to specify that a lock (shared or exclusive) on an rbd > volume be released if the client machine becomes unreachable or > irresponsive? > > In one of our clusters, we use rbd locks on volumes to make sure provi

[ceph-users] Re: Fwd: Question on rbd maps

2020-04-08 Thread Ilya Dryomov
A note of caution, though. "rbd status" just lists watches on the image header object and a watch is not a reliable indicator of whether the image is mapped somewhere or not. It is true that all read-write mappings establish a watch, but it can come and go due to network partitions, OSD crashes o
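The corresponding commands, with pool/image as placeholders:

    rbd status <pool>/<image>     # watchers on the header object (best effort, per the caveat above)
    rbd lock ls <pool>/<image>    # advisory locks, if the workflow uses them
    rbd lock rm <pool>/<image> <lock-id> <locker>   # break a lock left behind by a dead client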

[ceph-users] Re: Fwd: question on rbd locks

2020-04-13 Thread Ilya Dryomov
orchestration happens in a distributed manner across all our >> compute nodes - so it is not easy to determine when we should kick out the >> dead connections and claim the lock. We need to intervene manually and >> resolve such issues as of now. So I am looking for a way to do

[ceph-users] Re: Fwd: Question on rbd maps

2020-04-13 Thread Ilya Dryomov
here a more deterministic way to know where the volumes are > mapped to? > > Thanks, > Shridhar > > > On Wed, 8 Apr 2020 at 03:06, Ilya Dryomov wrote: >> >> A note of caution, though. "rbd status" just lists watches on the >> image header object an

[ceph-users] Re: rbd device name reuse frequency

2020-04-20 Thread Ilya Dryomov
On Sat, Apr 18, 2020 at 6:53 AM Void Star Nill wrote: > > Hello, > > How frequently do RBD device names get reused? For instance, when I map a > volume on a client and it gets mapped to /dev/rbd0 and when it is unmapped, > does a subsequent map reuse this name right away? Yes. > > I ask this que
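Because the id can be recycled immediately, scripts are safer keying off the map output or the udev symlinks than off a remembered /dev/rbdN. A sketch with placeholder names:

    dev=$(rbd map <pool>/<image>)    # prints the assigned device, e.g. /dev/rbd0
    rbd showmapped                   # id -> pool/image -> device table
    ls -l /dev/rbd/<pool>/<image>    # stable udev symlink to the current device
    rbd unmap "$dev"                 # after this, the name may be reused right away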

[ceph-users] Re: quincy v17.2.8 QE Validation status

2024-11-04 Thread Ilya Dryomov
On Sat, Nov 2, 2024 at 4:21 PM Yuri Weinstein wrote: > > Ilya, > > rbd rerunning > > https://github.com/ceph/ceph/pull/60586/ merged and cherry-picked into > quincy-release rbd and krbd approved based on additional reruns: https://pulpito.ceph.com/dis-2024-11-04_17:34:41-rbd-quincy-release-distr

[ceph-users] Re: quincy v17.2.8 QE Validation status

2024-11-01 Thread Ilya Dryomov
On Fri, Nov 1, 2024 at 4:22 PM Yuri Weinstein wrote: > > Details of this release are summarized here: > > https://tracker.ceph.com/issues/68643#note-2 > Release Notes - TBD > > Seeking approvals/reviews for: > > rados - Laura, Radek, Travis, Ernesto, Adam King > > rgw - Casey > fs - Venky > orch -

[ceph-users] Re: KRBD: downside of setting alloc_size=4M for discard alignment?

2024-10-25 Thread Ilya Dryomov
On Fri, Oct 25, 2024 at 11:03 AM Friedrich Weber wrote: > > Hi, > > Some of our Proxmox VE users have noticed that a large fstrim inside a > QEMU/KVM guest does not free up as much space as expected on the backing > RBD image -- if the image is mapped on the host via KRBD and passed to > QEMU as a
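For reference, alloc_size is a krbd map option (in bytes) that governs how discards are aligned and rounded; a sketch with placeholder names:

    rbd map -o alloc_size=4194304 <pool>/<image>
    cat /sys/block/rbd0/queue/discard_granularity   # should reflect the chosen alignment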

[ceph-users] Re: squid 19.2.1 RC QE validation status

2024-12-29 Thread Ilya Dryomov
On Fri, Dec 27, 2024 at 5:31 PM Yuri Weinstein wrote: > > Hello and Happy Holidays all! > > We have merged several PRs (mostly in rgw and rbd areas) and I built a > new build 2 (rebase) > > https://tracker.ceph.com/issues/69234#note-1 > > Please provide trackers for failures so we avoid duplicates

[ceph-users] Re: squid 19.2.1 RC QE validation status

2024-12-18 Thread Ilya Dryomov
On Mon, Dec 16, 2024 at 6:27 PM Yuri Weinstein wrote: > > Details of this release are summarized here: > > https://tracker.ceph.com/issues/69234#note-1 > > Release Notes - TBD > LRC upgrade - TBD > Gibba upgrade -TBD > > Please provide tracks for failures so we avoid duplicates. > Seeking approval

[ceph-users] Re: CRC Bad Signature when using KRBD

2024-12-13 Thread Ilya Dryomov
On Thu, Dec 12, 2024 at 5:37 PM Friedrich Weber wrote: > > Hi Ilya, > > some of our Proxmox VE users also report they need to enable rxbounce to > avoid their Windows VMs triggering these errors, see e.g. [1]. With > rxbounce, everything seems to work smoothly, so thanks for adding this > option.
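For context, rxbounce is a krbd map option that receives data into a bounce buffer before copying it into place, avoiding spurious CRC/signature errors when the guest rewrites pages that are still in flight (common with Windows VMs). A sketch with placeholder names:

    rbd map -o rxbounce <pool>/<image>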

[ceph-users] Re: reef 18.2.5 QE validation status

2025-03-26 Thread Ilya Dryomov
On Mon, Mar 24, 2025 at 10:40 PM Yuri Weinstein wrote: > > Details of this release are summarized here: > > https://tracker.ceph.com/issues/70563#note-1 > Release Notes - TBD > LRC upgrade - TBD > > Seeking approvals/reviews for: > > smoke - Laura approved? > > rados - Radek, Laura approved? Travi

[ceph-users] Re: reef 18.2.7 hotfix QE validation status

2025-05-05 Thread Ilya Dryomov
On Fri, May 2, 2025 at 2:03 AM Yuri Weinstein wrote: > > Details of this release are summarized here: > > https://tracker.ceph.com/issues/71166 > Release Notes - https://github.com/ceph/ceph/pull/63090 > LRC upgrade - N/A > > Seeking approvals/reviews for: > > smoke - same as in 18.2.5 > rados - L

[ceph-users] Re: rbd commands don't return to prompt

2025-04-25 Thread Ilya Dryomov
On Thu, Apr 24, 2025 at 11:29 PM Dominique Ramaekers wrote: > > Hi, > > Weird thing happened with my ceph. I've got nice nightly scripts (bash > scripts) for making backups, snapshots, cleaning up etc... Starting from my > last upgrade to ceph v19.2.2 my scripts hang during execution. The rbd ma

[ceph-users] Re: rbd commands don't return to prompt

2025-04-25 Thread Ilya Dryomov
On Fri, Apr 25, 2025 at 1:02 PM Dominique Ramaekers wrote: > > Hi Ilya, > > Thanks for the tip! Altough it seems less 'good practice' and I'm worried > about stability because ceph gives this output: > rbd: mapping succeeded but /dev/rbd0 is not accessible, is host /dev mounted? > In some cases u

[ceph-users] Re: ceph rbd mirror snapshot-based does not work if mirror-mode is pool and not image

2025-06-19 Thread Ilya Dryomov
On Thu, Jun 19, 2025 at 9:54 AM Julien Laurenceau wrote: > > Thanks Ilya, > > Yeah maybe it should be made more explicit in the doc, but from the user > point of view it's odd to have this limitation. > > Regarding the kubernetes-csi-addons that enable to do VolumeReplication , > promote, demote

[ceph-users] Re: *** Spam *** Re: v18.2.7 Reef released

2025-06-16 Thread Ilya Dryomov
On Tue, Jun 3, 2025 at 7:08 AM Sake Ceph wrote: > > This is also the case with us for Cephfs clients. We use the kernel mount, > but still you need to install ceph-common. Which isn't possible with the > latest version of Reef and thus those are stuck on 18.2.4. > Or am I doing something wrong?

[ceph-users] Re: ceph rbd mirror snapshot-based does not work if mirror-mode is pool and not image

2025-06-18 Thread Ilya Dryomov
On Wed, Jun 18, 2025 at 7:29 PM Julien Laurenceau wrote: > > Hi, > > I have 2 running ceph squid clusters (19.2.2). > On each cluster there is an rbd pool named k8s-1 that I want to mirror > using the snapshot based mode. > > if I configure both clusters in two way mirroring using the mode=image
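For reference, snapshot-based mirroring is enabled per image, so the pool goes into "image" mode first; a sketch using the pool name from the thread and a placeholder image:

    rbd mirror pool enable k8s-1 image
    rbd mirror image enable k8s-1/<image> snapshot
    rbd mirror snapshot schedule add --pool k8s-1 1h   # optional periodic mirror snapshots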

[ceph-users] Re: squid 19.2.3 QE validation status

2025-07-03 Thread Ilya Dryomov
On Thu, Jul 3, 2025 at 4:56 PM Yuri Weinstein wrote: > > Corrected the subject line > > On Thu, Jul 3, 2025 at 7:36 AM Yuri Weinstein wrote: > > > > Details of this release are summarized here: > > > > https://tracker.ceph.com/issues/71912#note-1 > > > > Release Notes - TBD > > LRC upgrade - TBD

[ceph-users] Re: Question about object maps and index rebuilding.

2025-07-03 Thread Ilya Dryomov
when the volume was not usable, but > nothing evident. Are you saying that "rbd lock ls" on the image immediately after powering on the hypervisor produces no output? Thanks, Ilya > > Cheers, > Gary > > > > On 2025-06-26 9:33 a.m., Ilya Dryomo

[ceph-users] Re: Question about object maps and index rebuilding.

2025-07-03 Thread Ilya Dryomov
On Wed, Jul 2, 2025 at 1:36 PM Gary Molenkamp wrote: > > I confirmed and can consistently replicate the failure event that forces > the object-map rebuild. > > If the VM is terminated cleanly, such as a hypervisor reboot, then the > VMs and their rbd volumes are all well. > If the hypervisor goes
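The commands involved, with pool/image as placeholders:

    rbd info <pool>/<image> | grep flags    # "object map invalid" shows up here after a dirty shutdown
    rbd object-map rebuild <pool>/<image>
    rbd lock ls <pool>/<image>              # check whether a stale exclusive-lock holder is still listed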

[ceph-users] Re: squid 19.2.3 QE validation status

2025-07-03 Thread Ilya Dryomov
On Thu, Jul 3, 2025 at 6:37 PM Yuri Weinstein wrote: > > Hi Ilya > > Rerun scheduled rbd approved. Thanks, Ilya ___ ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an email to ceph-users-le...@ceph.io

[ceph-users] Re: Question about object maps and index rebuilding.

2025-06-26 Thread Ilya Dryomov
On Tue, Jun 24, 2025 at 11:19 PM Gary Molenkamp wrote: > > We use ceph rbd as a volume service for both an Openstack deployment and > a series of Proxmox servers. This ceph deployment started as a Hammer > release and has been upgraded over the years to where it is now running > Quincy. It has be

[ceph-users] Re: Rocky8 (el8) client for squid 19.2.2

2025-07-22 Thread Ilya Dryomov
On Tue, Jul 22, 2025 at 4:54 PM Dan O'Brien wrote: > > Ilya Dryomov wrote: > > Have you tried loading the module wih "modprobe ceph"? > > > > Thanks, > > > > Ilya > > I had not! That did the trick, at least partly. The mod

[ceph-users] Re: Rocky8 (el8) client for squid 19.2.2

2025-07-22 Thread Ilya Dryomov
On Mon, Jul 21, 2025 at 10:03 PM Dan O'Brien wrote: > > Malte Stroem wrote: > > there is no need for ceph-common. > > > > You can mount the CephFS with the mount command because the Ceph kernel > > client is part of the kernel for a long time now. > > > > mount -t cephfs... > > > > just works. > >
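A minimal kernel-mount sketch without the mount.ceph helper; the monitor address, client name and key are placeholders, and secret= passes the base64 key directly because secretfile= is handled by the helper shipped in ceph-common:

    modprobe ceph
    mount -t ceph 10.0.0.1:6789:/ /mnt/cephfs -o name=foo,secret=AQB0example==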

[ceph-users] Re: tentacle 20.1.0 RC QE validation status

2025-08-01 Thread Ilya Dryomov
On Tue, Jul 29, 2025 at 10:25 PM Yuri Weinstein wrote: > > Details of this release are summarized here: > > https://tracker.ceph.com/issues/72316#note-1 > > Release Notes - TBD > LRC upgrade - TBD > > Dev Leads, please review the list of suites for completeness, as this is a > new release. > > Se
