[ceph-users] Dead node (watcher) won't timeout on RBD

2023-04-25 Thread max
client / watcher from ceph (e.g. switching the mgr / mon) or to see why this is not timing out? I found some historical mails & issues (related to rook, which I don't use) regarding a param `osd_client_watch_timeout` but can't find how th
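
A minimal CLI sketch of the usual workaround, assuming a hypothetical image (rbd/myimage) and client address: list the watchers on the image, blocklist the dead client's address so its watch is dropped, and check the watch timeout the OSDs apply:

    # show current watchers on the image (address below is hypothetical)
    rbd status rbd/myimage
    # drop the stale watcher by blocklisting its address (named "blacklist" before Pacific)
    ceph osd blocklist add 192.168.1.10:0/123456789
    # inspect the effective watch timeout on the OSDs
    ceph config get osd osd_client_watch_timeout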

[ceph-users] Re: Worst thing that can happen if I have size= 2

2021-02-03 Thread Max Krasilnikov
Good day! Wed, Feb 03, 2021 at 09:29:52AM +, Magnus.Hagdorn wrote: > if an OSD becomes unavailable (broken disk, rebooting server) then all > I/O to the PGs stored on that OSD will block until replication level of > 2 is reached again. So, for a highly available cluster you need a > repli
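
For context, a minimal sketch with a hypothetical pool name showing how the relevant replication settings are inspected and raised to the commonly recommended size=3 / min_size=2:

    ceph osd pool get mypool size
    ceph osd pool get mypool min_size
    # go to 3 replicas; expect rebalancing and higher raw usage
    ceph osd pool set mypool size 3
    # keep serving I/O only while at least 2 copies are up to date
    ceph osd pool set mypool min_size 2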

[ceph-users] Re: Increasing QD=1 performance (lowering latency)

2021-02-12 Thread Max Krasilnikov
Good day! Thu, Feb 11, 2021 at 04:00:31PM +0100, joachim.kraftmayer wrote: > Hi Wido, > > do you know what happened to Mellanox's Ceph RDMA project of 2018? We tested ceph/rdma on Mellanox ConnectX-4 Lx for one year and saw no visible benefits. But there were strange connection outages bet

[ceph-users] Re: Suitable 10G Switches for ceph storage - any recommendations?

2021-05-19 Thread Max Vernimmen
and other issues. So I'm using Mellanox cards instead, but Broadcom should also work. Hope it helps! Best regards, Max On Wed, May 19, 2021 at 1:48 PM wrote: > -- Forwarded message -- > From: Hermann Himmelbauer > To: ceph-us...@ceph.com > Cc: > Bcc: >

[ceph-users] Re: Debian 12 (bookworm) / Reef 18.2.1 problems

2024-01-18 Thread Max Carrara
On 1/17/24 20:49, Chris Palmer wrote: > On 17/01/2024 16:11, kefu chai wrote: >> On Tue, Jan 16, 2024 at 12:11 AM Chris Palmer wrote: >> Updates on both problems: >> Problem 1 >> -- >> The bookworm/reef cephadm package needs updating to accommodate

[ceph-users] Dead node (watcher) won't timeout on RBD

2023-04-15 Thread Max Boone
If it is a bug, I'm happy to help figure out its root cause and see if I can help write a fix. Cheers, Max.

[ceph-users] Re: ceph-dashboard python warning with new pyo3 0.17 lib (debian12)

2023-09-05 Thread Max Carrara
Hello there, could you perhaps provide some more information on how (or where) this got fixed? It doesn't seem to be fixed yet on the latest Ceph Quincy and Reef versions, but maybe I'm mistaken. I've provided some more context regarding this below, in case that helps. On Ceph Quincy 17.2.6 I'm

[ceph-users] Re: ceph-dashboard python warning with new pyo3 0.17 lib (debian12)

2023-10-11 Thread Max Carrara
On 9/5/23 16:53, Max Carrara wrote: > Hello there, > > could you perhaps provide some more information on how (or where) this > got fixed? It doesn't seem to be fixed yet on the latest Ceph Quincy > and Reef versions, but maybe I'm mistaken. I've provided some more

[ceph-users] Re: CEPH failure domain - power considerations

2020-05-29 Thread Max Krasilnikov
Hello! Fri, May 29, 2020 at 09:58:58AM +0200, pr wrote: > Hans van den Bogert (hansbogert) writes: > > I would second that, there's no winning in this case for your requirements > > and single PSU nodes. If there were 3 feeds,  then yes; you could make an > > extra layer in your crushmap much l
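
As a rough illustration of the "extra layer in your crushmap" idea, a hedged sketch (bucket, host, and pool names are hypothetical) that models each power feed as a CRUSH bucket, reusing the existing "rack" type, and replicates across feeds:

    # one bucket per power feed, hosts moved underneath
    ceph osd crush add-bucket feed-a rack
    ceph osd crush add-bucket feed-b rack
    ceph osd crush move feed-a root=default
    ceph osd crush move feed-b root=default
    ceph osd crush move node1 rack=feed-a
    ceph osd crush move node2 rack=feed-b
    # replicate across feeds instead of across hosts
    ceph osd crush rule create-replicated rep-per-feed default rack
    ceph osd pool set mypool crush_rule rep-per-feed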

[ceph-users] Re: Infiniband support

2020-08-27 Thread Max Krasilnikov
Good day! Wed, Aug 26, 2020 at 10:08:57AM -0300, quaglio wrote: >Hi, > I could not see in the doc if Ceph has InfiniBand support. Is there >someone using it? > Also, is there any RDMA support working natively? > > Can anyone point me to where to find more info

[ceph-users] Re: Erasure coding RBD pool for OpenStack

2020-08-28 Thread Max Krasilnikov
Hello! Fri, Aug 28, 2020 at 04:05:55PM +0700, mrxlazuardin wrote: > Hi Konstantin, > > I hope you or anybody still follows this old thread. > > Can this EC data pool be configured per pool, not per client? If we follow > https://docs.ceph.com/docs/master/rbd/rbd-openstack/ we may see that ci

[ceph-users] Re: Erasure coding RBD pool for OpenStack

2020-08-29 Thread Max Krasilnikov
Hello! Fri, Aug 28, 2020 at 09:18:05PM +0700, mrxlazuardin wrote: > Hi Max, > > Would you mind sharing some config examples? What happens if we create an > instance which boots with a newly created or existing volume? In cinder.conf: [ceph] volume_driver = cinder.volume.drivers.r
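
A hedged sketch of such a setup with hypothetical pool and user names: Cinder's RBD driver points at a replicated pool, and the client-side option rbd_default_data_pool redirects the image data to an EC pool (which needs overwrites enabled):

    # one-time on the EC pool
    ceph osd pool set volumes.ec allow_ec_overwrites true

    # cinder.conf
    [ceph]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    rbd_pool = volumes
    rbd_user = cinder
    rbd_ceph_conf = /etc/ceph/ceph.conf

    # ceph.conf on the cinder-volume host
    [client.cinder]
    rbd_default_data_pool = volumes.ec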

[ceph-users] Re: Erasure coding RBD pool for OpenStack

2020-08-29 Thread Max Krasilnikov
Good day! Sat, Aug 29, 2020 at 10:19:12PM +0700, mrxlazuardin wrote: > Hi Max, > > I see, it is very helpful and inspiring, thank you for that. I assume that > you use the same way for Nova ephemeral (nova user to vms pool). As of now I don't use any non-cinder volumes i

[ceph-users] Re: Erasure coding RBD pool for OpenStack

2020-08-30 Thread Max Krasilnikov
Hello! Mon, Aug 31, 2020 at 01:06:13AM +0700, mrxlazuardin wrote: > Hi Max, > > As far as I know, cross access of Ceph pools is needed for the copy-on-write > feature which enables fast cloning/snapshotting. For example, the nova and > cinder users need read access to the images pool to do cop
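
For reference, the kind of cross-pool caps this implies; a hedged sketch with hypothetical pool names, close to what the rbd-openstack guide suggests:

    ceph auth caps client.cinder mon 'profile rbd' \
        osd 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd-read-only pool=images'
    ceph auth caps client.nova mon 'profile rbd' \
        osd 'profile rbd pool=vms, profile rbd-read-only pool=images'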

[ceph-users] Re: ceph's replicas question

2019-08-27 Thread Max Krasilnikov
Hello! Sat, Aug 24, 2019 at 10:47:55PM +0200, wido wrote: > > Op 24 aug. 2019 om 16:36 heeft Darren Soothill > > het volgende geschreven: > > > > So can you do it. > > > > Yes you can. > > > > Should you do it is the bigger question. > > > > So my first question would be what type of driv

[ceph-users] Re: RDMA

2019-10-15 Thread Max Krasilnikov
Hello! Mon, Oct 14, 2019 at 07:28:07AM -, gabryel.mason-williams wrote: > Hello, > > I was wondering what user experience was with using Ceph over RDMA? > - How you set it up? We had used RoCE Lag with Mellanox ConnectX-4 Lx. > - Documentation used to set it up? Generally, Mellano
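
For reference, the messenger options typically involved; a hedged ceph.conf sketch, where the device name is hypothetical and depends on the NIC/bond, and note that all daemons and clients on that network must speak RDMA once this is set:

    [global]
    # switch the async messenger to the RDMA backend
    ms_type = async+rdma
    # RDMA device to bind (value shown is hypothetical)
    ms_async_rdma_device_name = mlx5_bond_0
    ms_async_rdma_polling_us = 0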

[ceph-users] Re: RDMA

2019-10-15 Thread Max Krasilnikov
Good day! Tue, Oct 15, 2019 at 02:29:58PM +0300, vitalif wrote: > Wow, does it really work? > > And why is it not supported by RBD? I haven't dived into the sources, but it is stated in the docs. > > Can you show us the latency graphs before and after and tell the I/O pattern > to which the latency

[ceph-users] Re: RMDA Bug?

2019-10-28 Thread Max Krasilnikov
Good day! Sat, Oct 26, 2019 at 01:04:28AM +0800, changcheng.liu wrote: > What's your ceph version? Have you verified whether the problem could be > reproduced on the master branch? It may be a Jumbo Frames related bug. I had completely disabled JF in order to use RDMA over Ethernet

[ceph-users] Re: Restrict client access to a certain rbd pool with seperate metadata and data pool

2020-03-03 Thread Max Krasilnikov
Hello! AFAIK, you have to access the replicated pool with the default data pool pointing to the EC pool, like this: [client.user] rbd_default_data_pool = pool.ec Now you can access pool.rbd, but the actual data will be placed on pool.ec. Maybe there is another way to specify the default data pool for using EC+Replicat
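
Expanding that snippet into a hedged end-to-end sketch (pool and user names are hypothetical):

    # ceph.conf on the client
    [client.user]
    rbd_default_data_pool = pool.ec

    # equivalent explicit form at image-creation time
    rbd create --size 100G --data-pool pool.ec pool.rbd/image1

    # the client needs caps on both the replicated pool and the EC data pool
    ceph auth caps client.user mon 'profile rbd' \
        osd 'profile rbd pool=pool.rbd, profile rbd pool=pool.ec'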

[ceph-users] Re: [External Email] ceph ignoring cluster/public_network when initiating TCP connections

2020-03-23 Thread Max Krasilnikov
Good day! Mon, Mar 23, 2020 at 05:21:37PM +1300, droopanu wrote: > Hi Dave, > > Thank you for the answer. > > Unfortunately the issue is that ceph uses the wrong source IP address, and > sends the traffic on the wrong interface anyway. > It would be good if ceph could actually set the source
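
Not a Ceph-side fix, but a hedged workaround sketch (addresses and interface name are hypothetical): give the kernel an explicit source hint for the cluster network so outgoing connections pick the intended address and interface:

    ip route replace 10.10.0.0/24 dev ens2f1 src 10.10.0.5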

[ceph-users] Changing the failure-domain of an erasure coded pool

2020-02-13 Thread Neukum, Max (ETP)
do this... what is the best way? For the record: our cluster is now (after the upgrade) ~40% full (400 TB / 1 PB) with 173 OSDs. Cheers, Max. Some more details:
[root@ceph-node-a ~]# ceph osd lspools
1 ec42
2 cephfs_metadata
[root@ceph-node-a ~]# ceph osd pool get ec42 erasure_code_profile
era
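
A hedged sketch of the usual approach, with hypothetical profile/rule names and k/m values guessed from the pool name: the erasure-code profile of an existing pool cannot be changed in place, but the failure domain can be switched by creating a new CRUSH rule and pointing the pool at it:

    # new profile only used to generate a rule with the desired failure domain
    ceph osd erasure-code-profile set ec42-host k=4 m=2 crush-failure-domain=host
    ceph osd crush rule create-erasure ec42-host-rule ec42-host
    # repoint the existing pool; expect a large rebalance
    ceph osd pool set ec42 crush_rule ec42-host-rule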

[ceph-users] Re: Changing the failure-domain of an erasure coded pool

2020-02-13 Thread Neukum, Max (ETP)
This is good news! Thanks for the fast reply. We will now wait for Ceph to place all objects correctly and then check if we are happy with the setup. Cheers Max From: Paul Emmerich Sent: Thursday, February 13, 2020 2:54 PM To: Neukum, Max (ETP) Cc: ceph