ient / watcher from ceph (e.g.
switching the mgr / mon) or to see why this is not timing out?
I found some historical mails & issues (related to rook, which I don't use)
regarding a param `osd_client_watch_timeout` but can't find how th
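For reference, what I have been poking at so far looks roughly like this (pool/image
names are placeholders, the timeout value is just an example):

# list the watchers currently registered on an RBD image
rbd status mypool/myimage
# blocklist the stuck client using the address reported above
ceph osd blocklist add 192.168.0.10:0/123456789
# and the knob mentioned above, if it really is the right one (seconds):
ceph config set osd osd_client_watch_timeout 30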
Good day!
Wed, Feb 03, 2021 at 09:29:52AM +, Magnus.Hagdorn wrote:
> if an OSD becomes unavailable (broken disk, rebooting server) then all
> I/O to the PGs stored on that OSD will block until replication level of
> 2 is reached again. So, for a highly available cluster you need a
> repli
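As an illustration, the usual pattern for a highly available setup is size=3 with
min_size=2, so the loss of a single OSD does not block I/O (pool name is a placeholder):

ceph osd pool set mypool size 3
ceph osd pool set mypool min_size 2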
Good day!
Thu, Feb 11, 2021 at 04:00:31PM +0100, joachim.kraftmayer wrote:
> Hi Wido,
>
> do you know what happened to Mellanox's Ceph RDMA project of 2018?
We tested ceph/rdma on Mellanox ConnectX-4 Lx for one year and saw no visible
benefits. But there were strange connection outages bet
and other issues.
So I'm using Mellanox cards instead, but Broadcom should also work.
hope it helps!
best regards,
Max
On Wed, May 19, 2021 at 1:48 PM wrote:
> -- Forwarded message --
> From: Hermann Himmelbauer
> To: ceph-us...@ceph.com
On 1/17/24 20:49, Chris Palmer wrote:
>
>
> On 17/01/2024 16:11, kefu chai wrote:
>>
>>
>> On Tue, Jan 16, 2024 at 12:11 AM Chris Palmer wrote:
>>
>> Updates on both problems:
>>
>> Problem 1
>> --
>>
>> The bookworm/reef cephadm package needs updating to accommodate
If it is a bug, I'm happy to help figure out its root cause and see if I
can help write a fix. Cheers, Max.
Hello there,
could you perhaps provide some more information on how (or where) this
got fixed? It doesn't seem to be fixed yet on the latest Ceph Quincy
and Reef versions, but maybe I'm mistaken. I've provided some more
context regarding this below, in case that helps.
On Ceph Quincy 17.2.6 I'm
On 9/5/23 16:53, Max Carrara wrote:
> Hello there,
>
> could you perhaps provide some more information on how (or where) this
> got fixed? It doesn't seem to be fixed yet on the latest Ceph Quincy
> and Reef versions, but maybe I'm mistaken. I've provided some more
Hello!
Fri, May 29, 2020 at 09:58:58AM +0200, pr wrote:
> Hans van den Bogert (hansbogert) writes:
> > I would second that, there's no winning in this case for your requirements
> > and single PSU nodes. If there were 3 feeds, then yes; you could make an
> > extra layer in your crushmap much l
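For illustration, such an extra layer could be modelled roughly like this, e.g. by
(ab)using the rack level for the power feeds; all names are made up:

# one bucket per power feed, placed under the default root
ceph osd crush add-bucket feed-a rack
ceph osd crush add-bucket feed-b rack
ceph osd crush move feed-a root=default
ceph osd crush move feed-b root=default
# hang each host under the feed that powers it
ceph osd crush move node1 rack=feed-a
ceph osd crush move node2 rack=feed-b
# replicate across feeds instead of across hosts
ceph osd crush rule create-replicated by-feed default rack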
Good day!
Wed, Aug 26, 2020 at 10:08:57AM -0300, quaglio wrote:
>Hi,
> I could not see in the doc if Ceph has infiniband support. Is there
>someone using it?
> Also, is there any rdma support working natively?
>
> Can anyone point me where to find more info
Hello!
Fri, Aug 28, 2020 at 04:05:55PM +0700, mrxlazuardin wrote:
> Hi Konstantin,
>
> I hope you or anybody still follows this old thread.
>
> Can this EC data pool be configured per pool, not per client? If we follow
> https://docs.ceph.com/docs/master/rbd/rbd-openstack/ we may see that ci
Hello!
Fri, Aug 28, 2020 at 09:18:05PM +0700, mrxlazuardin wrote:
> Hi Max,
>
> Would you mind sharing some config examples? What happens if we create an
> instance which boots with a newly created or existing volume?
In cinder.conf:
[ceph]
volume_driver = cinder.volume.drivers.r
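A fuller sketch of what such an RBD backend section typically contains (option names
are the standard Cinder ones, values are placeholders):

[ceph]
volume_backend_name = ceph
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_user = cinder
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_secret_uuid = 00000000-0000-0000-0000-000000000000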
Good day!
Sat, Aug 29, 2020 at 10:19:12PM +0700, mrxlazuardin wrote:
> Hi Max,
>
> I see, it is very helpful and inspiring, thank you for that. I assume that
> you use the same approach for Nova ephemeral (nova user to vms pool).
As for now I don't use any non-cinder volumes i
Hello!
Mon, Aug 31, 2020 at 01:06:13AM +0700, mrxlazuardin wrote:
> Hi Max,
>
> As far as I know, cross access of Ceph pools is needed for the copy-on-write
> feature which enables fast cloning/snapshotting. For example, the nova and
> cinder users need read access to the images pool to do cop
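For reference, the cap layout from the rbd-openstack guide looks roughly like this
(pool names are the usual examples from the docs):

ceph auth caps client.glance mon 'profile rbd' osd 'profile rbd pool=images'
ceph auth caps client.cinder mon 'profile rbd' \
    osd 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd-read-only pool=images'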
Hello!
Sat, Aug 24, 2019 at 10:47:55PM +0200, wido wrote:
> > On 24 Aug 2019 at 16:36, Darren Soothill
> > wrote the following:
> >
> > So can you do it?
> >
> > Yes you can.
> >
> > Should you do it is the bigger question.
> >
> > So my first question would be what type of driv
Hello!
Mon, Oct 14, 2019 at 07:28:07AM -, gabryel.mason-williams wrote:
> Hello,
>
> I was wondering what the user experience was with using Ceph over RDMA?
> - How did you set it up?
We used RoCE LAG with Mellanox ConnectX-4 Lx.
> - Documentation used to set it up?
Generally, Mellano
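For anyone curious, a minimal ceph.conf sketch for RDMA messaging looks roughly like
this (the device name depends on the NIC/bond, so treat it as a placeholder):

[global]
ms_type = async+rdma
ms_async_rdma_device_name = mlx5_bond_0
# for RoCE the GID of the bonded interface may also need to be set, e.g.:
# ms_async_rdma_local_gid = <gid of the RoCE interface>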
Good day!
Tue, Oct 15, 2019 at 02:29:58PM +0300, vitalif wrote:
> Wow, does it really work?
>
> And why is it not supported by RBD?
I haven't dived into the sources, but it is stated in the docs.
>
> Can you show us the latency graphs before and after and tell the I/O pattern
> to which the latency
Good day!
Sat, Oct 26, 2019 at 01:04:28AM +0800, changcheng.liu wrote:
> What's your ceph version? Have you verified whether the problem could be
> reproduced on master branch?
As an option, it could be a Jumbo Frames-related bug. I had completely disabled JF
in order to use RDMA over Ethernet
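For reference, whether JF are in play can be checked via the interface MTU (interface
name is a placeholder):

ip link show dev ens1f0 | grep -o 'mtu [0-9]*'
# drop back to the standard MTU to rule JF out
ip link set dev ens1f0 mtu 1500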
Hello!
AFAIK, you have to access the replicated pool with the default data pool pointing to the EC
pool, like this:
[client.user]
rbd_default_data_pool = pool.ec
Now you can access pool.rbd, but the actual data will be placed on pool.ec.
Maybe there is another way to specify the default data pool for using EC+Replicat
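For completeness, the whole setup would look roughly like this (pool names match the
ones above, pg counts and image size are just examples):

ceph osd pool create pool.ec 64 64 erasure
ceph osd pool set pool.ec allow_ec_overwrites true
rbd pool init pool.rbd
# metadata stays in pool.rbd, data objects land in pool.ec
rbd create --size 10G --data-pool pool.ec pool.rbd/myimage
# or, with rbd_default_data_pool set as above, a plain 'rbd create' does the same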
Good day!
Mon, Mar 23, 2020 at 05:21:37PM +1300, droopanu wrote:
> Hi Dave,
>
> Thank you for the answer.
>
> Unfortunately the issue is that ceph uses the wrong source IP address, and
> sends the traffic on the wrong interface anyway.
> It would be good if ceph could actually set the source
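What sometimes helps here is pinning the networks (or per-daemon addresses) explicitly
in ceph.conf, so the daemons at least bind where you expect; subnets are placeholders:

[global]
public_network = 192.168.10.0/24
cluster_network = 192.168.20.0/24
# or per daemon:
# [osd.0]
# public_addr = 192.168.10.5
# cluster_addr = 192.168.20.5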
do this... what is the
best way? For the record: our cluster is now (after the upgrade) ~40% full
(400 TB / 1 PB) with 173 OSDs.
Cheers,
Max
some more details:
[root@ceph-node-a ~]# ceph osd lspools
1 ec42
2 cephfs_metadata
[root@ceph-node-a ~]# ceph osd pool get ec42 erasure_code_profile
era
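The profile itself can then be dumped with the following (the profile name is whatever
the previous command prints):

ceph osd erasure-code-profile ls
ceph osd erasure-code-profile get <profile-name>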
This is good news! Thanks for the fast reply.
We will now wait for Ceph to place all objects correctly and then check if we
are happy with the setup.
Cheers
Max
From: Paul Emmerich
Sent: Thursday, February 13, 2020 2:54 PM
To: Neukum, Max (ETP)
Cc: ceph