Good day!
Wed, Feb 03, 2021 at 09:29:52AM +, Magnus.Hagdorn wrote:
> if an OSD becomes unavailable (broken disk, rebooting server) then all
> I/O to the PGs stored on that OSD will block until a replication level of
> 2 is reached again. So, for a highly available cluster you need a
> repli
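The blocking behaviour described above is governed by the pool's size/min_size
settings; a minimal sketch (the pool name "rbd" is only an example) of the
usual highly-available layout:

  # keep three copies, keep serving I/O while only two are up
  ceph osd pool set rbd size 3
  ceph osd pool set rbd min_size 2

With size=2/min_size=2 a single failed OSD drops a PG below min_size and I/O
blocks until recovery, which is why three replicas are usually recommended.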
Good day!
Thu, Feb 11, 2021 at 04:00:31PM +0100, joachim.kraftmayer wrote:
> Hi Wido,
>
> do you know what happened to mellanox's ceph rdma project of 2018?
We tested ceph/rdma on Mellanox ConnectX-4 Lx for a year and saw no visible
benefits. But there were strange connection outages bet
Hello!
Fri, May 29, 2020 at 09:58:58AM +0200, pr wrote:
> Hans van den Bogert (hansbogert) writes:
> > I would second that, there's no winning in this case for your requirements
> > and single PSU nodes. If there were 3 feeds, then yes; you could make an
> > extra layer in your crushmap much l
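The "extra layer" mentioned above maps to an extra CRUSH bucket level; a hedged
sketch using the stock "pdu" bucket type (the feed and host names are
hypothetical):

  # one bucket per power feed, under the default root
  ceph osd crush add-bucket feed-a pdu
  ceph osd crush add-bucket feed-b pdu
  ceph osd crush move feed-a root=default
  ceph osd crush move feed-b root=default
  # hang each host under the feed that powers it
  ceph osd crush move host1 pdu=feed-a
  ceph osd crush move host2 pdu=feed-b
  # replicate across feeds instead of across hosts
  ceph osd crush rule create-replicated by-feed default pdu

As the quoted mail points out, this only really helps once there are three
feeds to spread three replicas across.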
Good day!
Wed, Aug 26, 2020 at 10:08:57AM -0300, quaglio wrote:
>Hi,
> I could not see in the doc if Ceph has infiniband support. Is there
>someone using it?
> Also, is there any rdma support working natively?
>
> Can anyone point me where to find more info
Hello!
Fri, Aug 28, 2020 at 04:05:55PM +0700, mrxlazuardin wrote:
> Hi Konstantin,
>
> I hope you or anybody still follows this old thread.
>
> Can this EC data pool be configured per pool, not per client? If we follow
> https://docs.ceph.com/docs/master/rbd/rbd-openstack/ we may see that ci
The metadata pool is quite small: it is 1.8 MiB used, while the data pool is 279 GiB
used. Your particular sizes may differ, but not by much.
> Best regards,
>
>
> On Fri, Aug 28, 2020 at 5:27 PM Max Krasilnikov
> wrote:
>
> > Hello!
> >
> > Fri, Aug 28, 2
accessing data, databases and files directly.
Any of them may be deployed standalone. The only glue between them is Keystone.
> On Sat, Aug 29, 2020 at 2:21 PM Max Krasilnikov
> wrote:
>
> > Hello!
> >
> > Fri, Aug 28, 2020 at 09:18:05PM +0700, mrxlazuardin wrote:
>
e you
> are right that cross writing is API based. What do you think?
I've used the image volume cache, as creating volumes from images is common:
https://docs.openstack.org/cinder/latest/admin/blockstorage-image-volume-cache.html
Cinder-backup, AFAIR, uses snapshots too.
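The cache is enabled per backend in cinder.conf; a minimal sketch (the backend
name "rbd-1", the internal-tenant IDs and the limits are only examples):

  [DEFAULT]
  cinder_internal_tenant_project_id = <project uuid>
  cinder_internal_tenant_user_id = <user uuid>

  [rbd-1]
  image_volume_cache_enabled = True
  image_volume_cache_max_size_gb = 200
  image_volume_cache_max_count = 50

The first volume created from a given image is kept as a cached image-volume,
and later volumes are cloned from it instead of fetching the image again.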
> On Sun, Aug 30,
Hello!
Sat, Aug 24, 2019 at 10:47:55PM +0200, wido wrote:
> > On 24 Aug 2019 at 16:36, Darren Soothill
> > wrote the following:
> >
> > So can you do it.
> >
> > Yes you can.
> >
> > Should you do it is the bigger question.
> >
> > So my first question would be what type of driv
Hello!
Mon, Oct 14, 2019 at 07:28:07AM -, gabryel.mason-williams wrote:
> Hello,
>
> I was wondering what user experience was with using Ceph over RDMA?
> - How you set it up?
We used RoCE LAG with Mellanox ConnectX-4 Lx.
> - Documentation used to set it up?
Generally, Mellano
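A hedged sketch of the ceph.conf messenger settings involved (the device name
mlx5_bond_0 is just an example for a RoCE LAG bond; check ibdev2netdev for
yours):

  [global]
  # switch the async messenger from TCP to RDMA
  ms_type = async+rdma
  ms_async_rdma_device_name = mlx5_bond_0
  # or keep RDMA on the cluster network only:
  # ms_cluster_type = async+rdma

All daemons and clients on the path need the same messenger type, and the
memlock limit of the daemons must allow pinning the RDMA buffers.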
Good day!
Tue, Oct 15, 2019 at 02:29:58PM +0300, vitalif wrote:
> Wow, does it really work?
>
> And why is it not supported by RBD?
I haven't dived into the sources, but it is stated in the docs.
>
> Can you show us the latency graphs before and after and tell the I/O pattern
> to which the latency
Good day!
Sat, Oct 26, 2019 at 01:04:28AM +0800, changcheng.liu wrote:
> What's your ceph version? Have you verified whether the problem could be
> reproduced on master branch?
As an option, it can be a Jumbo Frames-related bug. I had completely disabled JF
in order to use RDMA over Ethernet
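If you want to rule JF out, reverting the RoCE-facing interfaces to the
standard MTU is enough; a sketch (the bond and slave names are assumptions):

  # back to the default 1500-byte MTU, on the bond and both slaves
  ip link set dev bond0 mtu 1500
  ip link set dev enp94s0f0 mtu 1500
  ip link set dev enp94s0f1 mtu 1500

The switch ports and the peers have to agree on the MTU as well, otherwise an
MTU mismatch can itself cause hard-to-reproduce connection problems.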
Hello!
AFAIK, you have to access the replicated pool with the default data pool pointing
to the EC pool, like this:
[client.user]
rbd_default_data_pool = pool.ec
Now you can access pool.rbd, but the actual data will be placed on pool.ec.
Maybe there is another way to specify the default data pool for using EC+Replicat
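Besides the client-side default, the data pool can also be chosen per image at
creation time; a small sketch reusing the pool names above (the image name
"vol1" is made up):

  # data objects go to the EC pool, metadata stays in the replicated pool
  rbd create pool.rbd/vol1 --size 100G --data-pool pool.ec
  # "rbd info" should then report data_pool: pool.ec
  rbd info pool.rbd/vol1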
Good day!
Mon, Mar 23, 2020 at 05:21:37PM +1300, droopanu wrote:
> Hi Dave,
>
> Thank you for the answer.
>
> Unfortunately the issue is that ceph uses the wrong source IP address, and
> sends the traffic on the wrong interface anyway.
> Would be good if ceph could actually set the source
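A common OS-level workaround is source-based policy routing, so that whatever
source address gets picked, the traffic still leaves through the matching
interface; a hedged sketch (subnet, gateway and interface are examples):

  # route anything sourced from the cluster subnet via its own table
  ip rule add from 192.168.20.0/24 table 100
  ip route add 192.168.20.0/24 dev eth1 table 100
  ip route add default via 192.168.20.1 dev eth1 table 100

This does not make ceph bind the source address itself; it only keeps traffic
carrying that source on the intended interface.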