Hi Cephers: I'm using Mimic Ceph for the Dashboard, following http://docs.ceph.com/docs/mimic/mgr/dashboard/. When I install a self-signed certificate with the built-in command, it fails with an error like the one below: [root@localhost ~]# ceph dashboard create-self-signed-cert Error EINVAL: Tra
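(For reference, not from the original mail: a minimal sketch of the documented
Mimic steps leading up to that command, assuming the dashboard module itself
loads; "admin" and the password are placeholder values.)

    ceph mgr module enable dashboard
    ceph dashboard create-self-signed-cert
    ceph dashboard set-login-credentials admin <password>
    ceph mgr services            # shows the dashboard URL once it is up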
Hello,
I was wondering if it's possible or how best to add a DB & WAL to an OSD
retroactively? (Still using Luminous)
I hurriedly created some HDD-backed bluestore OSDs without their WAL & DB
on SSDs, and then loaded them up with data.
When I realized I forgot to use the SSD partitions for its W
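(Not the eventual answer from the thread, but a hedged sketch of the usual
Luminous-era route: drain the OSD and rebuild it with ceph-volume, pointing
block.db/block.wal at the SSD. The OSD id and device names below are
placeholders; later releases also grew "ceph-bluestore-tool bluefs-bdev-new-db"
for attaching a DB in place.)

    ceph osd out 12                            # let data migrate off OSD 12
    # once the cluster is healthy again:
    ceph osd destroy 12 --yes-i-really-mean-it
    ceph-volume lvm zap /dev/sdb --destroy
    ceph-volume lvm create --osd-id 12 --data /dev/sdb \
        --block.db /dev/nvme0n1p1 --block.wal /dev/nvme0n1p2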
Oh, for some reason I thought you'd mentioned the OSD config earlier here.
Glad you figured it out anyway!
Are you doing any comparison benchmarks with/without compression? There is
precious little (no?) info out there about performance impact...
Cheers,
Blair
On 3 Jul. 2018 03:18, "David Turner
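(Not from the thread: one hedged way to compare runs with and without
compression is to watch the BlueStore perf counters during the same workload;
osd.0 is a placeholder and the counter names are as of Luminous/Mimic.)

    ceph daemon osd.0 perf dump | grep bluestore_compressed
    # bluestore_compressed_original vs. bluestore_compressed_allocated shows
    # how much data was compressed and how much space it actually occupies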
On Sat, 30 Jun 2018, 21:48 Nick Fisk, wrote:
> Hi Paul,
>
>
>
> Thanks for your response, is there anything you can go into more detail on
> and share with the list? I’m sure it would be much appreciated by more than
> just myself.
>
>
>
> I was planning on Kernel CephFS and NFS server, both seem
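(Not from the original mail: a minimal sketch of the kernel CephFS mount such a
setup starts from, assuming a standard client keyring; the monitor address and
secret file path are placeholders. The NFS export side is sketched further
down.)

    mount -t ceph mon1:6789:/ /mnt/cephfs \
        -o name=admin,secretfile=/etc/ceph/admin.secret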
I got back around to testing this more today and I believe I've figured this out.
Originally I set compression_mode to aggressive for the pool. The OSDs
themselves, however, had their compression mode set to the default of
none. That means that while the pool was flagging the writes so that they
shou
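(A hedged sketch of the two settings involved, following the poster's finding
that both have to be set; the pool name is a placeholder and the injectargs
form is the Luminous-era way to change the OSD side at runtime.)

    ceph osd pool set rbd compression_mode aggressive
    ceph tell osd.* injectargs '--bluestore_compression_mode=aggressive'
    # also persist bluestore_compression_mode in ceph.conf so it survives restarts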
On Sat, Jun 30, 2018 at 6:04 PM Frank de Bot (lists)
wrote:
> I've crossposted the problem to the freebsd-stable mailing list. There is
> no ALUA support at the initiator side. There were 2 options for
> multipathing:
>
> 1. Export your LUNs via two (or more) different paths (for example
> via
On Mon, Jul 2, 2018 at 5:23 AM Wladimir Mutel wrote:
> Dear all,
>
> I am doing more experiments with Ceph iSCSI gateway and I am a bit
> confused on how to properly repurpose an RBD image from iSCSI target into
> QEMU virtual disk and back
>
> First, I create an RBD image and set it up as iSC
Hi Nick,
With iSCSI we reach over 150 MB/s vMotion for a single VM, and 1 GB/s for 7-8
VM migrations. Since these are 64 KB block sizes, latency/IOPS is a large
factor; you need either controllers with write-back cache or all-flash.
HDDs without write cache will suffer even with an external WAL/DB on SSDs.
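(Not part of the original mail: a hedged way to reproduce that 64 KB write
pattern directly against an RBD image is fio's rbd engine; the pool, image and
client names are placeholders.)

    fio --name=vmotion-sim --ioengine=rbd --clientname=admin \
        --pool=rbd --rbdname=testimg --rw=write --bs=64k \
        --iodepth=16 --runtime=60 --time_based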
Hi,
we've used Kernel CephFS + Kernel NFS in the past. It works reasonably well
in many scenarios, especially for smaller setups.
However, you absolutely must use a recent kernel; we encountered a lot of
deadlocks, other random hangs, and reconnect
failures with kernel 4.9 in larger setups under
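(For reference, not from the original mail: the knfsd export of such a kernel
CephFS mount is typically a single /etc/exports line; the client subnet and
fsid value are placeholders.)

    /mnt/cephfs  10.0.0.0/24(rw,sync,no_root_squash,fsid=100)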
Quoting Ilya Dryomov:
On Fri, Jun 29, 2018 at 8:08 PM Nick Fisk wrote:
This is for us peeps using Ceph with VMWare.
My current favoured solution for consuming Ceph in VMWare is via
RBDs formatted with XFS and exported via NFS to ESXi. This seems
to perform better than iSCSI+VMFS whi
Hello Leonardo,
On Sat, Jun 30, 2018 at 11:09:31PM -0300, Leonardo Vaz wrote:
> Hey Cephers,
>
> The Ceph Community Newsletter of June 2018 has been published:
>
> https://ceph.com/newsletter/ceph-community-june-2018/
>
Thanks for the newsletter.
Sadly I didn't find any mention of the "Ce
Dear all,
I am doing more experiments with Ceph iSCSI gateway and I am a bit confused on
how to properly repurpose an RBD image from iSCSI target into QEMU virtual disk
and back
First, I create an RBD image and set it up as iSCSI backstore in gwcli,
specifying its size exactly to avoid unwa
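(Not an answer from the list, but a hedged sketch of the usual checks when
moving an image between the iSCSI gateway and QEMU: enabled features, locks
and watchers; pool/image names are placeholders.)

    rbd info rbd/disk1              # enabled features, size, etc.
    rbd lock ls rbd/disk1           # does the gateway still hold a lock?
    rbd status rbd/disk1            # current watchers
    qemu-img info rbd:rbd/disk1     # read the same image through librbd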
On Fri, Jun 29, 2018 at 8:08 PM Nick Fisk wrote:
>
> This is for us peeps using Ceph with VMWare.
>
>
>
> My current favoured solution for consuming Ceph in VMWare is via RBDs
> formatted with XFS and exported via NFS to ESXi. This seems to perform better
> than iSCSI+VMFS which seems to not pl
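(For context, a minimal sketch of that RBD + XFS + NFS chain, not taken from
the thread; the image name, mount point and ESXi subnet are placeholders, and
depending on enabled image features you may need "rbd feature disable" before
the kernel client can map the image.)

    rbd create rbd/esxi01 --size 1024G
    rbd map rbd/esxi01                         # e.g. /dev/rbd/rbd/esxi01
    mkfs.xfs /dev/rbd/rbd/esxi01
    mount /dev/rbd/rbd/esxi01 /srv/esxi01
    echo '/srv/esxi01 10.0.0.0/24(rw,sync,no_root_squash)' >> /etc/exports
    exportfs -ra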
Hi,
After some testing, it looks like performance is greatly affected while deep
scrubbing is running, even when only one deep scrub is active.
After some searching, the suggestion was to enable the kernel CFQ scheduler:
https://ceph.com/geen-categorie/lowering-ceph-scrub-io-priority/
ce
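(Beyond the scheduler change in that article, scrub impact can also be
throttled from the Ceph side; a hedged sketch of commonly used knobs, with
example values rather than recommendations.)

    ceph tell osd.* injectargs '--osd_scrub_sleep 0.1'     # pause between scrub chunks
    ceph tell osd.* injectargs '--osd_scrub_begin_hour 1 --osd_scrub_end_hour 7'
    # and, per the linked article, with CFQ on the data disks:
    #   osd_disk_thread_ioprio_class = idle
    #   osd_disk_thread_ioprio_priority = 7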