[ceph-users] command "ceph dashboard create-self-signed-cert" ERR

2018-07-02 Thread jaywaychou
Hi Cephers: I am using Mimic Ceph for the Dashboard, following http://docs.ceph.com/docs/mimic/mgr/dashboard/. When installing a self-signed certificate with the built-in command, it fails with an error as below:    [root@localhost ~]# ceph dashboard create-self-signed-cert Error EINVAL: Tra
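For context, the linked Mimic docs walk through enabling the dashboard before generating the certificate. A minimal sketch of those steps (a command fragment requiring a running cluster, not a fix for the EINVAL itself):

```shell
# Sketch of the Mimic dashboard setup steps from the linked docs.
ceph mgr module enable dashboard          # enable the mgr dashboard module first
ceph dashboard create-self-signed-cert    # generate the built-in self-signed cert
ceph mgr services                         # confirm the dashboard URL is published
```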

[ceph-users] Adding SSD-backed DB & WAL to existing HDD OSD

2018-07-02 Thread Brad Fitzpatrick
Hello, I was wondering if it's possible or how best to add a DB & WAL to an OSD retroactively? (Still using Luminous) I hurriedly created some HDD-backed bluestore OSDs without their WAL & DB on SSDs, and then loaded them up with data. When I realized I forgot to use the SSD partitions for its W
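For reference, later Ceph releases grew `ceph-bluestore-tool` subcommands for retrofitting a DB device onto an existing bluestore OSD; this tooling may not be available on Luminous, where the common answer was to destroy and recreate the OSD. A hedged sketch (OSD id and SSD partition are hypothetical):

```shell
# Hedged sketch: attach a new DB device to an existing bluestore OSD.
# Requires a release whose ceph-bluestore-tool supports bluefs-bdev-new-db.
systemctl stop ceph-osd@0                             # OSD must be offline
ceph-bluestore-tool bluefs-bdev-new-db \
    --path /var/lib/ceph/osd/ceph-0 \
    --dev-target /dev/sdX1                            # hypothetical SSD partition
systemctl start ceph-osd@0
```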

Re: [ceph-users] Monitoring bluestore compression ratio

2018-07-02 Thread Blair Bethwaite
Oh, for some reason I thought you'd mentioned the OSD config earlier here. Glad you figured it out anyway! Are you doing any comparison benchmarks with/without compression? There is precious little (no?) info out there about the performance impact... Cheers, Blair On 3 Jul. 2018 03:18, "David Turner
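On the monitoring side of this thread, one way to observe bluestore compression effectiveness is via the OSD perf counters (counter names as of Luminous/Mimic; `osd.0` is an example id, run on the OSD host):

```shell
# Dump the bluestore compression counters for one OSD:
# bluestore_compressed_original vs bluestore_compressed_allocated
# gives the effective on-disk compression ratio.
ceph daemon osd.0 perf dump | grep -E 'bluestore_compressed'
```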

Re: [ceph-users] CephFS+NFS For VMWare

2018-07-02 Thread David C
On Sat, 30 Jun 2018, 21:48 Nick Fisk, wrote: > Hi Paul, > > > > Thanks for your response, is there anything you can go into more detail on > and share with the list? I’m sure it would be much appreciated by more than > just myself. > > > > I was planning on Kernel CephFS and NFS server, both seem

Re: [ceph-users] Monitoring bluestore compression ratio

2018-07-02 Thread David Turner
I got back around to testing this more today and I believe I figured it out. Originally I set compression_mode to aggressive on the pool. The OSDs themselves, however, had their compression mode set to the default of none. That means that while the pool was flagging the writes so that they shou
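A sketch of setting compression in both places, per the observation above that the pool-level flag alone was not enough (pool name and algorithm choice are hypothetical examples):

```shell
# Pool-level: flag writes on this pool for compression.
ceph osd pool set mypool compression_mode aggressive
ceph osd pool set mypool compression_algorithm snappy
# OSD-level: the OSDs' own bluestore setting must also permit compression.
ceph tell osd.* injectargs '--bluestore_compression_mode=aggressive'
```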

Re: [ceph-users] FreeBSD Initiator with Ceph iscsi

2018-07-02 Thread Jason Dillaman
On Sat, Jun 30, 2018 at 6:04 PM Frank de Bot (lists) wrote: > I've crossposted the problem to the freebsd-stable mailinglist. There is > no ALUA support at the initiator side. There were 2 options for > multipathing: > > 1. Export your LUNs via two (or more) different paths (for example >via

Re: [ceph-users] RBD image repurpose between iSCSI and QEMU VM, how to do properly ?

2018-07-02 Thread Jason Dillaman
On Mon, Jul 2, 2018 at 5:23 AM Wladimir Mutel wrote: > Dear all, > > I am doing more experiments with Ceph iSCSI gateway and I am a bit > confused on how to properly repurpose an RBD image from iSCSI target into > QEMU virtual disk and back > > First, I create an RBD image and set it up as iSC

Re: [ceph-users] CephFS+NFS For VMWare

2018-07-02 Thread Maged Mokhtar
Hi Nick, With iSCSI we reach over 150 MB/s vMotion for a single VM, and 1 GB/s for 7-8 concurrent VM migrations. Since these are 64KB block sizes, latency/IOPS is a large factor; you need either controllers with write-back cache or all flash. HDDs without write cache will suffer even with external WAL/DB on SSDs

Re: [ceph-users] CephFS+NFS For VMWare

2018-07-02 Thread Paul Emmerich
Hi, we've used Kernel CephFS + Kernel NFS in the past. It works reasonably well in many scenarios, especially for smaller setups. However, you absolutely must use a recent kernel, we encountered a lot of deadlocks and other random hangs and reconnect failures with kernel 4.9 in larger setups under
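A minimal sketch of the kernel CephFS + kernel NFS combination described above (monitor address, mount point, export network, and key file are all hypothetical placeholders):

```shell
# Mount CephFS with the kernel client...
mount -t ceph mon1:6789:/ /mnt/cephfs \
    -o name=admin,secretfile=/etc/ceph/admin.secret
# ...then re-export it over kernel NFS.
echo '/mnt/cephfs 10.0.0.0/24(rw,sync,no_root_squash)' >> /etc/exports
exportfs -ra
```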

Re: [ceph-users] CephFS+NFS For VMWare

2018-07-02 Thread Nick Fisk
Quoting Ilya Dryomov : On Fri, Jun 29, 2018 at 8:08 PM Nick Fisk wrote: This is for us peeps using Ceph with VMWare. My current favoured solution for consuming Ceph in VMWare is via RBD’s formatted with XFS and exported via NFS to ESXi. This seems to perform better than iSCSI+VMFS whi
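The RBD+XFS+NFS datastore path described here can be sketched roughly as follows (pool/image names, sizes, and the export target are hypothetical; the mapped device path may differ):

```shell
# Create and map an RBD image, format it XFS, export it to ESXi over NFS.
rbd create vmware/ds1 --size 1T
rbd map vmware/ds1                        # prints the mapped device, e.g. /dev/rbd0
mkfs.xfs /dev/rbd0
mkdir -p /export/ds1 && mount /dev/rbd0 /export/ds1
echo '/export/ds1 10.0.0.0/24(rw,sync,no_root_squash)' >> /etc/exports
exportfs -ra
```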

Re: [ceph-users] Ceph Community Newsletter (June 2018)

2018-07-02 Thread Alwin Antreich
Hello Leonardo, On Sat, Jun 30, 2018 at 11:09:31PM -0300, Leonardo Vaz wrote: > Hey Cephers, > > The Ceph Community Newsletter of June 2018 has been published: > > https://ceph.com/newsletter/ceph-community-june-2018/ > Thanks for the newsletter. Sadly I didn't find any mention of the "Ce

[ceph-users] RBD image repurpose between iSCSI and QEMU VM, how to do properly ?

2018-07-02 Thread Wladimir Mutel
Dear all, I am doing more experiments with Ceph iSCSI gateway and I am a bit confused on how to properly repurpose an RBD image from iSCSI target into QEMU virtual disk and back First, I create an RBD image and set it up as iSCSI backstore in gwcli, specifying its size exactly to avoid unwa
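The round-trip being described, the same RBD image serving first as an iSCSI backstore and later as a QEMU disk, might look roughly like this (pool/image names are hypothetical; the gwcli line is shown as a comment since it runs inside the gwcli shell):

```shell
# Create the image with an explicit size, then register it in gwcli
# with the same size so no resize is triggered.
rbd create iscsi-pool/disk1 --size 10G
# in gwcli: /disks create pool=iscsi-pool image=disk1 size=10G
# Later, the same image inspected/attached as a QEMU disk via librbd:
qemu-img info rbd:iscsi-pool/disk1
```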

Re: [ceph-users] CephFS+NFS For VMWare

2018-07-02 Thread Ilya Dryomov
On Fri, Jun 29, 2018 at 8:08 PM Nick Fisk wrote: > > This is for us peeps using Ceph with VMWare. > > > > My current favoured solution for consuming Ceph in VMWare is via RBD’s > formatted with XFS and exported via NFS to ESXi. This seems to perform better > than iSCSI+VMFS which seems to not pl

Re: [ceph-users] [Ceph-users] Ceph getting slow requests and rw locks

2018-07-02 Thread Phang WM
Hi, After some testing, it looks like performance is greatly affected while deep scrubbing is running, even with only one active deep scrub. After some googling, one suggestion was to enable the kernel CFQ scheduler: https://ceph.com/geen-categorie/lowering-ceph-scrub-io-priority/ ce
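The linked article's approach can be sketched as below. Note these `osd_disk_thread_ioprio_*` options existed in pre-Luminous releases and their availability varies by version; the block device name is a hypothetical example:

```shell
# CFQ is required so that per-thread ioprio classes take effect.
echo cfq > /sys/block/sda/queue/scheduler
# Demote the OSD disk thread (scrubbing) to the idle class, lowest priority.
ceph tell osd.* injectargs \
    '--osd_disk_thread_ioprio_class idle --osd_disk_thread_ioprio_priority 7'
```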
