[ceph-users] Re: Spanning OSDs over two drives

2020-09-17 Thread Konstantin Shalygin
On 9/18/20 8:53 AM, Liam MacKenzie wrote: I have a scenario where I'm upgrading to ceph octopus on hardware that groups its drives in trays which contain 2 devices each. Previously these drives were joined in a software RAID1 and the md devices were used as the OSDs. The logic behind this

[ceph-users] Spanning OSDs over two drives

2020-09-17 Thread Liam MacKenzie
Hi all I have a scenario where I'm upgrading to ceph octopus on hardware that groups its drives in trays which contain 2 devices each. Previously these drives were joined in a software RAID1 and the md devices were used as the OSDs. The logic behind this is that should one of those drives fai
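
As a hedged sketch of the usual alternative to md RAID1 under OSDs (not taken from this thread; device names are placeholders): with BlueStore, each drive in a tray is normally deployed as its own OSD and redundancy is left to Ceph's replication or erasure coding.

    # Sketch: one OSD per physical drive instead of an md RAID1 pair.
    # /dev/sdb and /dev/sdc are the two drives in one tray (placeholders).
    ceph-volume lvm create --bluestore --data /dev/sdb
    ceph-volume lvm create --bluestore --data /dev/sdc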

[ceph-users] Re: Benchmark WAL/DB on SSD and HDD for RGW RBD CephFS

2020-09-17 Thread Maged Mokhtar
On 17/09/2020 19:21, vita...@yourcmc.ru wrote: RBD in fact doesn't benefit much from the WAL/DB partition alone because Bluestore never does more writes per second than HDD can do on average (it flushes every 32 writes to the HDD). For RBD, the best thing is bcache. rbd will benefit: for
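
A minimal bcache sketch along the lines suggested above, assuming bcache-tools is installed and /dev/sdb (HDD) plus /dev/nvme0n1p1 (flash partition) are free devices (placeholders):

    # Create the backing device (HDD) and cache device (NVMe) in one step;
    # the kernel then exposes the cached pair as /dev/bcache0.
    make-bcache -B /dev/sdb -C /dev/nvme0n1p1
    # Put the OSD on the cached device.
    ceph-volume lvm create --bluestore --data /dev/bcache0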

[ceph-users] Re: Introduce flash OSD's to Nautilus installation

2020-09-17 Thread Stefan Kooman
On 2020-09-17 18:36, Mathias Lindberg wrote: > Hi, > > We have a 1.2PB Nautilus installation primarily using CephFS for our > HPC-resources. > Our OSDs have spinning disks and NVMe devices for WAL and DB in an > LVM-setup. > > The CephFS metadata pool resides on spinning disks, and I wonder if >
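
A quick sketch of commands commonly used to check device classes and capacity before planning such a move (not from this thread):

    # Which device classes does the cluster already know about?
    ceph osd crush class ls
    # Which OSDs carry the ssd class, and how is capacity distributed?
    ceph osd crush class ls-osd ssd
    ceph osd df tree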

[ceph-users] Re: multiple OSD crash, unfound objects

2020-09-17 Thread Michael Thomas
Hi Frank, Yes, it does sound similar to your ticket. I've tried a few things to restore the failed files: * Locate a missing object with 'ceph pg $pgid list_unfound' * Convert the hex oid to a decimal inode number * Identify the affected file with 'find /ceph -inum $inode' At this point, I
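
The steps described above, written out as a shell sketch (the PG id, hex oid and mount point are placeholders):

    pgid=36.15                        # placeholder PG id
    ceph pg $pgid list_unfound        # lists the missing objects' oids (hex)
    oid_hex=0x10000005d36             # placeholder hex oid taken from that output
    inode=$(printf '%d\n' $oid_hex)   # convert the hex oid to a decimal inode number
    find /ceph -inum $inode           # identify the affected file on the mounted CephFS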

[ceph-users] Re: Benchmark WAL/DB on SSD and HDD for RGW RBD CephFS

2020-09-17 Thread Mark Nelson
On 9/17/20 12:21 PM, vita...@yourcmc.ru wrote: It does, RGW really needs SSDs for bucket indexes. CephFS also needs SSDs for metadata in any setup that's used by more than 1 user :). RBD in fact doesn't benefit much from the WAL/DB partition alone because Bluestore never does more writes per
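
A hedged sketch of pinning the RGW bucket index pool to flash with a device-class rule (rule name and the default-zone pool name are assumptions, not taken from this thread):

    # Replicated rule restricted to OSDs with device class "ssd"
    ceph osd crush rule create-replicated replicated_ssd default host ssd
    # Move the default zone's bucket index pool onto that rule
    ceph osd pool set default.rgw.buckets.index crush_rule replicated_ssd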

[ceph-users] Re: Nautilus Scrub and deep-Scrub execution order

2020-09-17 Thread Mike Dawson
On 9/15/2020 4:41 AM, Johannes L wrote: Robin H. Johnson wrote: On Mon, Sep 14, 2020 at 11:40:22AM -, Johannes L wrote: Hello Ceph-Users after upgrading one of our clusters to Nautilus we noticed the x pgs not scrubbed/deep-scrubbed in time warnings. Through some digging we found
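
For reference, the knobs usually involved in the "not (deep-)scrubbed in time" warning, as a sketch with placeholder values (the tuning actually used in this thread is not shown here):

    # Allow more time between deep scrubs (the default interval is one week)
    ceph config set osd osd_deep_scrub_interval 1209600   # two weeks, placeholder
    # Let more scrubs run in parallel per OSD if the cluster can take the load
    ceph config set osd osd_max_scrubs 2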

[ceph-users] Re: Introduce flash OSD's to Nautilus installation

2020-09-17 Thread Dan van der Ster
... unless you use the reclassify tooling: https://docs.ceph.com/en/latest/rados/operations/crush-map-edits/#migrating-from-a-legacy-ssd-rule-to-device-classes On Thu, 17 Sep 2020, 19:30 Dan van der Ster, wrote: > Hi, > > AFAIR the device types adds a bunch of shadow devices e.g. osd.1~hdd or
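
A sketch of the reclassify workflow from the linked docs; the exact --reclassify flags depend on the existing map layout, so treat this as an outline and check the doc page:

    ceph osd getcrushmap -o original.map
    # Rewrite legacy, class-less rules/buckets to use the "hdd" device class
    crushtool -i original.map --reclassify --reclassify-root default hdd -o adjusted.map
    # Verify that the adjusted map would not move data
    crushtool -i original.map --compare adjusted.map
    ceph osd setcrushmap -i adjusted.map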

[ceph-users] Re: Introduce flash OSD's to Nautilus installation

2020-09-17 Thread Dan van der Ster
Hi, AFAIR the device classes add a bunch of shadow devices e.g. osd.1~hdd or something like that... And those shadow devices have a different crush id than the original, untyped device. So, alas, I don't think your test is complete, and yes I expect that your data would move if you change the rule pr
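
The shadow hierarchy mentioned here can be inspected directly; a quick check:

    # Shows the per-class shadow buckets (e.g. "default~hdd") and their distinct crush ids
    ceph osd crush tree --show-shadow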

[ceph-users] Re: Benchmark WAL/DB on SSD and HDD for RGW RBD CephFS

2020-09-17 Thread vitalif
It does, RGW really needs SSDs for bucket indexes. CephFS also needs SSDs for metadata in any setup that's used by more than 1 user :). RBD in fact doesn't benefit much from the WAL/DB partition alone because Bluestore never does more writes per second than HDD can do on average (it flushes ever
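
For reference, a hedged sketch of provisioning the WAL/DB on flash next to an HDD data device (device paths are placeholders):

    # DB (and WAL, which co-locates with the DB by default) on the NVMe partition,
    # data on the HDD.
    ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1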

[ceph-users] Introduce flash OSD's to Nautilus installation

2020-09-17 Thread Mathias Lindberg
Hi, We have a 1.2PB Nautilus installation primarily using CephFS for our HPC-resources. Our OSDs have spinning disks and NVMe devices for WAL and DB in an LVM-setup. The CephFS metadata pool resides on spinning disks, and I wonder if there is any point from a performance perspective to put tha
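
A sketch of the move being asked about, assuming the metadata pool is named cephfs_metadata, the flash OSDs carry the ssd device class, and a class-restricted rule (e.g. replicated_ssd, as sketched earlier) already exists:

    # Point the CephFS metadata pool at the ssd-only rule; data migrates automatically
    ceph osd pool set cephfs_metadata crush_rule replicated_ssd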

[ceph-users] Re: Benchmark WAL/DB on SSD and HDD for RGW RBD CephFS

2020-09-17 Thread Mark Nelson
Does fio handle S3 objects spread across many buckets well? I think bucket listing performance was maybe missing too, but it's been a while since I looked at fio's S3 support. Maybe they have those use cases covered now. I wrote a Go-based benchmark called hsbench based on the wasabi-tech be
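
A usage sketch for hsbench; the flag names below are assumptions recalled from the project's README rather than anything stated in this thread, so verify with hsbench --help:

    # Placeholder credentials/endpoint; 4K objects, 10 threads, 10 buckets, 60 s per phase
    hsbench -a ACCESS_KEY -s SECRET_KEY -u http://rgw.example.com:7480 \
            -z 4K -t 10 -b 10 -d 60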

[ceph-users] Re: rbd map on octopus from luminous client

2020-09-17 Thread Marc Boisis
It works, thanks! > On 17 Sep 2020, at 15:17, Ilya Dryomov wrote: > > On Thu, Sep 17, 2020 at 1:56 PM Marc Boisis > wrote: >> >> >> Hi, >> >> I had to map a rbd from an ubuntu Trusty luminous client on an octopus >> cluster. >> >> client dmesg : >> feature set

[ceph-users] Re: rbd map on octopus from luminous client

2020-09-17 Thread Ilya Dryomov
On Thu, Sep 17, 2020 at 1:56 PM Marc Boisis wrote: > > > Hi, > > I had to map a rbd from an ubuntu Trusty luminous client on an octopus > cluster. > > client dmesg : > feature set mismatch, my 4a042a42 < server's 14a042a42, missing > 1 > > I downgrade my osd tunable to bobtail b

[ceph-users] Re: Disk consume for CephFS

2020-09-17 Thread fotofors
Yes, I know this option isn't safe, however, in my current situation, I can't increase it. I probably have some files under 4K, however, when I cleaned up zero-length files I didn't see any changes in statistics. My current `ceph df detail` below: # ceph df detail --- RAW STORAGE --- CLASS SIZE AVA
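
If the "unsafe option" here is BlueStore's minimum allocation size (an assumption on my part, not stated in the thread), the effective value on an OSD can be checked like this:

    # Run on the host carrying osd.0; 65536 (64K) was the long-standing HDD default,
    # newer releases default to 4K.
    ceph daemon osd.0 config get bluestore_min_alloc_size_hdd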

[ceph-users] rbd map on octopus from luminous client

2020-09-17 Thread Marc Boisis
Hi, I had to map an rbd from an Ubuntu Trusty luminous client on an octopus cluster. client dmesg: feature set mismatch, my 4a042a42 < server's 14a042a42, missing 1 I downgraded my osd tunables to bobtail but it still doesn't work ceph osd crush show-tunables { "choose_loca
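
Ilya's actual fix is truncated above; as a general sketch, these are the commands commonly used to line up an old kernel client with the cluster's CRUSH requirements (the profile name is a placeholder, and changing tunables triggers data movement):

    # What the cluster currently requires, and which client feature sets are connected
    ceph osd crush show-tunables
    ceph features
    # Fall back to an older tunables profile if the old kernel cannot be upgraded
    ceph osd crush tunables hammer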

[ceph-users] vfs_ceph for CentOS 8

2020-09-17 Thread Frank Schilder
Hi all, we are setting up a SAMBA share and would like to use the vfs_ceph module. Unfortunately, it seems not to be part of the common SAMBA packages on CentOS 8. Does anyone know how to install vfs_ceph? The SAMBA version on CentOS 8 is samba-4.11.2-13 and the documentation says the module is
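
For reference, a minimal smb.conf sketch of how the module is wired up once a Samba build that ships vfs_ceph is installed (share name and cephx user are placeholders):

    [cephfs]
        path = /
        vfs objects = ceph
        ceph:config_file = /etc/ceph/ceph.conf
        ceph:user_id = samba
        kernel share modes = no
        read only = no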

[ceph-users] Re: Migration to ceph.readthedocs.io underway

2020-09-17 Thread Janne Johansson
On Thu, 17 Sep 2020 at 12:09, Lenz Grimmer wrote: > > https://bootstrap-datepicker.readthedocs.io/en/v1.9.0/ > > Support Read the Docs! > > > > Please help keep us sustainable by allowing our Ethical Ads in your ad > > blocker or go ad-free by subscribing. > > Thanks for the info! That prompted

[ceph-users] Re: Migration to ceph.readthedocs.io underway

2020-09-17 Thread Marc Roos
This[1] and natural evolution(?) [1] https://bootstrap-datepicker.readthedocs.io/en/v1.9.0/ Support Read the Docs! Please help keep us sustainable by allowing our Ethical Ads in your ad blocker or go ad-free by subscribing. Thank you! ❤️ -Original Message- From: Lenz Grimmer [mai

[ceph-users] Re: vfs_ceph for CentOS 8

2020-09-17 Thread Konstantin Shalygin
On 9/17/20 3:21 PM, Frank Schilder wrote: There is something I don't understand. Looking at what is not supported in CentOS 8 SAMBA, I wonder why vfs_ceph has been left out of the distro. Seems not to make sense. I could follow the instructions, compile the whole thing and copy the vfs_ceph

[ceph-users] Re: Benchmark WAL/DB on SSD and HDD for RGW RBD CephFS

2020-09-17 Thread George Shuklin
On 16/09/2020 07:26, Danni Setiawan wrote: Hi all, I'm trying to find performance penalty with OSD HDD when using WAL/DB in faster device (SSD/NVMe) vs WAL/DB in same device (HDD) for different workload (RBD, RGW with index bucket in SSD pool, and CephFS with metadata in SSD pool). I want to
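
For the RBD side of such a comparison, a minimal fio invocation using the rbd ioengine (pool and image names are placeholders; the image must exist first):

    # Create a test image, then drive 4K random writes against it via librbd
    rbd create rbd/fio-test --size 10G
    fio --name=rbd-4k-randwrite --ioengine=rbd --clientname=admin --pool=rbd \
        --rbdname=fio-test --rw=randwrite --bs=4k --iodepth=32 \
        --runtime=60 --time_based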

[ceph-users] Re: vfs_ceph for CentOS 8

2020-09-17 Thread Konstantin Shalygin
On 9/17/20 2:44 PM, Frank Schilder wrote: we are setting up a SAMBA share and would like to use the vfs_ceph module. Unfortunately, it seems not to be part of the common SAMBA packages on CentOS 8. Does anyone know how to install vfs_ceph? The SAMBA version on CentOS 8 is samba-4.11.2-13 and t