Re: [ceph-users] Ceph Journal Disk Size

2015-07-03 Thread Van Leeuwen, Robert
> Another issue is performance: you'll get 4x more IOPS with 4 x 2TB drives than with one single 8TB. So if you have a performance target, your money might be better spent on smaller drives.

Regardless of the discussion of whether it is smart to have very large spinners: be aware that some of the b…
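
A back-of-the-envelope sketch of that IOPS point (my own numbers, assuming roughly 100 random IOPS per 7200 rpm spinner regardless of capacity):

    # Compare aggregate random IOPS of 4 x 2TB spinners vs 1 x 8TB spinner.
    # The assumption doing the work: a spinner delivers ~100 random IOPS
    # whether it is 2TB or 8TB.
    IOPS_PER_SPINNER = 100  # assumed

    def aggregate_iops(num_drives, iops_per_drive=IOPS_PER_SPINNER):
        """Raw spindle IOPS; ignores journal and replication overhead."""
        return num_drives * iops_per_drive

    small = aggregate_iops(4)   # 4 x 2TB  -> 8TB raw
    big   = aggregate_iops(1)   # 1 x 8TB  -> 8TB raw
    print(f"4 x 2TB: {small} IOPS vs 1 x 8TB: {big} IOPS "
          f"({small // big}x for the same raw capacity)")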

Re: [ceph-users] NVME SSD for journal

2015-07-07 Thread Van Leeuwen, Robert
> I'm wondering if anyone is using NVME SSDs for journals?
> The Intel 750 series 400GB NVME SSD offers good performance and price in comparison to, let's say, the Intel S3700 400GB.
> http://ark.intel.com/compare/71915,86740
> My concern would be MTBF / TBW, which is only 1.2M hours and 70GB per day for…
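
A quick endurance sketch for that concern (the 70GB/day figure comes from the question; the 5-year window and the 500GB/day workload are assumptions of mine):

    RATED_GB_PER_DAY = 70     # rated host writes quoted above
    WARRANTY_YEARS = 5        # assumed warranty period

    rated_tbw = RATED_GB_PER_DAY * 365 * WARRANTY_YEARS / 1000
    print(f"Rated write budget: ~{rated_tbw:.0f} TB over {WARRANTY_YEARS} years")

    # How long does that budget last if the journal absorbs 500 GB/day?
    journal_gb_per_day = 500  # hypothetical journal workload
    years = rated_tbw * 1000 / journal_gb_per_day / 365
    print(f"At {journal_gb_per_day} GB/day it is consumed in ~{years:.1f} years")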

Re: [ceph-users] dense storage nodes

2016-05-18 Thread Van Leeuwen, Robert
> We've hit issues (twice now) that seem (we have not figured out exactly how to confirm this yet) to be related to kernel dentry slab cache exhaustion. Symptoms were a major slowdown in performance and slow requests all over the place on writes; watching OSD iostat would show a single drive hitt…
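
A small diagnostic sketch (mine, not from the thread) for keeping an eye on how large the dentry slab has grown on an OSD node; note that /proc/slabinfo normally requires root:

    def dentry_slab_bytes(path="/proc/slabinfo"):
        """Return the approximate memory held by the dentry slab."""
        with open(path) as f:
            for line in f:
                if line.startswith("dentry"):
                    fields = line.split()
                    num_objs, objsize = int(fields[2]), int(fields[3])
                    return num_objs * objsize
        return 0

    if __name__ == "__main__":
        print(f"dentry slab: ~{dentry_slab_bytes() / 2**30:.1f} GiB")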

Re: [ceph-users] New cluster - configuration tips and reccomendation - NVMe

2017-07-05 Thread Van Leeuwen, Robert
Hi Max,

You might also want to look at the PCIe lanes. I am not an expert on the matter, but my guess would be that 8 NVMe drives + 2x100Gbit is too much for the current Xeon generation (40 PCIe lanes) to fully utilize. I think the upcoming AMD/Intel offerings will improve that quite a bit s…
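
A rough lane-budget sketch behind that guess (assumed numbers: x4 per NVMe drive, x16 per 100Gbit NIC):

    LANES_PER_NVME = 4       # assumed
    LANES_PER_100G_NIC = 16  # assumed
    XEON_LANES = 40          # lanes per CPU, as quoted above

    needed = 8 * LANES_PER_NVME + 2 * LANES_PER_100G_NIC
    print(f"needed: {needed} lanes, available: {XEON_LANES}, "
          f"short by {needed - XEON_LANES}")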

Re: [ceph-users] ceph.conf tuning ... please comment

2017-12-06 Thread Van Leeuwen, Robert
Hi,

Let's start with a disclaimer: I'm not an expert on any of these Ceph tuning settings :)
However, in general with cluster intervals/timings, you are trading quick failover detection for:

1) Processing power: you might starve yourself of resources when expanding the cluster. If you multiply all…
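
To make the scaling concern concrete, an illustrative model of my own (the per-OSD peer count and the simple interval-to-message-rate mapping are assumptions, not Ceph internals):

    def heartbeat_msgs_per_sec(num_osds, peers_per_osd, interval_s):
        """Very rough aggregate heartbeat message rate."""
        return num_osds * peers_per_osd / interval_s

    for osds in (10, 100, 1000):
        relaxed = heartbeat_msgs_per_sec(osds, peers_per_osd=10, interval_s=6)
        tight   = heartbeat_msgs_per_sec(osds, peers_per_osd=10, interval_s=1)
        print(f"{osds:5d} OSDs: ~{relaxed:8.0f} msg/s at 6s vs "
              f"~{tight:8.0f} msg/s at 1s")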

Re: [ceph-users] Linux Meltdown (KPTI) fix and how it affects performance?

2018-01-12 Thread Van Leeuwen, Robert
> Ceph runs on dedicated hardware, there is nothing there except Ceph, and the ceph daemons already have all power over Ceph's data. And there is no random code execution allowed on this node.
>
> Thus, Spectre & Meltdown are meaningless for Ceph's nodes, and mitigations should…
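
If you want to gauge the per-syscall cost on your own hardware before deciding, a tiny microbenchmark like this (mine, not from the thread) can help; run it with and without the mitigations, since the absolute numbers mean nothing in isolation:

    import os, time

    N = 200_000
    start = time.perf_counter()
    for _ in range(N):
        os.getppid()          # cheap syscall, dominated by kernel entry/exit
    elapsed = time.perf_counter() - start
    print(f"{N} syscalls in {elapsed:.3f}s -> {elapsed / N * 1e6:.2f} µs each")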

Re: [ceph-users] osds with different disk sizes may killing performance

2018-04-18 Thread Van Leeuwen, Robert
>> There is no way to fill up all disks evenly with the same number of bytes and then stop filling the small disks when they're full and only continue filling the larger disks.
>
> This is possible by adjusting crush weights. Initially the smaller drives are weighted more highly than l…
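
A sketch of the idea (the sizes are made up; the usual convention that a drive's crush weight equals its capacity in TiB is assumed):

    def crush_weight_tib(size_bytes):
        return round(size_bytes / 2**40, 5)

    disks = {"osd.0": 4 * 10**12, "osd.1": 8 * 10**12}   # a 4TB and an 8TB drive
    weights = {osd: crush_weight_tib(size) for osd, size in disks.items()}

    # Temporarily down-weight the larger drive so it fills at the same
    # percentage as the small one; raise it back later, e.g. with
    # `ceph osd crush reweight osd.1 <weight>`.
    weights["osd.1"] *= 0.5
    print(weights)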

Re: [ceph-users] Please help me get rid of Slow / blocked requests

2018-05-01 Thread Van Leeuwen, Robert
> On 5/1/18, 12:02 PM, "ceph-users on behalf of Shantur Rathore" wrote:
>
> I am not sure if the benchmark is overloading the cluster, as in 3 out of 5 runs the benchmark does around 37K IOPS, and suddenly for the problematic runs it drops to 0 IOPS for a couple of minutes and then…
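
One way to see where those stalled minutes go is to pull the slowest recent ops from an OSD's admin socket. A sketch (field names can differ between Ceph releases, and the OSD id here is just an example):

    import json, subprocess

    def slowest_ops(osd_id, top=5):
        out = subprocess.check_output(
            ["ceph", "daemon", f"osd.{osd_id}", "dump_historic_ops"])
        ops = json.loads(out).get("ops", [])
        return sorted(ops, key=lambda op: op.get("duration", 0), reverse=True)[:top]

    for op in slowest_ops(osd_id=3):
        print(f"{op.get('duration', 0):8.3f}s  {op.get('description', '')[:80]}")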

Re: [ceph-users] use object size of 32k rather than 4M

2015-12-23 Thread Van Leeuwen, Robert
> In order to reduce the enlargement impact, we want to change the default size of the object from 4M to 32k.
>
> We know that will increase the number of objects on one OSD and make the remove process take longer.
>
> Hmm, here I want to ask you guys: are there any other potential problems that will…
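
The scale of that object-count increase in plain numbers (my arithmetic, per TB of stored data, ignoring replication):

    TB = 10**12

    def objects_per_tb(object_size_bytes):
        return TB // object_size_bytes

    for label, size in (("4M", 4 * 2**20), ("32k", 32 * 2**10)):
        print(f"{label:>4} objects: {objects_per_tb(size):>12,} per TB")
    # 32k objects mean roughly 128x as many objects as 4M objects.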

Re: [ceph-users] use object size of 32k rather than 4M

2015-12-23 Thread Van Leeuwen, Robert
> Thanks for your quick reply. Yeah, the number of files really will be the potential problem. But if it is just a memory problem, we could use more memory in our OSD servers.

Adding more memory might not be a viable solution: Ceph does not say how much data is stored in an inode, but the docs say the xa…
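
A rough feel for how much memory "just add RAM" implies (every number below is an assumption of mine, not a measured figure):

    BYTES_PER_CACHED_INODE = 1024   # assumed inode + dentry + xattr overhead
    OBJECT_SIZE = 32 * 2**10        # the proposed 32k object size
    OSD_CAPACITY_TB = 4             # assumed drive size

    objects = OSD_CAPACITY_TB * 10**12 // OBJECT_SIZE
    cache_gib = objects * BYTES_PER_CACHED_INODE / 2**30
    print(f"{objects:,} objects -> ~{cache_gib:.0f} GiB to keep their "
          f"inodes/dentries cached on a single OSD")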

Re: [ceph-users] Local SSD cache for ceph on each compute node.

2016-03-16 Thread Van Leeuwen, Robert
> Indeed, well understood.
>
> As a shorter-term workaround, if you have control over the VMs, you could always just slice out an LVM volume from local SSD/NVMe and pass it through to the guest. Within the guest, use dm-cache (or similar) to add a cache front-end to your RBD volume. If you…

Re: [ceph-users] Local SSD cache for ceph on each compute node.

2016-03-16 Thread Van Leeuwen, Robert
> My understanding of how a writeback cache should work is that it should only take a few seconds for writes to be streamed onto the network, and it is focused on resolving the speed issue of small sync writes. The writes would be bundled into larger writes that are not time sensitive.
>
> So th…
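
A toy model of that behaviour (nothing Ceph- or dm-cache-specific; the thresholds are arbitrary): small writes are acknowledged from a local buffer and pushed to the backend in larger batches within a bounded number of seconds.

    import time

    class ToyWritebackCache:
        def __init__(self, backend_write, max_dirty_age=2.0, batch_bytes=4 * 2**20):
            self.backend_write = backend_write
            self.max_dirty_age = max_dirty_age
            self.batch_bytes = batch_bytes
            self.buffer, self.buffered, self.oldest = [], 0, None

        def write(self, data: bytes):
            if self.oldest is None:
                self.oldest = time.monotonic()
            self.buffer.append(data)          # acknowledged here: fast sync write
            self.buffered += len(data)
            if (self.buffered >= self.batch_bytes
                    or time.monotonic() - self.oldest >= self.max_dirty_age):
                self.flush()

        def flush(self):
            if self.buffer:
                self.backend_write(b"".join(self.buffer))  # one large backend write
            self.buffer, self.buffered, self.oldest = [], 0, None

    cache = ToyWritebackCache(lambda blob: print(f"flushed {len(blob)} bytes"))
    for _ in range(1024):
        cache.write(b"x" * 4096)              # 1024 x 4k writes -> one ~4M flush
    cache.flush()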

Re: [ceph-users] Local SSD cache for ceph on each compute node.

2016-03-29 Thread Van Leeuwen, Robert
On 3/27/16, 9:59 AM, "Ric Wheeler" wrote:

> On 03/16/2016 12:15 PM, Van Leeuwen, Robert wrote:
>>> My understanding of how a writeback cache should work is that it should only take a few seconds for writes to be streamed onto the network and is…

Re: [ceph-users] Local SSD cache for ceph on each compute node.

2016-03-29 Thread Van Leeuwen, Robert
>>> If you try to look at the rbd device under dm-cache from another host, of course any data that was cached on the dm-cache layer will be missing, since the dm-cache device itself is local to the host you wrote the data from originally.
>>
>> And here it can (and probably will) go…