> Another issue is performance: you'll get 4x more IOPS with 4 x 2TB drives
> than with a single 8TB drive.
> So if you have a performance target, your money might be better spent on
> smaller drives.
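To put the spindle arithmetic in one place, a minimal sketch (Python; the ~150 random IOPS per 7.2k-rpm drive is an assumed rule-of-thumb figure, not a measurement):

# Random IOPS scale with spindle count, not with capacity.
# ~150 IOPS per 7.2k-rpm drive is an assumption for illustration.
IOPS_PER_SPINNER = 150

def aggregate_iops(num_drives, iops_per_drive=IOPS_PER_SPINNER):
    return num_drives * iops_per_drive

print("4 x 2TB:", aggregate_iops(4), "IOPS")   # ~600
print("1 x 8TB:", aggregate_iops(1), "IOPS")   # ~150

Same raw capacity, roughly four times the random I/O budget with the smaller drives.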
Regardless of the discussion about whether it is smart to have very large spinners:
Be aware that some of the b
> I'm wondering if anyone is using NVMe SSDs for journals?
> The Intel 750 series 400GB NVMe SSD offers good performance and price in
> comparison to, let's say, the Intel S3700 400GB.
> http://ark.intel.com/compare/71915,86740
> My concern would be MTBF / TBW which is only 1.2M hours and 70GB per day for
>
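For a rough sense of what the 70GB/day rating means for a journal device, a back-of-the-envelope sketch (the warranty period and the example journal traffic are assumptions, not numbers from the thread):

# Endurance sanity check for a journal SSD.
RATED_GB_PER_DAY = 70           # from the spec comparison above
WARRANTY_YEARS = 5              # assumed warranty period
rated_tbw = RATED_GB_PER_DAY * 365 * WARRANTY_YEARS / 1000
print(f"Rated endurance: ~{rated_tbw:.0f} TBW")

# Hypothetical workload: OSDs behind this journal average 20 MB/s of writes,
# all of which pass through the journal first.
journal_mb_per_s = 20
gb_per_day = journal_mb_per_s * 86400 / 1000
print(f"Journal traffic: ~{gb_per_day:.0f} GB/day "
      f"({gb_per_day / RATED_GB_PER_DAY:.0f}x the rating)")

Even a modest sustained write load can exceed a consumer-class rating by a large factor, which is exactly the TBW concern raised above.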
>We've hit issues (twice now) that seem (we have not
>figured out exactly how to confirm this yet) to be related to kernel
>dentry slab cache exhaustion - symptoms were a major slowdown in
>performance and slow requests all over the place on writes; watching
>OSD iostat would show a single drive hitt
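If you suspect the same thing, one quick check is the size of the dentry slab itself; a minimal sketch, assuming Linux and root access (since /proc/slabinfo is usually mode 0400):

# Report the current size of the kernel dentry slab from /proc/slabinfo.
# slabinfo 2.1 columns: name active_objs num_objs objsize objperslab ...
def dentry_slab():
    with open("/proc/slabinfo") as f:
        for line in f:
            fields = line.split()
            if fields and fields[0] == "dentry":
                active, total, objsize = map(int, fields[1:4])
                return active, total, active * objsize / 2**20
    return None

info = dentry_slab()
if info:
    active, total, mib = info
    print(f"dentry slab: {active}/{total} objects, ~{mib:.0f} MiB active")

Watching this alongside iostat during a slow-request episode would at least help confirm or rule out cache pressure.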
Hi Max,
You might also want to look at the PCIe lanes.
I am not an expert on the matter, but my guess would be that 8 NVMe drives +
2x100Gbit would be too much for
the current Xeon generation (40 PCIe lanes) to fully utilize.
I think the upcoming AMD/Intel offerings will improve that quite a bit s
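The lane budget is easy to sketch (Python; per-device lane counts are typical values, not taken from a specific board):

# Rough PCIe lane budget for the proposed box.
LANES_PER_NVME = 4          # typical for an NVMe SSD
LANES_PER_100G_NIC = 16     # typical for a 100GbE NIC on PCIe 3.0
CPU_LANES = 40              # current single-socket Xeon generation

needed = 8 * LANES_PER_NVME + 2 * LANES_PER_100G_NIC
print(f"needed: {needed} lanes, available: {CPU_LANES}")   # 64 vs 40

So on paper the devices ask for well over the 40 lanes a single CPU provides, before counting anything else on the board.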
Hi,
Let's start with a disclaimer: not an expert on any of these Ceph tuning
settings :)
However, in general with cluster intervals/timings:
You are trading quick failover detection for:
1) Processing power:
You might starve yourself of resources when expanding the cluster.
If you multiply all
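As a rough illustration of how those timings add up with cluster size (the peer count is an assumed typical value; osd_heartbeat_interval defaults to 6 seconds):

# Heartbeat ping volume as the cluster grows.
def heartbeat_pings_per_second(num_osds, peers_per_osd=10, interval_s=6.0):
    return num_osds * peers_per_osd / interval_s

for n in (12, 120, 1200):
    default = heartbeat_pings_per_second(n)
    aggressive = heartbeat_pings_per_second(n, interval_s=1.0)
    print(f"{n:5d} OSDs: {default:6.0f} pings/s (6s) vs {aggressive:6.0f} pings/s (1s)")

The tighter you make the intervals, the more of that background chatter every daemon has to process as the cluster expands.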
> Ceph runs on dedicated hardware, there is nothing there except Ceph,
>and the Ceph daemons already have full power over Ceph's data.
>And there is no random-code execution allowed on this node.
>
>Thus, Spectre & Meltdown are meaningless for Ceph nodes, and
>mitigations should
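Whichever way you come down on that trade-off, it is easy to check what a node is actually applying; a minimal sketch using the standard sysfs interface (present on roughly 4.15+ kernels):

# Print the kernel's reported mitigation status per CPU vulnerability.
import pathlib

vuln_dir = pathlib.Path("/sys/devices/system/cpu/vulnerabilities")
for entry in sorted(vuln_dir.iterdir()):
    print(f"{entry.name:20s} {entry.read_text().strip()}")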
>> There is no way to fill up all disks evenly with the same number of
>> Bytes and then stop filling the small disks when they're full and
>> only continue filling the larger disks.
>This is possible by adjusting CRUSH weights. Initially the smaller
>drives are weighted more highly than l
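The underlying arithmetic is just proportionality: each OSD receives roughly total_data * weight / sum(weights). A small sketch with made-up OSD names and sizes:

# CRUSH places data in proportion to weight.
def expected_fill(total_tb, weights):
    s = sum(weights.values())
    return {osd: round(total_tb * w / s, 2) for osd, w in weights.items()}

# Default weights track capacity (2TB -> 2.0, 8TB -> 8.0): the big drive
# takes 4x the data.
print(expected_fill(3, {"osd.0": 2.0, "osd.1": 8.0}))   # {'osd.0': 0.6, 'osd.1': 2.4}
# Weighting the small drive "more highly" relative to its size (here equal
# absolute weights) gives both the same number of bytes.
print(expected_fill(3, {"osd.0": 5.0, "osd.1": 5.0}))   # {'osd.0': 1.5, 'osd.1': 1.5}

Lowering a drive's weight later (ceph osd crush reweight osd.N <weight>) reduces its share again, at the cost of the data movement that comes with any reweight.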
> On 5/1/18, 12:02 PM, "ceph-users on behalf of Shantur Rathore"
> wrote:
>I am not sure if the benchmark is overloading the cluster, as in 3 out of
>5 runs the benchmark gets around 37K IOPS, and suddenly, for the
>problematic runs, it drops to 0 IOPS for a couple of minutes and then
>
>In order to reduce the enlargement impact, we want to change the default size
>of the object from 4M to 32k.
>
>We know that this will increase the number of objects per OSD and make the
>removal process take longer.
>
>Hmm, here I want to ask you guys: are there any other potential problems that will
>3
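To put the 4M -> 32k change in numbers (the 4 TiB of data per OSD is an assumed example, not a figure from this thread):

# Object count per OSD scales with the inverse of the object size.
data_per_osd = 4 * 2**40                        # assume 4 TiB of data per OSD
for obj_size_kib in (4096, 32):                 # 4 MiB vs 32 KiB
    n = data_per_osd // (obj_size_kib * 1024)
    print(f"object size {obj_size_kib:>5} KiB -> {n:>12,} objects per OSD")
# 4096 KiB ->   1,048,576 objects
#   32 KiB -> 134,217,728 objects (128x more)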
>Thanks for your quick reply. Yeah, the number of files really will be the
>potential problem. But if it is just a memory problem, we could use more
>memory in our OSD servers.
Adding more memory might not be a viable solution:
Ceph does not say how much data it stores in an inode, but the docs say the
xa
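Whatever the exact per-inode overhead turns out to be, a very rough estimate shows why "just add memory" gets expensive fast (the per-dentry and per-inode sizes below are assumptions for illustration, not documented figures):

# Rough slab footprint to keep every object's dentry + inode cached.
DENTRY_BYTES = 200        # assumed order of magnitude per dentry
INODE_BYTES = 1024        # assumed order of magnitude per cached inode

objects_per_osd = 134_217_728                  # the 32 KiB example above
bytes_needed = objects_per_osd * (DENTRY_BYTES + INODE_BYTES)
print(f"~{bytes_needed / 2**30:.0f} GiB of cache per OSD just for file metadata")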
>Indeed, well understood.
>
>As a shorter term workaround, if you have control over the VMs, you could
>always just slice out an LVM volume from local SSD/NVMe and pass it through to
>the guest. Within the guest, use dm-cache (or similar) to add a cache
>front-end to your RBD volume.
If you
>
>My understanding of how a writeback cache should work is that it should only
>take a few seconds for writes to be streamed onto the network and is focussed
>on resolving the speed issue of small sync writes. The writes would be bundled
>into larger writes that are not time sensitive.
>
>So th
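For what it's worth, the behaviour being described (absorb small sync writes locally, acknowledge quickly, push them to the backend as larger, time-insensitive batches) can be illustrated with a toy write-back buffer. This is an illustration only, not how dm-cache is actually implemented:

# Toy write-back buffer: coalesce small writes, flush as one large write
# after a size threshold or a short delay (checked on the next write;
# a real cache would also flush from a background timer).
import time

class WritebackBuffer:
    def __init__(self, backend_write, flush_after_s=3.0, max_bytes=4 * 2**20):
        self.backend_write = backend_write
        self.flush_after_s = flush_after_s
        self.max_bytes = max_bytes
        self.pending = []
        self.first_write_at = None

    def write(self, data: bytes):
        if not self.pending:
            self.first_write_at = time.monotonic()
        self.pending.append(data)          # "acknowledged" immediately
        if (sum(map(len, self.pending)) >= self.max_bytes or
                time.monotonic() - self.first_write_at >= self.flush_after_s):
            self.flush()

    def flush(self):
        if self.pending:
            self.backend_write(b"".join(self.pending))   # one large write
            self.pending = []

# 2048 x 4 KiB sync writes become two 4 MiB backend writes.
buf = WritebackBuffer(lambda blob: print(f"backend write: {len(blob)} bytes"))
for _ in range(2048):
    buf.write(b"\0" * 4096)
buf.flush()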
On 3/27/16, 9:59 AM, "Ric Wheeler" wrote:
>On 03/16/2016 12:15 PM, Van Leeuwen, Robert wrote:
>>> My understanding of how a writeback cache should work is that it should
>>> only take a few seconds for writes to be streamed onto the network and is
>>>
>>> If you try to look at the rbd device under dm-cache from another host, of
>>> course
>>> any data that was cached on the dm-cache layer will be missing since the
>>> dm-cache device itself is local to the host you wrote the data from
>>> originally.
>> And here it can (and probably will) go