Hello,
We are planning to deploy our first Ceph cluster with 14 storage nodes and 3
monitor nodes. Each storage node has 12 SATA disks and 4 SSDs. We plan to
use 2 of the SSDs as journal disks and 2 for cache tiering.
Now the question was raised in our team whether it would be better to put all
SSDs, let's s
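(For reference, a minimal sketch of the journal layout being described, with
hypothetical device names: /dev/sd[a-l] as the 12 SATA disks, /dev/sdm and
/dev/sdn as the two journal SSDs, six journals each. ceph-disk carves a
journal partition out of the SSD for each OSD it prepares.)

  # hypothetical device names; adjust to the actual hardware
  for d in a b c d e f; do ceph-disk prepare /dev/sd$d /dev/sdm; done
  for d in g h i j k l; do ceph-disk prepare /dev/sd$d /dev/sdn; done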
Hello,
see the current "Blocked requests/ops?" thread in this ML, especially the
later parts.
And a number of similar threads.
In short, the CPU requirements for SSD-based pools are significantly higher
than for HDD or HDD/SSD-journal pools.
So having dedicated SSD nodes with fewer OSDs, faster C
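(To illustrate the dedicated-SSD-nodes idea: one way to keep SSD OSDs in
their own pool is a separate CRUSH root plus a rule that selects from it.
A rough sketch with made-up node and pool names; the ruleset id to use comes
from 'ceph osd crush rule dump'.)

  # made-up names: node-ssd1 is a dedicated SSD node, ssd-pool the SSD pool
  ceph osd crush add-bucket ssd root
  ceph osd crush move node-ssd1 root=ssd
  ceph osd crush rule create-simple ssd_rule ssd host
  ceph osd pool set ssd-pool crush_ruleset 1   # id as reported by rule dump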
The code has been backported and should be part of the firefly 0.80.10
release and the hammer 0.94.2 release.
Nathan
On 05/14/2015 07:30 AM, Yehuda Sadeh-Weinraub wrote:
The code is in wip-11620, and it's currently on top of the next branch. We'll
get it through the tests, then get it into ha
Dear Eric:
Thanks for your information. The command 'reboot -fn' works well.
I don't know whether anyone else has hit an 'umount stuck' condition like
mine. If possible, I would like to find out why the failover process doesn't
work correctly after 30 minutes.
WD
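(For anyone else hitting a stuck umount: the usual escalation before a forced
reboot is a forced or lazy unmount; 'reboot -fn' then forces the reboot
without syncing disks or going through init. A sketch, mount point
hypothetical:)

  umount /mnt/cephfs       # normal unmount
  umount -f /mnt/cephfs    # force, for an unresponsive mount
  umount -l /mnt/cephfs    # lazy: detach now, clean up once no longer busy
  reboot -fn               # last resort: no sync, no init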
Hi All,
I was noticing poor performance on my cluster, and when I went to investigate
I noticed OSD 29 was flapping up and down. On investigation it looks like it
has 2 pending sectors; the kernel log is filled with the following:
end_request: critical medium error, dev sdk, sector 4483365656
en
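(A sketch of the usual triage for this, assuming the OSD id and device from
above: confirm the pending sectors with smartctl, then drain the OSD before
replacing the disk.)

  smartctl -A /dev/sdk | egrep 'Current_Pending_Sector|Reallocated_Sector_Ct'
  ceph osd out 29    # start migrating data off osd.29
  ceph -w            # watch recovery until the cluster is clean again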
Martin,
It all depends on your workload.
For example, if you are not bothered about write speed at all, I would say to
configure the primary affinity of your cluster so that the primary OSDs are
the ones hosted on SSDs. If you are considering 4 SSDs per node, that is a
total of 56 SSDs and 14 * 12
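(A minimal sketch of the primary-affinity approach, with made-up OSD ids; on
firefly/hammer it also needs 'mon osd allow primary affinity = true' set on
the monitors.)

  # prefer an SSD-hosted OSD as primary, never the HDD-hosted one
  ceph osd primary-affinity osd.12 1.0
  ceph osd primary-affinity osd.3 0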
Hi,
I built the Ceph code from wip-newstore on RHEL7 and am running performance
tests to compare with filestore. After a few hours of running the tests, the
OSD daemons started to crash. Here is the stack trace; the OSD crashes
immediately after a restart, so I could not get it up and running.
ceph
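(If it helps reproduce: one way to capture a fuller trace is to run the
crashing OSD in the foreground with the osd and newstore debug subsystems
turned up. A sketch with a placeholder OSD id; I'm assuming the debug
subsystem on that branch is named 'newstore'.)

  ceph-osd -i 0 -f --debug-osd 20 --debug-newstore 30 2>&1 | tee osd.0.log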
Hello,
On Sat, 30 May 2015 22:23:22 +0100 Nick Fisk wrote:
> Hi All,
>
>
>
> I was noticing poor performance on my cluster and when I went to
> investigate I noticed OSD 29 was flapping up and down. On investigation
> it looks like it has 2 pending sectors, kernel log is filled with the
> fo