Hi,
> When rebooting one of the nodes (e.g. for a kernel upgrade), the OSDs
> do not seem to shut down correctly. Clients hang, and ceph osd tree shows
> the OSDs of that node as still up. Repeated runs of ceph osd tree show
> them going down after a while. For instance, here OSD.7 is still up,
> even
Hi,
I just noticed a strange behavior on one OSD (and only one; other OSDs on the
same server didn’t show that behavior) in a Ceph cluster (all 0.94.2 on Debian
7 with a self-built 4.1 kernel).
The OSD started to accumulate slow requests, and a restart didn’t help.
After a few seconds the log is fil
Hi,
> I just noticed a strange behavior on one OSD (and only one; other OSDs on the
> same server didn’t show that behavior) in a Ceph cluster (all 0.94.2 on
> Debian 7 with a self-built 4.1 kernel).
> The OSD started to accumulate slow requests, and a restart didn’t help.
>
> After a few seconds th
Hi,
> Can someone give some insight into whether it is possible to mix SSDs with
> HDDs on the OSDs?
you’ll have more or less four options:
- SSDs for the journals of the OSD processes (the SSD must be able to perform
well on synchronous writes); a rough sizing sketch for this option follows below
- an SSD-only pool for „high performance“ data
- Using SSDs for the
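To make the journal option a bit more concrete, here is a minimal sizing
sketch in Python. The per-HDD write rate and the number of HDD OSDs sharing
one journal SSD are illustrative assumptions, not figures from this thread.

# Rough sketch: sustained synchronous write bandwidth a journal SSD needs
# when it journals for several HDD-backed OSDs. The figures are assumptions.

HDD_WRITE_MB_S = 110        # assumed sustained write rate of one HDD OSD
OSDS_PER_JOURNAL_SSD = 4    # assumed number of HDD OSDs sharing one SSD

# Every write is committed to the journal before it reaches the data disk,
# so the SSD sees the combined write stream of all OSDs it journals for.
required_ssd_mb_s = HDD_WRITE_MB_S * OSDS_PER_JOURNAL_SSD
print(f"journal SSD should sustain ~{required_ssd_mb_s} MB/s of sync writes")

The point is simply that the journal SSD sees the combined synchronous write
stream of every OSD it fronts, which is why sustained sync-write performance
matters more than raw capacity for this option.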
> I am new to this and have no idea how to set up the configuration or where
> to start, based on the four options mentioned.
> I hope you can expand on this further if possible.
>
> Best regards,
> Mario
>
> On Tue, Jul 21, 2015 at 2:44 PM, Johannes Formann wrote:
Hello,
what is the „size“ parameter of your pool?
Some math shows the impact:
size=3 means each write is written 6 times (3 copies, first to the journal,
later to the disk). Calculating with 1,300 MB/s of „client“ bandwidth, that
means:
3 (size) * 1300 MB/s / 6 (SSD) => 650 MB/s per SSD
3 (size) * 1300 MB/s / 30 (HDD) => 130 MB/s per HDD
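The same arithmetic as a small worked example in Python. The 6-SSD / 30-HDD
device counts are taken from the figures above; the rest is just the
write-amplification math, so treat it as a back-of-envelope sketch.

# Back-of-envelope: per-device write load for a size=3 pool where every
# write hits a journal (SSD) first and a data disk (HDD) afterwards.

size = 3                  # pool replica count
client_mb_s = 1300        # aggregate client write bandwidth in MB/s
num_journal_ssds = 6
num_data_hdds = 30

total_mb_s = size * client_mb_s        # 3900 MB/s hit the journals, and again the data disks
print(total_mb_s / num_journal_ssds)   # 650.0 MB/s per journal SSD
print(total_mb_s / num_data_hdds)      # 130.0 MB/s per data HDD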
> But my question is: why is the write speed divided between the clients?
> And how many OSD nodes, OSD daemons, and PGs do I have to add to / remove
> from Ceph so that each CephFS client can write at its maximum network speed
> (10 Gbit/s ~ 1.2 GB/s)?
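One way to reason about that question is to run the write-amplification math
above in reverse and solve for the number of devices. A hedged sketch in
Python; the client count and the per-device write rates are assumptions, not
values from the thread.

# Sketch: how many journal SSDs / data HDDs a size=3 pool would roughly need
# so that N clients can each write at ~1.2 GB/s. Device rates are assumptions.
from math import ceil

size = 3
clients = 4
client_mb_s = 1200                  # ~10 Gbit/s per client
ssd_mb_s, hdd_mb_s = 400, 110       # assumed sustained write rate per device

total_mb_s = size * clients * client_mb_s   # every byte lands on 3 journals and 3 data disks
print("journal SSDs needed:", ceil(total_mb_s / ssd_mb_s))   # 36
print("data HDDs needed:   ", ceil(total_mb_s / hdd_mb_s))   # 131

In other words, the speed is divided between clients because the total
replicated write stream has to fit onto the same set of journals and data
disks; each additional full-speed client adds another size * 1.2 GB/s of
device-level writes.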
I agree. For the existing stable series, distribution support should be
continued.
But for new releases (infernalis, jewel, ...) I see no problem with dropping
the older distribution versions.
greetings
Johannes
> On 30.07.2015 at 16:39, Jon Meacham wrote:
>
> If hammer and firefly bugf
On 19.12.2013 at 20:39, Wolfgang Hennerbichler wrote:
> On 19 Dec 2013, at 16:43, Gruher, Joseph R wrote:
>
>> It seems like this calculation ignores that, in a large Ceph cluster with
>> triple replication, having three drive failures doesn't automatically
>> guarantee data loss (unlike a RAID
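To put a number on that point, here is a rough probability sketch in Python.
It assumes each PG's replica set is a uniformly random 3-OSD subset, which
ignores CRUSH failure domains, so it is only a back-of-envelope illustration;
the OSD and PG counts are made up.

# Rough chance that 3 simultaneous disk failures destroy all replicas of at
# least one PG, assuming PGs land on uniformly random 3-OSD subsets
# (real CRUSH placement with failure domains behaves differently).
from math import comb

def p_data_loss(num_osds: int, num_pgs: int, size: int = 3) -> float:
    replica_sets = comb(num_osds, size)   # possible 3-OSD replica sets
    p_one_pg = 1 / replica_sets           # a given PG maps exactly onto the failed set
    return 1 - (1 - p_one_pg) ** num_pgs

# Example: 100 OSDs, 4096 PGs -> roughly a 2.5% chance that three failed
# disks happen to hold all copies of some PG, rather than a guaranteed loss.
print(f"{p_data_loss(100, 4096):.2%}")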
Hi,
> I’m having a (strange) issue with OSD bucket persistence / affinity on my
> test cluster.
>
> The cluster is a PoC / test setup, by no means production. It consists of a
> single OSD / MON host plus another MON running on a KVM VM.
>
> Out of 12 OSDs I’m trying to get osd.10 and osd.11 to be p