Re: [ceph-users] Node reboot -- OSDs not "logging off" from cluster

2015-07-02 Thread Johannes Formann
Hi, > When rebooting one of the nodes (e.g. for a kernel upgrade) the OSDs > do not seem to shut down correctly. Clients hang and ceph osd tree shows > the OSDs of that node still up. Repeated runs of ceph osd tree show > them going down after a while. For instance, here OSD.7 is still up, > even …
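For context, the usual way to avoid this during a planned reboot is to stop the OSDs cleanly and tell the monitors not to start rebalancing in the meantime. A minimal sketch; the sysvinit syntax matches Debian 7, adjust for your init system:

    # keep the cluster from marking OSDs out and rebalancing while the node is down
    ceph osd set noout

    # stop the local OSD daemons so they report themselves down immediately
    service ceph stop osd

    reboot

    # once the node is back and its OSDs have rejoined, restore normal behaviour
    ceph osd unset noout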

[ceph-users] One OSD fails (slow requests, high cpu, termination)

2015-07-20 Thread Johannes Formann
Hi, I just noticed a strange behavior on one OSD (and only one; other OSDs on the same server didn't show that behavior) in a Ceph cluster (all 0.94.2 on Debian 7 with a self-built 4.1 kernel). The OSD started to accumulate slow requests, and a restart didn't help. After a few seconds the log is filled …
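For narrowing down a single misbehaving OSD like this, the admin socket is usually the first stop. A sketch only, assuming osd.7 is the affected daemon (the id is illustrative):

    # which requests are flagged slow, and on which OSDs
    ceph health detail

    # operations currently in flight on the suspect OSD
    ceph daemon osd.7 dump_ops_in_flight

    # recently completed slow operations, with per-step timings
    ceph daemon osd.7 dump_historic_ops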

Re: [ceph-users] One OSD fails (slow requests, high cpu, termination)

2015-07-20 Thread Johannes Formann
Hi, > I just noticed a strange behavior on one OSD (and only one; other OSDs on the > same server didn't show that behavior) in a Ceph cluster (all 0.94.2 on > Debian 7 with a self-built 4.1 kernel). > The OSD started to accumulate slow requests, and a restart didn't help. > > After a few seconds the …

Re: [ceph-users] Ceph with SSD and HDD mixed

2015-07-20 Thread Johannes Formann
Hi, > Can someone give insights on whether it is possible to mix SSDs with HDDs on the > OSDs? You'll have more or less four options: - SSDs for the journals of the OSD processes (the SSD must be able to perform well on synchronous writes) - an SSD-only pool for „high performance“ data - using SSDs for the …
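For the "SSD-only pool" option, the usual hammer-era approach is a separate CRUSH root containing only the SSD-backed OSDs, plus a rule that draws from it. A minimal sketch of such a rule from a decompiled CRUSH map; the bucket and rule names are illustrative, not from the thread:

    rule ssd-pool {
        ruleset 1
        type replicated
        min_size 1
        max_size 10
        step take ssd
        step chooseleaf firstn 0 type host
        step emit
    }

A pool is then pointed at that rule with something like ceph osd pool set <pool> crush_ruleset 1.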

Re: [ceph-users] Ceph with SSD and HDD mixed

2015-07-21 Thread Johannes Formann
> I am naive about this and have no idea how to set up the configuration or where to > start, based on the 4 options mentioned. > Hope you can expound on it further if possible. > > Best regards, > Mario > > On Tue, Jul 21, 2015 at 2:44 PM, Johannes Formann w…
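For the "SSDs for the journals" option, each HDD-backed OSD simply gets its journal on a partition (or file) on an SSD. A hedged ceph.conf sketch; the path and the OSD id are illustrative, not from the thread:

    [osd]
    # journal size in MB, sized for the expected write burst
    osd journal size = 10240

    [osd.3]
    # journal for this HDD-backed OSD lives on a partition of a shared SSD
    osd journal = /dev/disk/by-partlabel/journal-osd3

ceph-disk prepare can also set this up directly by passing the SSD journal device alongside the data disk when the OSD is created.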

Re: [ceph-users] Did maximum performance reached?

2015-07-28 Thread Johannes Formann
Hello, what is the „size“ parameter of your pool? Some math shows the impact: size=3 means each write is written 6 times (3 copies, first to the journal, later to disk). Calculating with 1,300 MB/s „client“ bandwidth, that means: 3 (size) * 1300 MB/s / 6 (SSDs) => 650 MB/s per SSD; 3 (size) * 1300 MB/s / 30 …
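Spelled out, the per-device load is size × aggregate client bandwidth ÷ number of devices sharing that copy of the write. With the numbers above (and assuming the truncated "/ 30" refers to the number of data disks):

    journals:   3 * 1300 MB/s / 6  SSDs  = 650 MB/s written to each journal SSD
    data disks: 3 * 1300 MB/s / 30 disks = 130 MB/s written to each data disk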

Re: [ceph-users] Did maximum performance reached?

2015-07-28 Thread Johannes Formann
> But my question is: why is the speed divided between clients? > And how many OSD nodes, OSD daemons, and PGs do I have to add to/remove from the cluster > so that each CephFS client can write at its maximum network speed (10 Gbit/s ~ > 1.2 GB/s)?

Re: [ceph-users] dropping old distros: el6, precise 12.04, debian wheezy?

2015-07-30 Thread Johannes Formann
I agree. For the existing stable series the distribution support should be continued, but for new releases (infernalis, jewel, ...) I see no problem with dropping the older versions of the distributions. Greetings, Johannes > On 30.07.2015 at 16:39, Jon Meacham wrote: > > If hammer and firefly bugfix …

Re: [ceph-users] Failure probability with largish deployments

2013-12-19 Thread Johannes Formann
On 19.12.2013 at 20:39, Wolfgang Hennerbichler wrote: > On 19 Dec 2013, at 16:43, Gruher, Joseph R wrote: > >> It seems like this calculation ignores that in a large Ceph cluster with >> triple replication, having three drive failures doesn't automatically >> guarantee data loss (unlike a RAID …
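To make that point concrete, a back-of-envelope sketch (the numbers are illustrative and assume roughly uniform PG placement): with N OSDs there are C(N,3) possible drive triples, but data only lives on about as many distinct triples as there are PGs, so three simultaneous failures only cause data loss when they happen to share a PG.

    possible 3-drive combinations:         C(100, 3) = 161,700
    triples actually holding a common PG:  at most 4,096 (one per PG)
    P(data loss | 3 simultaneous failures) ≈ 4,096 / 161,700 ≈ 2.5 %

A single RAID group, by contrast, loses data with certainty in the same scenario, which is where the two calculations diverge.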

Re: [ceph-users] CRUSH odd bucket affinity / persistence

2015-09-12 Thread Johannes Formann
Hi, > I'm having a (strange) issue with OSD bucket persistence / affinity on my > test cluster. > > The cluster is PoC / test, by no means production. It consists of a single OSD > / MON host + another MON running on a KVM VM. > > Out of 12 OSDs I'm trying to get osd.10 and osd.11 to be p…
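A common cause of buckets "snapping back" like this is the OSD re-registering its CRUSH location on every daemon start, which overrides hand-made moves. A hedged ceph.conf sketch; the bucket names are illustrative, not from the thread:

    [osd]
    # keep manual CRUSH placement: don't let OSDs re-inject their location on start
    osd crush update on start = false

    # alternatively, pin the intended location so the automatic update is harmless
    # osd crush location = root=ssd host=node1-ssd

After that, a one-time ceph osd crush set osd.10 <weight> root=ssd host=node1-ssd (weight and names again illustrative) should survive daemon restarts.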