[ceph-users] crushmap shows wrong OSDs for PGs (EC pool)

2018-06-29 Thread ulembke
Hi all, I had an issue on a Hammer cluster (0.94.9, upgraded from 0.94.7 today). Three PGs are incomplete:
root@ceph-06:~# ceph health detail
HEALTH_WARN 3 pgs incomplete; 3 pgs stuck inactive; 3 pgs stuck unclean
pg 24.cc is stuck inactive for 595902.285007, current state incomplete, l
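To compare what CRUSH maps a PG to with what it is actually using, the mapping and the peering state can be inspected directly; a minimal sketch using the PG id from the mail (everything else is generic):
# ceph pg map 24.cc
# ceph pg 24.cc query
The map command prints the up and acting OSD sets, and the query output usually shows why peering stalled, for example which down OSDs it would still want to probe.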

Re: [ceph-users] osds with different disk sizes may kill performance

2018-04-12 Thread ulembke
Hi, you can also set the primary_affinity to 0.5 on the 8 TB disks to reduce the read load on them (this way you don't waste 50% of the space). Udo
On 2018-04-12 04:36, ? ?? wrote:
Hi, for anybody who may be interested, here I share a process of locating the reason for ceph cluster performanc
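A minimal sketch of lowering the primary affinity on one of the larger disks (osd.10 is a made-up id; on some releases the monitors may first need mon_osd_allow_primary_affinity = true in ceph.conf):
# ceph osd primary-affinity osd.10 0.5
With a lower affinity the OSD is chosen as primary, and therefore serves reads, less often, while its CRUSH weight and data placement stay unchanged.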

Re: [ceph-users] how to improve performance

2017-11-21 Thread ulembke
On 2017-11-21 13:12, Rudi Ahlers wrote:
On Tue, Nov 21, 2017 at 10:46 AM, Christian Balzer wrote:
On Tue, 21 Nov 2017 09:21:58 +0200 Rudi Ahlers wrote:
> On Mon, Nov 20, 2017 at 2:36 PM, Christian Balzer wrote: >...
> > > Ok, so I have 4 physical servers and need to setup a highly redunda

Re: [ceph-users] how to improve performance

2017-11-20 Thread ulembke
Hi, not flushing the Ceph journal! I am talking about Linux's caching. If you run free, you can see how much is cached, like:
# free
              total        used        free      shared  buff/cache   available
Mem:       41189692    16665960     4795700      124780    19728032    28247464

Re: [ceph-users] how to improve performance

2017-11-20 Thread ulembke
Hi Rudi,
On 2017-11-20 11:58, Rudi Ahlers wrote:
... Some more stats:
root@virt2:~# rados bench -p Data 10 seq
hints = 1
  sec Cur ops   started  finished  avg MB/s  cur MB/s last lat(s)  avg lat(s)
    0       0         0         0         0         0           -           0
    1      16       402
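For comparison, a typical way to produce such a sequential-read run (the pool name Data is taken from the thread; the write pass needs --no-cleanup so the seq pass has objects to read):
# rados bench -p Data 60 write --no-cleanup
# rados bench -p Data 10 seq
# rados -p Data cleanup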

[ceph-users] What about the release notes for 10.2.10?

2017-10-06 Thread ulembke
Hi, once again an update is available without release notes... http://ceph.com/releases/v10-2-10-jewel-released/ is not found. No announcement on the mailing list (perhaps I missed something). I know it is normally safe to update Ceph, but two releases ago it wasn't. Udo

Re: [ceph-users] Stealth Jewel release?

2017-07-14 Thread ulembke
Hi, 10.2.9 is there:
# apt list --upgradable
Listing... Done
ceph/stable 10.2.9-1~bpo80+1 amd64 [upgradable from: 10.2.8-1~bpo80+1]
Changelog file?? Udo
On 2017-07-14 09:26, Martin Palma wrote:
So only the ceph-mds is affected? Let's say we have mons and osds on 10.2.8 and the MDS on 10.2.6 or
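When the release notes page is missing, the changelog that ships with the Debian/Ubuntu packages can still be checked; a small sketch (package name ceph as in the apt listing above):
# apt-get changelog ceph
# zless /usr/share/doc/ceph/changelog.Debian.gz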

Re: [ceph-users] ceph 'tech' question

2017-03-24 Thread ulembke
Hi, no: Ceph reads from the primary OSD of a PG, so approximately 33% of your reads are local. And why? Better distribution of the read load. Udo
On 2017-03-24 09:49, mj wrote:
Hi all, something that I am curious about: suppose I have a three-server cluster, all with identical OSD configuration, and also a replic
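Which OSD is primary for a given object can be checked directly; a minimal sketch with made-up pool name, object name and output (the p2 in the output marks osd.2 as the primary that serves the reads):
# ceph osd map rbd my-object
osdmap e123 pool 'rbd' (0) object 'my-object' -> pg 0.d2e4b2a0 (0.a0) -> up ([2,5,7], p2) acting ([2,5,7], p2)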

Re: [ceph-users] - permission denied on journal after reboot

2017-02-13 Thread ulembke
Hi Piotr, is your partition GUID right? Look with sgdisk:
# sgdisk --info=2 /dev/sdd
Partition GUID code: 45B0969E-9B03-4F30-B4C6-B4B80CEFF106 (Unknown)
Partition unique GUID: 396A0C50-738C-449E-9FC6-B2D3A4469E51
First sector: 2048 (at 1024.0 KiB)
Last sector: 10485760 (at 5.0 GiB)
Partition size
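45B0969E-9B03-4F30-B4C6-B4B80CEFF106 is the Ceph journal partition type code (sgdisk simply does not know its name, hence "Unknown"). If a journal partition is missing that type code, it can be set with sgdisk; a sketch assuming the journal is partition 2 of /dev/sdd as in the quoted output:
# sgdisk --typecode=2:45B0969E-9B03-4F30-B4C6-B4B80CEFF106 /dev/sdd
# partprobe /dev/sdd
After that the udev rules can re-apply the expected ownership on the journal device.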

Re: [ceph-users] Why would "osd marked itself down" not be recognised?

2017-01-13 Thread ulembke
Hi Greg,
On 2017-01-12 19:54, Gregory Farnum wrote:
... That's not what anybody intended to have happen. It's possible the simultaneous loss of a monitor and the OSDs is triggering a case that's not behaving correctly. Can you create a ticket at tracker.ceph.com with your logs and what steps

Re: [ceph-users] Why would "osd marked itself down" not be recognised?

2017-01-12 Thread ulembke
Hi,
On 2017-01-12 11:38, Shinobu Kinjo wrote:
Sorry, I don't get your question. Generally speaking, the MON maintains maps of the cluster state:
* Monitor map
* OSD map
* PG map
* CRUSH map
Yes, and if an OSD says "osd.5 marked itself down", the mon can update the OSD map immediately (an

[ceph-users] Why would "osd marked itself down" not be recognised?

2017-01-12 Thread ulembke
Hi all, I have just rebooted all 3 nodes (one after the other) of a small Proxmox VE Ceph cluster. All nodes are mons and have two OSDs each. During the reboot of one node, Ceph stalled longer than normal, and I looked at the "ceph -w" output to find the reason. This is not the reason, but I wonder why "osd mar
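For planned reboots, the usual way to keep Ceph from reacting to the temporarily missing OSDs is the noout flag (this is general practice, not something suggested in the original mail):
# ceph osd set noout
... reboot the node and wait until its OSDs are back up ...
# ceph osd unset noout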

Re: [ceph-users] Ceph - Health and Monitoring

2017-01-02 Thread ulembke
Hi Andre, I use check_ceph_dash on top of ceph-dash for this (it is a Nagios/Icinga plugin).
https://github.com/Crapworks/ceph-dash
https://github.com/Crapworks/check_ceph_dash
ceph-dash provides a simple, clear overview as a web dashboard. Udo
On 2017-01-02 12:42, Andre Forigato wrote:
Hello,

Re: [ceph-users] 10Gbit switch advice for small ceph cluster upgrade

2016-12-15 Thread ulembke
Hi Björn, I think he uses something like this: http://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server Udo
On 2016-12-15 11:10, Bjoern Laessig wrote:
On Thu, 2016-10-27 at 15:47 +0200, mj wrote:
Hi Jelle,
On 10/27/2016 03:04 PM, Jelle de Jong wrote:
> Hello everybody,
> > I want to up
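For reference, in such a full-mesh setup each node gets a direct link to each of the other two and routes the peers over those links; a rough /etc/network/interfaces sketch for one node, in the spirit of the linked wiki page (interface names and the 10.15.15.0/24 addresses are made up, see the wiki for the authoritative version):
auto ens19
iface ens19 inet static
        address 10.15.15.50
        netmask 255.255.255.0
        up ip route add 10.15.15.51/32 dev ens19
        down ip route del 10.15.15.51/32

auto ens20
iface ens20 inet static
        address 10.15.15.50
        netmask 255.255.255.0
        up ip route add 10.15.15.52/32 dev ens20
        down ip route del 10.15.15.52/32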

Re: [ceph-users] Ceph performance is too good (impossible..)...

2016-12-12 Thread ulembke
Hi, if you write from a client, the data is written to one (or more) placement groups in 4 MB chunks. These PGs are written to the journal and to the OSD disk, and because of that the data also sits in the Linux file buffer (page cache) on the OSD node (until the OS needs that memory for other data (file buffer or anything el
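To get read numbers that are not served from the OSD nodes' page cache, the caches can be dropped on every OSD node before the read benchmark; a small sketch with placeholder hostnames:
# for h in ceph-01 ceph-02 ceph-03; do ssh $h 'sync; echo 3 > /proc/sys/vm/drop_caches'; done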

Re: [ceph-users] OSDs cpu usage

2016-12-12 Thread ulembke
Hi, update to 10.2.5, available since Saturday. Udo
On 2016-12-12 13:40, George Kissandrakis wrote:
Hi, I have a jewel/xenial Ceph installation with 61 OSDs, mixed SAS/SATA, in hosts with two roots. The installation has version jewel 10.2.3-1xenial (and monitors). Two hosts were newly a
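After upgrading, the versions the daemons are actually running can be checked per daemon (a quick sketch; the mon id is an assumption, it is often the short hostname, and the second command has to run on the monitor host itself):
# ceph tell osd.* version
# ceph daemon mon.$(hostname -s) version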

Re: [ceph-users] Production System Evaluation / Problems

2016-11-29 Thread ulembke
On 2016-11-28 10:29, Strankowski, Florian wrote:
Hey guys, ... I simply can't get osd.0 back up. I took it offline, marked it out, reinserted it, set it up again, deleted the OSD configs, remade them, no success whatsoever. IMHO the documentation on this part is a bit "lousy", so I'm missing some points of informa
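For reference, the usual sequence to remove a dead OSD completely before re-creating it (not taken from this thread; osd.0 as in the mail, systemd service name as used on Jewel):
# ceph osd out 0
# systemctl stop ceph-osd@0        (on the OSD host)
# ceph osd crush remove osd.0
# ceph auth del osd.0
# ceph osd rm 0
Afterwards the disk can be prepared again, for example with ceph-disk prepare or ceph-deploy osd create.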

Re: [ceph-users] thanks for a double check on ceph's config

2016-05-10 Thread ulembke
Hi,
On 2016-05-10 05:48, Geocast wrote:
Hi members, we have 21 hosts as Ceph OSD servers; each host has 12 SATA disks (4 TB each) and 64 GB memory. Ceph version 10.2.0, Ubuntu 16.04 LTS. The whole cluster is newly installed. Can you help check whether the arguments we put in ceph.conf are reasonable o