Re: [ceph-users] [Jewel] Crash Osd with void Hit_set_trim

2017-10-23 Thread pascal.pu...@pci-conseil.net
Hello, On 24/10/2017 at 07:49, Brad Hubbard wrote: On Mon, Oct 23, 2017 at 4:51 PM, pascal.pu...@pci-conseil.net wrote: Hello, On 23/10/2017 at 02:05, Brad Hubbard wrote: 2

Re: [ceph-users] [Jewel] Crash Osd with void Hit_set_trim

2017-10-22 Thread pascal.pu...@pci-conseil.net
ing remove of cache tier... many thanks to http://fnordahl.com/2017/04/17/ceph-rbd-volume-header-recovery/, which helped recreate it and resurrect the RBD disk :) On 19/10/2017 at 00:19, Brad Hubbard wrote: On Wed, Oct 18, 2017 at 11:16 PM, pascal.pu...@pci-conseil.net wrote: hello, For 2 we
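For context, a minimal sketch (not the exact procedure from the blog post above; the pool name "rbd" and image name "vm-disk-1" are hypothetical) of how one might check whether an RBD image's header object still exists before attempting this kind of recovery:

    # list the header objects present in the pool
    rados -p rbd ls | grep '^rbd_header'
    # dump the omap of one header object to check size/order/features
    # (the object id suffix here is made up)
    rados -p rbd listomapvals rbd_header.123456789abcdef
    # once the header has been restored, rbd info should succeed again
    rbd info rbd/vm-disk-1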

[ceph-users] [Jewel] Crash Osd with void Hit_set_trim

2017-10-18 Thread pascal.pu...@pci-conseil.net
hello, For 2 weeks, I have sometimes been losing OSDs. Here is the trace:     0> 2017-10-18 05:16:40.873511 7f7c1e497700 -1 osd/ReplicatedPG.cc: In function 'void ReplicatedPG::hit_set_trim(ReplicatedPG::OpContextUPtr&, unsigned int)' thread 7f7c1e497700 time 2017-10-18 05:16:40.869962 osd/ReplicatedPG.c
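hit_set_trim works on the hit-set objects kept by a cache-tier pool, so a first step is usually to look at the hit-set settings of the cache pool. A minimal sketch, assuming a cache pool named "cache" (hypothetical name):

    # inspect the hit-set configuration of the cache pool
    ceph osd pool get cache hit_set_type
    ceph osd pool get cache hit_set_count
    ceph osd pool get cache hit_set_period
    # the same values also appear in the pool line of the OSD map
    ceph osd dump | grep hit_set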

Re: [ceph-users] Ceph extension - how to equilibrate ?

2017-04-19 Thread pascal.pu...@pci-conseil.net
[...] I hope those aren't SMR disks... make sure they're not or it will be very slow, to the point where OSDs will time out and die. Fortunately: DELL 8TB 7.2K RPM NLSAS 12Gbps 512e 3.5in Hot-plug Hard Drive, sync :) This is not for performance, just for cold data. ceph osd crush move osd.X h
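The truncated command above looks like a CRUSH move. A minimal sketch of one way to keep the big, slow disks out of the main placement by giving them their own CRUSH root for cold-data pools only (bucket, rule and pool names, and the rule id, are all assumptions):

    # create a separate CRUSH root for the cold-storage node
    ceph osd crush add-bucket cold root
    # move the new host bucket (here "node4") under that root
    ceph osd crush move node4 root=cold
    # create a replicated rule that only selects hosts under "cold"
    ceph osd crush rule create-simple cold_rule cold host
    # find the rule id, then point the cold-data pool at it
    # (Jewel still calls the pool setting crush_ruleset)
    ceph osd crush rule dump cold_rule
    ceph osd pool set colddata crush_ruleset 1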

[ceph-users] Ceph extension - how to equilibrate ?

2017-04-18 Thread pascal.pu...@pci-conseil.net
Hello, just looking for some advice: next time, I will extend my Jewel ceph cluster with a fourth node. Currently, we have 3 nodes of 12 OSDs with 4TB disks (36 x 4TB disks). I will add a new node with 12 x 8TB disks (adding 12 new OSDs => 48 OSDs). So, how to simply rebalance? How to just unplug 3 x 4TB
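A minimal sketch of bringing the new, larger OSDs in gradually so backfill does not overwhelm the existing nodes; the OSD ids (36 upward), weights and throttle values are assumptions, not from the thread:

    # throttle backfill/recovery at runtime while data moves (Jewel injectargs syntax)
    ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'
    # start a new OSD at a low CRUSH weight, then raise it step by step
    ceph osd crush reweight osd.36 1.0
    # ... wait for the cluster to settle, then keep raising towards the full 8TB weight (~7.3)
    ceph osd crush reweight osd.36 3.0
    # watch data movement and per-OSD fill level
    ceph -s
    ceph osd df tree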

Re: [ceph-users] [Jewel] upgrade 10.2.3 => 10.2.5 KO : first OSD server freeze every two days :)

2017-03-10 Thread pascal.pu...@pci-conseil.net
ood before upgrade, but not after... That explains a lot: why the load is always a little higher than on the others... etc... etc... We will see in two days. Sorry, sorry, sorry :| On 09/03/2017 at 13:45, pascal.pu...@pci-conseil.net wrote: On 09/03/2017 at 13:03, Vincent Godin wrote: First of all,

Re: [ceph-users] [Jewel] upgrade 10.2.3 => 10.2.5 KO : first OSD server freeze every two days :)

2017-03-09 Thread pascal.pu...@pci-conseil.net
On 09/03/2017 at 13:03, Vincent Godin wrote: First of all, don't do a ceph upgrade while your cluster is in warning or error state. The upgrade process must be done from a clean cluster. Of course. So, yesterday I tried this for my "unfound PG": ceph pg 50.2dd mark_unfound_lost revert => MON
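For reference, a minimal sketch of inspecting the unfound objects before resorting to mark_unfound_lost; the PG id is the one quoted in the message above:

    # show which PGs report unfound objects
    ceph health detail
    # list the unfound objects and see which OSDs were probed for that PG
    ceph pg 50.2dd list_unfound
    ceph pg 50.2dd query
    # only once the missing copies are definitely gone, revert to the prior version
    ceph pg 50.2dd mark_unfound_lost revert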

Re: [ceph-users] [Jewel] upgrade 10.2.3 => 10.2.5 KO : first OSD server freeze every two days :)

2017-03-08 Thread pascal.pu...@pci-conseil.net
o 10.2.5 from 10.2.3). Thanks for your help. Regards, On 02/03/2017 at 15:34, pascal.pu...@pci-conseil.net wrote: Hello, so I may need some advice: 1 week ago (on 19 Feb), I upgraded my stable Ceph Jewel cluster from 10.2.3 to 10.2.5 (YES, it was maybe a bad idea). I never had a problem wit
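A minimal sketch of verifying, after such a point upgrade, that every daemon is actually running the new version; the daemon ids are examples only:

    # package version installed on the local node
    ceph --version
    # version reported by every running OSD
    ceph tell osd.* version
    # version of a single mon or OSD via its admin socket, run on its host
    ceph daemon mon.$(hostname -s) version
    ceph daemon osd.0 version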

Re: [ceph-users] [Jewel] upgrade 10.2.3 => 10.2.5 KO : first OSD server freeze every two days :)

2017-03-02 Thread pascal.pu...@pci-conseil.net
Erratum: sorry for the bad screenshot links: 1st: https://supervision.pci-conseil.net/screenshot_LOAD.png 2nd: https://supervision.pci-conseil.net/screenshot_OSD_IO.png :) On 02/03/2017 at 15:34, pascal.pu...@pci-conseil.net wrote: Hello, so I may need some advice: 1 week ago

[ceph-users] [Jewel] upgrade 10.2.3 => 10.2.5 KO : first OSD server freeze every two days :)

2017-03-02 Thread pascal.pu...@pci-conseil.net
ad max bytes = 524288 # maximum size of a readahead request, in bytes. rbd readahead disable after bytes = 52428800 -- Performance Conseil Informatique, Pascal Pucci, Consultant Infrastructure, pascal.pu...@pci-conseil.net, Mobile : 06 51 47 84 98 B
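For readability, the readahead options quoted above collected into a ceph.conf sketch; the two byte values are those from the message, while the [client] section placement and the trigger setting (its default) are assumptions:

    [client]
    # number of sequential reads before readahead kicks in (default value, assumption)
    rbd readahead trigger requests = 10
    # maximum size of a readahead request, in bytes
    rbd readahead max bytes = 524288
    # stop readahead after this many bytes have been read from an image
    rbd readahead disable after bytes = 52428800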