[ceph-users] Still risky to remove RBD-Images?

2018-08-20 Thread Mehmet
es. Would be nice if you could enlighten me :) - Mehmet

Re: [ceph-users] Add Partitions to Ceph Cluster

2018-07-23 Thread Mehmet
Hi Dimitri, what is the output of "ceph osd tree"? Perhaps you have an initial crush weight of 0, and in this case there wouldn't be any change in the PGs until you change the weight. - Mehmet On 2018-07-10 11:58, Dimitri Roschkowski wrote: Hi, is it possible to use just a partition
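For the record, a minimal sketch of how to check and correct a zero initial CRUSH weight (the OSD id and the target weight below are only placeholders):

#> ceph osd tree                           # a new OSD with WEIGHT 0 will not receive any PGs
#> ceph osd crush reweight osd.12 1.81940  # set the CRUSH weight, usually the disk size in TiB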

[ceph-users] Safe to use rados -p rbd cleanup?

2018-07-15 Thread Mehmet
Hello guys, in my production cluster I have many objects like this "#> rados -p rbd ls | grep 'benchmark'" ... .. . benchmark_data_inkscope.example.net_32654_object1918 benchmark_data_server_26414_object1990 ... .. . Is it safe to run "rados -p rbd cleanup" or is there any risk for my images?
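A minimal sketch of how such leftover "rados bench" objects can be removed while limiting the cleanup to the benchmark prefix (assuming a rados version that supports the --prefix option):

#> rados -p rbd cleanup --prefix benchmark_data   # removes only objects whose names start with that prefix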

[ceph-users] Move Ceph-Cluster to another Datacenter

2018-06-25 Thread Mehmet
Hey Ceph people, I need advice on how to move a Ceph cluster from one datacenter to another without any downtime :) DC 1: 3 dedicated MON servers (also running MGR on these servers), 4 dedicated OSD servers (3x 12 OSDs, 1x 23 OSDs), 3 Proxmox nodes with a connection to our Ceph storage (not managed from proxmo

Re: [ceph-users] Ceph scrub logs: _scan_snaps no head for $object?

2018-02-23 Thread Mehmet
Sage wrote (Tue, 2 Jan 2018 17:57:32 UTC): Hi Stefan, Mehmet, Hi Sage, sorry for the *extremely late* response! Are these clusters that were upgraded from prior versions, or fresh Luminous installs? My cluster was initially installed with Jewel (10.2.1) and has seen some minor updates

Re: [ceph-users] Dashboard (12.2.1) does not work (segfault and runtime error)

2017-10-22 Thread Mehmet
Hello John, on 22 October 2017 13:58:34 MESZ, John Spray wrote: >On Fri, Oct 20, 2017 at 10:10 AM, Mehmet wrote: >> Hello, >> >> yesterday I've upgraded my "Jewel" cluster (10.2.10) to "Luminous" >(12.2.1). >> This went really smooth

Re: [ceph-users] Dashboard (12.2.1) does not work (segfault and runtime error)

2017-10-20 Thread Mehmet
On 2017-10-20 13:00, Mehmet wrote: On 2017-10-20 11:10, Mehmet wrote: Hello, yesterday I've upgraded my "Jewel" cluster (10.2.10) to "Luminous" (12.2.1). This went really smooth - Thanks! :) Today I wanted to enable the built-in dashboard via #> vi ceph.

Re: [ceph-users] Dashboard (12.2.1) does not work (segfault and runtime error)

2017-10-20 Thread Mehmet
On 2017-10-20 11:10, Mehmet wrote: Hello, yesterday I've upgraded my "Jewel" cluster (10.2.10) to "Luminous" (12.2.1). This went really smooth - Thanks! :) Today I wanted to enable the built-in dashboard via #> vi ceph.conf [...] [mgr] mgr_modules = dashboard [...]

[ceph-users] Dashboard (12.2.1) does not work (segfault and runtime error)

2017-10-20 Thread Mehmet
Hello, yesterday I've upgraded my "Jewel" cluster (10.2.10) to "Luminous" (12.2.1). This went really smooth - Thanks! :) Today I wanted to enable the built-in dashboard via #> vi ceph.conf [...] [mgr] mgr_modules = dashboard [...] #> ceph-deploy --overwrite-conf config push monserver1 monserver
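For reference, on Luminous the dashboard module can also be enabled at runtime via the mgr, without pushing a new ceph.conf (a minimal sketch; the URL shown depends on the active mgr host):

#> ceph mgr module enable dashboard
#> ceph mgr services                      # prints the URL where the dashboard is listening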

Re: [ceph-users] [MONITOR SEGFAULT] Luminous cluster stuck when adding monitor

2017-10-13 Thread Mehmet
Hey guys, does this mean we have to do anything additional when upgrading from Jewel 10.2.10 to Luminous 12.2.1? - Mehmet On 9 October 2017 04:02:14 MESZ, kefu chai wrote: >On Mon, Oct 9, 2017 at 8:07 AM, Joao Eduardo Luis wrote: >> This looks a lot like a bug I fixed a week or so ago

Re: [ceph-users] Optimise Setup with Bluestore

2017-08-17 Thread Mehmet
.. use the HDD for the raw data and the NVMe for WAL and DB? Hope you (and others) understand what I mean :) - Mehmet On 16 August 2017 19:01:30 MESZ, David Turner wrote: >Honestly there isn't enough information about your use case. RBD usage >with small IO vs ObjectStore with large file
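A minimal sketch of such a split on Luminous using ceph-volume (device paths are placeholders; the thread itself talks about ceph-deploy, which wraps the same logic):

#> ceph-volume lvm create --bluestore --data /dev/sdb \
       --block.db /dev/nvme0n1p1 --block.wal /dev/nvme0n1p2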

Re: [ceph-users] Ceph cluster with SSDs

2017-08-17 Thread Mehmet
Which SSDs are used? Are they in production? If so, how is your PG count? On 17 August 2017 20:04:25 MESZ, M Ranga Swami Reddy wrote: >Hello, >I am using the Ceph cluster with HDDs and SSDs. Created separate pool >for each. >Now, when I ran the "ceph osd bench", HDD's OSDs show around 500 MB/s
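For the record, the per-OSD benchmark referred to here can be run as follows (osd.0 is a placeholder id; by default it writes 1 GiB in 4 MiB blocks):

#> ceph tell osd.0 bench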

Re: [ceph-users] Optimise Setup with Bluestore

2017-08-17 Thread Mehmet
Hey Mark :) On 16 August 2017 21:43:34 MESZ, Mark Nelson wrote: >Hi Mehmet! > >On 08/16/2017 11:12 AM, Mehmet wrote: >> :( no suggestions or recommendations on this? >> >> On 14 August 2017 16:50:15 MESZ, Mehmet wrote: >> >> Hi friends, >>

Re: [ceph-users] Optimise Setup with Bluestore

2017-08-16 Thread Mehmet
:( no suggestions or recommendations on this? On 14 August 2017 16:50:15 MESZ, Mehmet wrote: >Hi friends, > >my actual hardware setup per OSD node is as follows: > ># 3 OSD nodes with >- 2x Intel(R) Xeon(R) CPU E5-2603 v3 @ 1.60GHz ==> 12 Cores, no >Hyper-Threading >

Re: [ceph-users] Jewel (10.2.7) osd suicide timeout while deep-scrub

2017-08-15 Thread Mehmet
I am not sure, but perhaps setting nodown/noout could help it finish? - Mehmet On 15 August 2017 16:01:57 MESZ, Andreas Calminder wrote: >Hi, >I got hit with osd suicide timeouts while deep-scrub runs on a >specific pg, there's a RH article >(https://access.redhat.com/solutions/21
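A minimal sketch of the flag handling meant here (set the flags, let the deep-scrub finish, then remove them again):

#> ceph osd set nodown
#> ceph osd set noout
# ... once the deep-scrub has finished ...
#> ceph osd unset nodown
#> ceph osd unset noout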

[ceph-users] Optimise Setup with Bluestore

2017-08-14 Thread Mehmet
? I would set up the disks via "ceph-deploy". Thanks in advance for your suggestions! - Mehmet

Re: [ceph-users] One Monitor filling the logs

2017-08-08 Thread Mehmet
I guess this is related to "debug_mgr": "1/5", but I'm not sure... Give it a try. HTH Mehmet On 8 August 2017 16:28:21 MESZ, Konrad Riedel wrote: >Hi Ceph users, > >my luminous (ceph version 12.1.1) testcluster is doing fine, except >that >one Monitor is
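A sketch of how that debug level could be lowered at runtime on the affected monitor (mon.monserver1 is a placeholder name; the second command has to be run on the monitor's host):

#> ceph tell mon.monserver1 injectargs '--debug_mgr 0/5'
#> ceph daemon mon.monserver1 config get debug_mgr   # verify the new value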

Re: [ceph-users] osds won't start. asserts with "failed to load OSD map for epoch , got 0 bytes"

2017-07-11 Thread Mehmet
ceph-objectstore-tool before use! - Mehmet On 1 July 2017 01:53:49 MESZ, Mark Guz wrote: >Hi all > >I have two osds that are asserting, see >https://pastebin.com/raw/xmDPg84a > >I am running kraken 11.2.0 and am kinda blocked by this. Anything i >try >to do with these

Re: [ceph-users] OSD node type/count mixes in the cluster

2017-06-18 Thread Mehmet
Hi, we are actually using three Intel servers with 12 OSDs and one Supermicro with 24 OSDs in one Ceph cluster; journals are on NVMe per server. Did not see any issues yet. Best Mehmet On 9 June 2017 19:24:40 MESZ, Deepak Naidu wrote: >Thanks David for sharing your experience, appreciate

Re: [ceph-users] Available tools for deploying ceph cluster as a backend storage?

2017-05-22 Thread Mehmet
Perhaps openATTIC is also an alternative to administer Ceph. Actually I prefer ceph-deploy. On 18 May 2017 15:33:52 MESZ, Shambhu Rajak wrote: >Let me explore the code to my needs. Thanks Chris >Regards, >Shambhu > >From: Bitskrieg [mailto:bitskr...@bitskrieg.net] >Sent: Thursday, May 18, 2017 6:40

[ceph-users] Read from Replica Osds?

2017-05-08 Thread Mehmet
Hi, I thought that clients also do reads from Ceph replicas. Sometimes I read on the web that this only happens from the primary PG, like how Ceph handles writes... so what is true? Greetz Mehmet

Re: [ceph-users] Adding New OSD Problem

2017-05-01 Thread Mehmet
Also I would set osd_crush_initial_weight = 0 in ceph.conf and then increase the CRUSH weight step by step via "ceph osd crush reweight osd.36 0.05000". On 25 April 2017 23:19:08 MESZ, Reed Dier wrote: >Others will likely be able to provide some better responses, but I’ll >take a shot to see if anyt
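A minimal sketch of that approach (the OSD id is taken from the mail above, the step size is only an example):

[osd]
osd_crush_initial_weight = 0                # new OSDs join the CRUSH map with weight 0

#> ceph osd crush reweight osd.36 0.05000   # then raise the weight in small steps
#> ceph osd crush reweight osd.36 0.10000   # ... until the final weight is reached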

Re: [ceph-users] Fujitsu

2017-04-20 Thread Mehmet
). Hope you understand what I mean :) HTH Mehmet On 20 April 2017 09:19:32 MESZ, "Stolte, Felix" wrote: >Hello cephers, > >is anyone using Fujitsu Hardware for Ceph OSDs with the PRAID EP400i >Raidcontroller in JBOD Mode? We are having three identical servers with >

Re: [ceph-users] Working Ceph guide for Centos 7 ???

2017-04-07 Thread Mehmet
Perhaps ceph-deploy can work when you disable the "epel" repo? Purge all and try it again. On 7 April 2017 04:27:59 MESZ, Travis Eddy wrote: >Here is what I tried: (several times) >Nothing works >The best I got was following the Ceph guide and adding >sudo yum install centos-release-ceph-jew
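A sketch of what "disable EPEL, purge and try again" could look like (node1 is a placeholder hostname; yum-config-manager comes from the yum-utils package):

#> yum-config-manager --disable epel
#> ceph-deploy purge node1
#> ceph-deploy purgedata node1
#> ceph-deploy install --release jewel node1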

Re: [ceph-users] active+clean+inconsistent and pg repair

2017-03-19 Thread Mehmet
long /dev/sdX If not, repeat the above with the newly found defect LBA. I've done this three times successfully - but not with an error on a primary PG. After that you can start the OSD with # systemctl start ceph-osd@32 # ceph osd in osd.32 HTH - Mehmet On 2017-03-17 20:08, Shain Miley wrote:
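For context, a sketch of the general flow described above (device, LBA and OSD id are placeholders; the hdparm write destroys the data in that sector, so be absolutely sure about the device and the LBA):

#> smartctl -t long /dev/sdX                                          # start the long self-test
#> smartctl -l selftest /dev/sdX                                      # shows the LBA of the first error once the test is done
#> hdparm --write-sector <LBA> --yes-i-know-what-i-am-doing /dev/sdX  # force-reallocate the bad sector
#> systemctl start ceph-osd@32
#> ceph osd in osd.32
#> ceph pg repair <pgid>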

[ceph-users] How to prevent blocked requests?

2017-02-24 Thread Mehmet
uth_client_required = cephx osd_crush_initial_weight = 0 mon_osd_full_ratio = 0.90 mon_osd_nearfull_ratio = 0.80 [mon] mon_allow_pool_delete = false [osd] #osd_journal_size = 20480 osd_journal_size = 15360 Please ask if you need more information. Thanks so far. - Mehmet

Re: [ceph-users] Objects Stuck Degraded

2017-01-24 Thread Mehmet
Perhaps a deep-scrub will cause a scrub error, which you can then try to fix with "ceph pg repair"? Btw. it seems that you use 2 replicas, which is not recommended except for dev environments. On 24 January 2017 22:58:14 MEZ, Richard Bade wrote: >Hi Everyone, >I've got a strange one. After doing a reweight of
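A sketch of that suggestion (3.1f is only a placeholder PG id):

#> ceph pg deep-scrub 3.1f     # trigger the deep-scrub
#> ceph pg repair 3.1f         # if it reports a scrub error, try a repair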

Re: [ceph-users] Ceph is rebalancing CRUSH on every osd add

2017-01-23 Thread Mehmet
I guess this is because you are always using the same root tree. On 23 January 2017 10:50:16 MEZ, Sascha Spreitzer wrote: >Hi all > >I recognized ceph is rebalancing the whole crush map when i add osd's >that should not affect any of my crush rulesets. > >Is there a way to add osd's to the crush

Re: [ceph-users] Ceph pg active+clean+inconsistent

2016-12-21 Thread Mehmet
Hi Andras, I am not the most experienced user, but I guess you could have a look at this object on each related OSD for the PG, compare them, and delete the differing object. I assume you have size = 3. Then run pg repair again. But be careful: IIRC the replica will be recovered from the primary PG. HTH
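For the record, a sketch of how the differing copy can be identified before deleting anything (available since Jewel; the PG id is a placeholder):

#> rados list-inconsistent-obj <pgid> --format=json-pretty   # shows which OSD holds the inconsistent copy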

Re: [ceph-users] PGs stuck at creating forever

2016-11-02 Thread Mehmet
I would try to set pgp_num for your pool to 300: #> ceph osd pool set yourpool pgp_num 300 ...not sure about the command... If that did not help, try to restart osd.7 and osd.15. HTH - Mehmet On 2 November 2016 14:15:09 MEZ, Vlad Blando wrote: >I have a 3 Cluster Giant setup with 8 OSD e
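A small sketch of the check that usually goes with this (yourpool is a placeholder name; PGs often stay in "creating" when pgp_num lags behind pg_num):

#> ceph osd pool get yourpool pg_num
#> ceph osd pool get yourpool pgp_num     # should be raised to match pg_num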

Re: [ceph-users] [EXTERNAL] Re: pg stuck with unfound objects on non existing osd's

2016-11-02 Thread Mehmet
.@elchaka.de wrote: >Hello Ronny, > >if it is possible for you, try to reboot all OSD nodes. > >I had this issue on my test cluster and it became healthy after >rebooting. > >HTH >- Mehmet > >On 1 November 2016 19:55:07 MEZ, Ronny wrote

Re: [ceph-users] Jewel 10.2.2 - Error when flushing journal

2016-09-12 Thread Mehmet
Hey Alexey, sorry - it seems that the log file does not contain the debug message which I got at the command line. Here it is - http://slexy.org/view/s20A6m2Tfr Mehmet On 2016-09-12 15:48, Alexey Sheplyakov wrote: Hi, This is the actual logfile for osd.10 > - http://slexy.org/v

Re: [ceph-users] Jewel 10.2.2 - Error when flushing journal

2016-09-12 Thread Mehmet
://slexy.org/view/s2vrUnNBEW Thank you for the investigation :) HTH kind regards, - Mehmet On 2016-09-12 15:48, Alexey Sheplyakov wrote: Hi, This is the actual logfile for osd.10 > - http://slexy.org/view/s21lhpkLGQ [5] Unfortunately this log does not contain any new data -- for some reason the

Re: [ceph-users] Jewel 10.2.2 - Error when flushing journal

2016-09-09 Thread Mehmet
: I have done "ceph osd set noout" before stopping and flushing. Hope this is useful for you! - Mehmet Best regards, Alexey On Wed, Sep 7, 2016 at 6:48 PM, Mehmet wrote: Hey again, now I have stopped my osd.12 via root@:~# systemctl stop ceph-osd@12 and when I flush the journa
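The sequence being described, as a minimal sketch (osd.12 is taken from the mail; the flush only makes sense while the OSD is stopped):

#> ceph osd set noout                # prevent rebalancing while the OSD is down
#> systemctl stop ceph-osd@12
#> ceph-osd -i 12 --flush-journal    # flush the journal of the stopped OSD
#> ceph osd unset noout              # once the OSD is back up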

Re: [ceph-users] Jewel 10.2.2 - Error when flushing journal

2016-09-07 Thread Mehmet
of the executable, or `objdump -rdS <executable>` is needed to interpret this. Segmentation fault The logfile with further information - http://slexy.org/view/s2T8AohMfU I guess I will get the same message when I flush the other journals. - Mehmet On 2016-09-07 13:23, Mehmet wrote: Hello ceph peo

[ceph-users] Jewel 10.2.2 - Error when flushing journal

2016-09-07 Thread Mehmet
this will be changed to 2x 10GB fibre, perhaps with LACP when possible. - We do not use jumbo frames yet. - Public and cluster-network related Ceph traffic is actually going through this one active 1GB interface on each server. hf - Mehmet

Re: [ceph-users] ONE pg deep-scrub blocks cluster

2016-08-30 Thread Mehmet
ery much for your patience and great help! Now let's play a bit with Ceph ^^ Best regards, - Mehmet On 2016-08-30 00:02, Jean-Charles Lopez wrote: How Mehmet OK so it does come from a rados put. As you were able to check, the VM device object size is 4 MB. So we'll see after you have r

Re: [ceph-users] ONE pg deep-scrub blocks cluster

2016-08-29 Thread Mehmet
it see :) - I hope that my next email will close this issue. Thank you very much for your help! Best regards, - Mehmet

Re: [ceph-users] ONE pg deep-scrub blocks cluster

2016-08-29 Thread Mehmet
Hello JC, in short for the record: What you can try doing is to change the following settings on all the OSDs that host this particular PG and see if it makes things better [osd] [...] osd_scrub_chunk_max = 5 # maximum number of chunks the scrub will
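A sketch of how such scrub settings can be applied to the running OSDs without a restart (the value is the one quoted in the thread; injectargs changes are lost after a daemon restart):

#> ceph tell osd.* injectargs '--osd_scrub_chunk_max 5'
#> ceph daemon osd.4 config get osd_scrub_chunk_max   # verify on one of the acting OSDs (run on its host)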

Re: [ceph-users] ONE pg deep-scrub blocks cluster

2016-08-29 Thread Mehmet
Hey JC, thank you very much! - My answers are inline :) On 2016-08-26 19:26, LOPEZ Jean-Charles wrote: Hi Mehmet, what is interesting in the PG stats is that the PG contains around 700+ objects, and you said that you are using RBD only in your cluster, IIRC. With the default RBD order (4MB

Re: [ceph-users] ONE pg deep-scrub blocks cluster

2016-08-26 Thread Mehmet
the acting set for pg 0.223. Thank you, your help is very much appreciated! - Mehmet On 2016-08-25 13:58, c...@elchaka.de wrote: Hey JC, thank you very much for your mail! I will provide the information tomorrow when I am at work again. Hope that we will find a solution :) - Mehmet On 2

Re: [ceph-users] ONE pg deep-scrub blocks cluster

2016-08-24 Thread Mehmet
ceph version 10.2.2 (45107e21c568dd033c2f0a3107dec8f0b0e58374) Kernel: 4.4.0-31-generic #50-Ubuntu Any ideas? - Mehmet On 2016-08-02 17:57, c wrote: On 2016-08-02 13:30, c wrote: Hello Guys, this time without the original acting-set osd.4, 16 and 28. The issue still exists... [...] For the record, th