[ceph-users] an osd feigns death, but ceph health is ok

2016-01-11 Thread hnuzhoulin
Hi guys. Right now I am facing a problem in my OpenStack + Ceph setup. Some VMs cannot start and some hit a blue screen. The output of ceph -s says the cluster is OK, so I used the following command to check the volumes first: rbd ls -p volumes | while read line; do rbd info $line -p volumes; done, then quickly I ge
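The check loop from the message, expanded into a small sketch for readability (pool name 'volumes' as in the original):

#!/bin/bash
# Walk every image in the 'volumes' pool and print its metadata.
# An image whose header cannot be read will make 'rbd info' error out
# or hang, which is one way to spot a damaged volume.
pool=volumes
rbd ls -p "$pool" | while read -r image; do
    echo "=== $image ==="
    rbd info -p "$pool" "$image"
done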

[ceph-users] ceph instability problem

2016-01-11 Thread Csaba Tóth
Dear Ceph Developers! First of all I would like to tell you that I love this software! I am still a beginner with Ceph, but I like it very much and I see the potential in it, so I would like to use it in the future too, if I can. A little background before I describe my problem: I have a smallish bunch of servers, 5 for

Re: [ceph-users] ceph osd tree output

2016-01-11 Thread Wade Holler
Deployment method: ceph-deploy. CentOS 7.2, systemd, Infernalis. This also happened when I was testing on Jewel. I am restarting (or stopping/starting) the ceph osd processes (after they die or something) with: systemctl stop|start|restart ceph.target. Is there another way that is more appropri
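For reference, Infernalis on CentOS 7 also installs per-daemon systemd units, so a single daemon can be bounced without touching the whole target; the OSD id below is a placeholder:

# Restart every ceph daemon on this host
systemctl restart ceph.target

# Or restart just one OSD (replace 0 with the OSD id)
systemctl restart ceph-osd@0.service

# Check why a daemon died before restarting it
systemctl status ceph-osd@0.service
journalctl -u ceph-osd@0.service --since "1 hour ago"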

Re: [ceph-users] ceph osd tree output

2016-01-11 Thread John Spray
On Mon, Jan 11, 2016 at 10:32 PM, Wade Holler wrote: > Does anyone else have any suggestions here? I am increasingly concerned > about my config if other folks aren't seeing this. > > I could change to a manual crushmap but otherwise have no need to. What did you use to deploy ceph? What init sy

Re: [ceph-users] ceph osd tree output

2016-01-11 Thread Wade Holler
Does anyone else have any suggestions here? I am increasingly concerned about my config if other folks aren't seeing this. I could change to a manual crushmap but otherwise have no need to. I emailed the Ceph-dev list but have not had a response yet. Best Regards Wade On Fri, Jan 8, 2016 at 11:1

Re: [ceph-users] double rebalance when removing osd

2016-01-11 Thread Shinobu Kinjo
I'm not quite sure how it works internally, but if 0.0 works fine for you, that's good. Rgds, Shinobu - Original Message - From: "Rafael Lopez" To: "Shinobu Kinjo" Cc: "Andy Allan" , ceph-users@lists.ceph.com Sent: Tuesday, January 12, 2016 7:20:37 AM Subject: Re: [ceph-users] dou

Re: [ceph-users] double rebalance when removing osd

2016-01-11 Thread Rafael Lopez
I removed some osds from a host yesterday using the reweight method and it worked well. There was only one rebalance and then I could perform the rest of the documented removal steps immediately with no further recovery. I reweighted to 0.0. Shinobu, can you explain why you have found 0.2 is bette
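As a sketch, the reweight-first sequence described here looks roughly like this (osd.X / X are placeholders; run the remaining documented steps only after recovery finishes):

# 1. Empty the OSD by dropping its CRUSH weight -- this triggers the
#    one and only data movement.
ceph osd crush reweight osd.X 0.0

# 2. Wait until all PGs are active+clean again.
ceph -s

# 3. The documented removal steps now cause no further recovery.
ceph osd out X
systemctl stop ceph-osd@X.service     # or: service ceph stop osd.X
ceph osd crush remove osd.X
ceph auth del osd.X
ceph osd rm X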

Re: [ceph-users] double rebalance when removing osd

2016-01-11 Thread Shinobu Kinjo
Based on my research, 0.2 is better than 0.0. Probably it depends though. > ceph osd crush reweight osd.X 0.0 Rgds, Shinobu - Original Message - From: "Andy Allan" To: "Rafael Lopez" Cc: ceph-users@lists.ceph.com Sent: Monday, January 11, 2016 8:08:38 PM Subject: Re: [ceph-users] doub

Re: [ceph-users] using cache-tier with writeback mode, rados bench result degrades

2016-01-11 Thread Robert LeBlanc
https://github.com/ceph/ceph/pull/7024 - Robert LeBlanc On Mon, Jan 11, 2016 at 1:47 PM, Robert LeBlanc wrote:

Re: [ceph-users] using cache-tier with writeback mode, rados bench result degrades

2016-01-11 Thread Robert LeBlanc
Currently set as DNM. :( I guess the author has not updated the PR as requested. If needed, I can probably submit a new PR as we would really like to see this in the next Hammer release. I just need to know if I need to get involved. I don't want to

Re: [ceph-users] where is the fsid field coming from in ceph -s ?

2016-01-11 Thread Oliver Dzombic
Hi Greg, thank you for your time! In my situation, I overwrote the old ID with the new one. I don't know how. I thought that was impossible, but a running cluster with 4 mons suddenly just changed its ID. So the cluster now has the new ID. As far as I can see, I can't change the ID by running some command.

Re: [ceph-users] where is the fsid field coming from in ceph -s ?

2016-01-11 Thread Gregory Farnum
On Sat, Jan 9, 2016 at 1:58 AM, Oliver Dzombic wrote: > Hi, > > while fighting to add a new mon, it somehow happened by mistake that a new > cluster id got generated. > > So the output of "ceph -s" shows a new cluster id. > > But the osd/mon are still running on the old cluster id. > > Changing the osd/mo
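A few places where the fsid can be compared side by side (a rough sketch; the mon id and paths are placeholders, and extracting the monmap requires that mon to be stopped):

# fsid the running cluster reports (same value that ceph -s shows)
ceph fsid

# fsid pinned in the local configuration
grep fsid /etc/ceph/ceph.conf

# fsid baked into a monitor's on-disk monmap
systemctl stop ceph-mon@mon1.service
ceph-mon -i mon1 --extract-monmap /tmp/monmap
monmaptool --print /tmp/monmap    # prints "fsid <uuid>" among other fields
systemctl start ceph-mon@mon1.service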

Re: [ceph-users] double rebalance when removing osd

2016-01-11 Thread Steve Taylor
Rafael, Yes, the cluster still rebalances twice when removing a failed osd. An osd that is marked out for any reason but still exists in the crush map gets its placement groups remapped to different osds until it comes back in, at which point those pgs are remapped back. When an osd is removed
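To make the two data movements explicit, this is roughly where they happen with the documented order (osd.X / X are placeholders):

ceph osd out X               # 1st rebalance: PGs move off the out-but-
                             # still-weighted OSD
# ... wait for recovery ...
ceph osd crush remove osd.X  # 2nd rebalance: the CRUSH topology changes
                             # and PGs are reshuffled again
ceph auth del osd.X
ceph osd rm X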

Re: [ceph-users] Infernalis upgrade breaks when journal on separate partition

2016-01-11 Thread Stillwell, Bryan
On 1/10/16, 2:26 PM, "ceph-users on behalf of Stuart Longland" wrote: > On 05/01/16 07:52, Stuart Longland wrote: >>> I ran into this same issue, and found that a reboot ended up setting the >>> ownership correctly. If you look at /lib/udev/rules.d/95-ceph-osd.rules >>> you'll see the m
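For anyone hitting the same thing, the two usual workarounds look roughly like this (device names are placeholders; the GUID is the ceph journal partition type that 95-ceph-osd.rules matches on):

# One-off fix: hand the journal partition to the ceph user so the OSD
# can open it after the Infernalis privilege drop.
chown ceph:ceph /dev/sdb1

# Persistent fix: tag the partition with the ceph journal type GUID so
# the udev rule re-applies the ownership on every boot.
sgdisk --typecode=1:45b0969e-9b03-4f30-b4c6-b4b80ceff106 /dev/sdb
partprobe /dev/sdb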

Re: [ceph-users] using cache-tier with writeback mode, rados bench result degrades

2016-01-11 Thread Nick Fisk
Looks like it has been done https://github.com/zhouyuan/ceph/commit/f352b8b908e8788d053cbe15fa3632b226a6758d > -Original Message- > From: Robert LeBlanc [mailto:rob...@leblancnet.us] > Sent: 08 January 2016 18:23 > To: Nick Fisk > Cc: Wade Holler ; hnuzhoulin > ; Ceph-User > Subject: R

Re: [ceph-users] double rebalance when removing osd

2016-01-11 Thread Andy Allan
On 11 January 2016 at 02:10, Rafael Lopez wrote: > @Steve, even when you remove due to failing, have you noticed that the > cluster rebalances twice using the documented steps? You may not if you don't > wait for the initial recovery after 'ceph osd out'. If you do 'ceph osd out' > and immedia

Re: [ceph-users] double rebalance when removing osd

2016-01-11 Thread Henrik Korkuc
On 16-01-11 04:10, Rafael Lopez wrote: Thanks for the replies guys. @Steve, even when you remove due to failing, have you noticed that the cluster rebalances twice using the documented steps? You may not if you don't wait for the initial recovery after 'ceph osd out'. If you do 'ceph osd out'

Re: [ceph-users] krbd vDisk best practice ?

2016-01-11 Thread Wolf F.
Just in case anyone in the future comes up with the same question, I ran the following test case: 3 identical Debian VMs, 4 GB RAM, 4 vCores, virtio for the vDisks, all on the same pool, vDisks mounted at /home/test: 1x 120 GB; 12x 10 GB JBOD via LVM; 12x 10 GB RAID 0. Then separately I wrote 100 GB of data us
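The exact write command is not shown in the preview; one simple way to generate such a 100 GB sequential write is sketched below (path as in the test setup):

# Write 100 GB to the mounted vDisk and time it; oflag=direct bypasses
# the guest page cache so the result reflects the backing device.
dd if=/dev/zero of=/home/test/testfile bs=1M count=102400 oflag=direct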

[ceph-users] How to configure placement_targets?

2016-01-11 Thread Yang Honggang
The parameter passed to create_bucket was wrong. The right way: // Create bucket 'mmm-1' in placement target 'fast-placement' // 'bj' is my region name, 'fast-placement' is my placement target name. bucket = conn.create_bucket('mmm-1', location='bj:fast-placement') // Create bucket 'mmm-2' in
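For anyone testing the same thing without boto, a rough s3cmd equivalent (assuming s3cmd is already configured against the gateway; bucket, region and placement-target names as in the example above):

# --bucket-location is sent as the S3 LocationConstraint, which radosgw
# parses as "<region>:<placement-target>".
s3cmd mb --bucket-location=bj:fast-placement s3://mmm-1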