Re: [ceph-users] libvirt qemu/kvm/rbd inside VM read slow

2014-03-04 Thread Steffen Thorhauer
Hi, the tip from Wido den Hollander to raise /sys/block/vda/queue/read_ahead_kb inside the VM helped me a lot, but I don't remember why I needed several tries to get it really working. You may also read the "[ceph-users] RBD+KVM problems with sequential read" thread: http://lists.ceph.com/pipermai
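For reference, raising the read-ahead inside the guest is a quick change (a sketch only; it assumes the virtio disk shows up as vda, and 4096 KB is just an example value):

    # inside the VM: check the current read-ahead (in KB)
    cat /sys/block/vda/queue/read_ahead_kb
    # raise it, e.g. to 4 MB; note this does not survive a reboot
    echo 4096 > /sys/block/vda/queue/read_ahead_kb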

Re: [ceph-users] Trying to rescue a lost quorum

2014-03-04 Thread Marc
UPDATE: I have determined that it is the mon sync heartbeat timeout that is triggering, since increasing it also increases the duration of the sync attempts. Could those heartbeats be quorum-related? That'd explain why they aren't being sent. Also, is it safe to temporarily increase this timeout to, say, an hour or two
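If the option in question is mon_sync_timeout (an assumption; the value is in seconds), a temporary increase could look like this in ceph.conf on the mon hosts, followed by a restart of the syncing monitor:

    [mon]
        # temporarily allow very long sync attempts, e.g. one hour
        mon sync timeout = 3600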

[ceph-users] centos ceph-libs-0.67 conflicts with 0.72 upgrade

2014-03-04 Thread Jonathan Gowar
I've a 3-OSD and 1-admin-node cluster running Debian 7 and Ceph 0.72. I'd like to add a XenServer tech-preview node too. I'm trying to run ceph-deploy install xen-dev (xen-dev is CentOS 6), but it fails with these sorts of messages: [xen-dev][WARNIN] file /usr/lib64/librados.so.2.0.0 from install o
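A workaround often suggested for this kind of package file conflict (not verified against this particular node) is to remove the stale 0.67 library package on the target first and then re-run the install:

    # on xen-dev: remove the old 0.67 library package
    yum remove ceph-libs
    # then retry from the admin node
    ceph-deploy install xen-dev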

[ceph-users] correct way to increase the weight of all OSDs from 1 to 3.64

2014-03-04 Thread Udo Lembke
Hi all, I started the ceph cluster with a weight of 1 for all OSD disks (4 TB). Later I switched to ceph-deploy, and ceph-deploy normally uses a weight of 3.64 for these disks, which makes much more sense! Now I want to change the weight of all 52 OSDs (on 4 nodes) to 3.64, and the question is,

Re: [ceph-users] correct way to increase the weight of all OSDs from 1 to 3.64

2014-03-04 Thread Sage Weil
The goal should be to increase the weights in unison, which should prevent any actual data movement (modulo some rounding error, perhaps). At the moment that can't be done via the CLI, but you can: ceph osd getcrushmap -o /tmp/cm crushtool -i /tmp/cm --reweight-item osd.0 3.5 --reweight-item
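Spelled out as a sketch (item names and weights are examples; add one --reweight-item per OSD, all 52 in a single pass, then inject the edited map back):

    ceph osd getcrushmap -o /tmp/cm
    crushtool -i /tmp/cm --reweight-item osd.0 3.64 --reweight-item osd.1 3.64 -o /tmp/cm.new
    ceph osd setcrushmap -i /tmp/cm.new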

[ceph-users] Replace OSD with larger

2014-03-04 Thread Chris Dunlop
Hi, What is the recommended procedure for replacing an osd with a larger osd in a safe and efficient manner, i.e. whilst maintaining redundancy and causing the least data movement? Would this be a matter of adding the new osd into the crush map whilst reducing the weight of the old osd to zero, t
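One commonly used shape for this, sketched with placeholder IDs (add the new, larger OSD first, then drain the old one via its CRUSH weight before removing it):

    # drain the old OSD, gradually or in one step
    ceph osd crush reweight osd.7 0
    # once backfilling has finished and the cluster is healthy again
    ceph osd out 7
    ceph osd crush remove osd.7
    ceph auth del osd.7
    ceph osd rm 7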

Re: [ceph-users] [rgw] increase the first chunk size

2014-03-04 Thread Ray Lv
Hi Yehuda, That's great. Is that backward compatible with the previous configuration settings? That is, if we set rgw_max_chunk_size to 512 KB first and put some objects between 50 KB and 10 MB in size, and then set rgw_max_chunk_size to 1 MB, can radosgw read out the previously put objects? Thanks, R

[ceph-users] "full ratio" - how does this work with multiple pools on seprate OSDs?

2014-03-04 Thread Barnes, Thomas J
I have a question about how "full ratio" works. How does a single "full ratio" setting work when the cluster has pools associated with different drives? For example, let's say I have a cluster composed of fifty 10K RPM drives and fifty 7200 RPM drives. I segregate the 10K drives and 7200 RPM dr

Re: [ceph-users] "full ratio" - how does this work with multiple pools on seprate OSDs?

2014-03-04 Thread Gregory Farnum
The setting is calculated per-OSD, and if any OSD hits the hard limit the whole cluster transitions to the full state and stops accepting writes until the situation is resolved. -Greg Software Engineer #42 @ http://inktank.com | http://ceph.com On Tue, Mar 4, 2014 at 9:58 AM, Barnes, Thomas J wr
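For anyone looking for the knobs involved, these are the usual ratio settings (a sketch; the values shown are the defaults):

    [global]
        mon osd full ratio = .95
        mon osd nearfull ratio = .85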

Re: [ceph-users] correct way to increase the weight of all OSDs from 1 to 3.64

2014-03-04 Thread Udo Lembke
Hi Sage, thanks for the info! I will try it at the weekend. Udo On 04.03.2014 15:16, Sage Weil wrote: > The goal should be to increase the weights in unison, which should prevent > any actual data movement (modulo some rounding error, perhaps). At the > moment that can't be done via the CLI, but

[ceph-users] Looking for people using Ceph and OpenNebula in the NL area

2014-03-04 Thread Jaime Melis
Dear all, if you are using OpenNebula and Ceph and you'd like to share your experiences in a presentation using both tools, you might be interested in attending the OpenNebula Cloud Technology Day that will take place in Ede, Netherlands, the 26th of March. http://opennebula.org/community/techday

Re: [ceph-users] [rgw] increase the first chunk size

2014-03-04 Thread Yehuda Sadeh
Increasing that shouldn't be problematic. The real issue is when decreasing it. First, you'd be throwing object atomicity out the window so with concurrent readers and writers to the same object you might end up having a reader getting inconsistent data. And second, it hasn't really been tested. Y
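For context, the setting under discussion is changed on the gateway, roughly like this (the section name is only an example):

    [client.radosgw.gateway]
        # value is in bytes; 1048576 = 1 MB
        rgw max chunk size = 1048576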

[ceph-users] Ceph jobs?

2014-03-04 Thread Ivo Jimenez
Is there a listing of "Ceph Jobs" somewhere on the net (besides Inktank's)? If so, can someone point me to it? thanks a lot!

Re: [ceph-users] Ceph jobs?

2014-03-04 Thread Patrick McGarry
I'm not sure there is a canonical resource...but maybe this would be a good addition to the wiki. Let me see if I can aggregate a few of the jobs I know about and post a page. Thanks. Best Regards, Patrick McGarry Director, Community || Inktank http://ceph.com || http://inktank.com @scuttlem

Re: [ceph-users] "full ratio" - how does this work with multiple pools on seprate OSDs?

2014-03-04 Thread Barnes, Thomas J
OK - Thanks Greg. This suggests to me that if you want to prevent the cluster from locking up, you need to monitor the "fullness" of each OSD, and not just the utilization of the entire cluster's capacity. It also suggests that if you want to remove a server from the cluster, you need to cal
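That per-OSD monitoring can be done from the cluster side; a sketch (output abbreviated, and the exact fields vary by release):

    # warns per OSD once the nearfull/full ratios are crossed
    ceph health detail
    # per-OSD usage (kb_used / kb_avail per OSD)
    ceph pg dump osds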

Re: [ceph-users] "full ratio" - how does this work with multiple pools on seprate OSDs?

2014-03-04 Thread Barnes, Thomas J
Here is another full ratio scenario: Let's say that the cluster map is configured as follows: one Row containing Rack1 and Rack2, where Rack1 holds Host1, Host2 and Host3, and Rack2 holds Host4, Host5 and Host6 ...with a ruleset that distributes repl

Re: [ceph-users] "full ratio" - how does this work with multiple pools on seprate OSDs?

2014-03-04 Thread Gregory Farnum
It will only use one rack bucket, but the PGs will move into a "backfill_toofull" state in order to prevent directly filling up the cluster. -Greg Software Engineer #42 @ http://inktank.com | http://ceph.com On Tue, Mar 4, 2014 at 12:02 PM, Barnes, Thomas J wrote: > Here is another full ratio sce

Re: [ceph-users] High fs_apply_latency on one node

2014-03-04 Thread Gregory Farnum
[ Re-adding the list. ] On Mon, Mar 3, 2014 at 3:28 PM, Chris Kitzmiller wrote: > On Mar 3, 2014, at 4:19 PM, Gregory Farnum wrote: >> The apply latency is how long it's taking for the backing filesystem to ack >> (not sync to disk) writes from the OSD. Either it's getting a lot more >> writes
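For anyone chasing the same symptom, the per-OSD latencies can be sampled directly; a sketch (osd.12 is a placeholder):

    # commit/apply latency per OSD, in ms (if your release has it)
    ceph osd perf
    # or via the admin socket on the OSD host
    ceph --admin-daemon /var/run/ceph/ceph-osd.12.asok perf dump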

Re: [ceph-users] Ceph jobs?

2014-03-04 Thread Loic Dachary
Hi, http://ceph.com/community/careers/ has non-Inktank Ceph jobs ;-) Cheers On 04/03/2014 19:06, Ivo Jimenez wrote: > Is there a listing of "Ceph Jobs" somewhere on the net (besides Inktank's)? > If so, can someone point me to it? > > thanks a lot!

Re: [ceph-users] Ceph jobs?

2014-03-04 Thread Patrick McGarry
Loic, is this something we could move into a publicly-editable state (wiki), or do we need more review capability b/c it involves specific entities? Best Regards, Patrick McGarry Director, Community || Inktank http://ceph.com || http://inktank.com @scuttlemonkey || @ceph || @inktank On Tue,

Re: [ceph-users] Ceph jobs?

2014-03-04 Thread Loic Dachary
On 04/03/2014 22:37, Patrick McGarry wrote: > Loic, is this something we could move into a publicly-editable state > (wiki), or do we need more review capability b/c it involves specific > entities? A wiki page would be a good fit indeed. Is there really a risk that fake job offerings are poste

Re: [ceph-users] Ceph jobs?

2014-03-04 Thread Patrick McGarry
I wouldn't think so, just wanted to see if someone else had a strong feeling about it. I vote wiki. :) Best Regards, Patrick McGarry Director, Community || Inktank http://ceph.com || http://inktank.com @scuttlemonkey || @ceph || @inktank On Tue, Mar 4, 2014 at 10:39 PM, Loic Dachary wrote:

Re: [ceph-users] Ceph jobs?

2014-03-04 Thread Wido den Hollander
> On 4 Mar 2014 at 22:56, "Patrick McGarry" wrote: > > Loic, is this something we could move into a publicly-editable state > (wiki), or do we need more review capability b/c it involves specific > entities? > Let's just wait and see how it plays out. If fake jobs come up,

Re: [ceph-users] Ceph jobs?

2014-03-04 Thread Patrick McGarry
+1...to the wiki-cave batman! :P Best Regards, Patrick McGarry Director, Community || Inktank http://ceph.com || http://inktank.com @scuttlemonkey || @ceph || @inktank On Tue, Mar 4, 2014 at 10:57 PM, Wido den Hollander wrote: > > > > >> On 4 Mar 2014 at 22:56, "Patrick McGarry"

Re: [ceph-users] Ceph jobs?

2014-03-04 Thread Ivo Jimenez
Thanks! I hadn't noticed there were a couple of SUSE openings there. On Tue, Mar 4, 2014 at 1:35 PM, Loic Dachary wrote: > Hi, > > http://ceph.com/community/careers/ > > Has non inktank Ceph jobs ;-) > > Cheers > > On 04/03/2014 19:06, Ivo Jimenez wrote: > > Is there a listing of "Ceph Jobs" some

[ceph-users] Enabling discard/trim

2014-03-04 Thread ljm李嘉敏
Dear all, I am trying to use a ceph block device within my VM, and I configured the VM following the steps in http://ceph.com/docs/dumpling/rbd/libvirt/; eventually I can see the logical device in the VM. Then I want to enable discard/trim for this logical device, and add the parameter discard_gran

[ceph-users] mds cluster degraded (some RBD lost)

2014-03-04 Thread kenneth
We have a three-node ceph cluster. Node#1: MDS#1 (primary), MON#1, 3 OSDs, OS running from a USB stick. Node#2: MDS#2 (standby), MON#2, 2 OSDs, OS running from a hard drive

[ceph-users] object striping using librbd/librados(Firefly)

2014-03-04 Thread 張峻宇
Hi all, good day. I am facing some problems with Ceph and hope you guys can help me! Here are my questions: 1. I want object striping across multiple OSDs, so I use 'librbd' to store objects. Every time I put an object I just create an RBD (that means every object I put is equal
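If the goal is striping each object across many OSDs without creating one image per object, the striping parameters can instead be set when the image is created; a rough sketch with the rbd CLI (names, sizes and counts are only examples; fancy striping needs format 2 images and librbd):

    # 1 GB image, 4 MB objects, striped in 64 KB units across 8 objects at a time
    rbd create mypool/myimage --size 1024 --image-format 2 --order 22 \
        --stripe-unit 65536 --stripe-count 8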

Re: [ceph-users] Enabling discard/trim

2014-03-04 Thread Alexandre DERUMIER
Hi, you should have -drive file=rbd:libvirt,discard=on on the command line to have discard enabled. - Original Message - From: "ljm李嘉敏" To: ceph-us...@ceph.com Sent: Wednesday, 5 March 2014 02:37:52 Subject: [ceph-users] Enabling discard/trim Dear all, I try to use ceph block devic
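Since the original setup goes through libvirt, the equivalent in the domain XML is the discard attribute on the disk's driver element (a sketch; it needs a reasonably recent libvirt/qemu, and at the time discard also required a bus that supports it, e.g. virtio-scsi or IDE rather than plain virtio-blk):

    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' discard='unmap'/>
      ...
    </disk>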

Re: [ceph-users] mds cluster degraded (some RBD lost)

2014-03-04 Thread Wido den Hollander
On 03/05/2014 06:52 AM, kenneth wrote: We have a three-node ceph cluster. Node#1: MDS#1 (primary), MON#1, 3 OSDs, OS running from a USB stick. Node#2: MDS#2 (standby), MON#2, 2 OSDs, OS running from a hard drive. Node#3: MDS#3 (standby), MON#3, 2 OSDs, OS running from a hard drive. The USB stick of Node1 fa

[ceph-users] Re: Enabling discard/trim

2014-03-04 Thread ljm李嘉敏
Thank you very much, I will have a try. Thanks & Regards, Li JiaMin, System Cloud Platform 3#4F108 - Original Message - From: Alexandre DERUMIER [mailto:aderum...@odiso.com] Sent: 5 March 2014 15:08 To: ljm李嘉敏 Cc: ceph-us...@ceph.com Subject: Re: [ceph-users] Enabling discard/trim Hi, you should have -drive

[ceph-users] new install

2014-03-04 Thread kenneth
Hi all, I'm trying to create a ceph cluster with 3 nodes; is it a requirement to use ceph-deploy for deployment? Is it also required to use a separate admin node? Also, how do you recommend using the journal on a separate disk? For example, if I have two O
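On the journal question, ceph-deploy can put the journal on a separate disk or partition when the OSD is prepared; a sketch (host and device names are placeholders):

    # data on sdb, journal on sdc (a partition such as sdc1 also works)
    ceph-deploy osd create node1:sdb:sdc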