Hi,
Wido den Hollander's tip to raise /sys/block/vda/queue/read_ahead_kb inside
the VM helped me a lot, but I don't remember why I needed several tries
to get it really working.
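For reference, a minimal sketch of what that looks like inside the guest
(assuming the disk shows up as vda; the value is in KB and does not survive a
reboot unless you persist it, e.g. via rc.local or a udev rule):

# check the current read-ahead (in KB)
cat /sys/block/vda/queue/read_ahead_kb
# raise it, e.g. to 4 MB
echo 4096 > /sys/block/vda/queue/read_ahead_kb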
You may also read the "[ceph-users] RBD+KVM problems with sequential
read" thread:
http://lists.ceph.com/pipermai
UPDATE: I have determined that the mon sync heartbeat timeout is what is
triggering, since increasing it also increases the duration of the sync
attempts. Could those heartbeats be quorum-related? That'd explain why they
aren't being sent. Also, is it safe to temporarily increase this timeout to,
say, an hour or two?
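If it helps, a hedged sketch of bumping that timeout (assuming the option is
spelled mon_sync_heartbeat_timeout and takes seconds; the value here is just
an example):

# inject on a running monitor (here: two hours)
ceph tell mon.<id> injectargs '--mon-sync-heartbeat-timeout 7200'
# or persist it in ceph.conf on the monitor hosts
[mon]
    mon sync heartbeat timeout = 7200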
I have a cluster of 3 OSD nodes and 1 admin node, running Debian 7 and Ceph
0.72. I'd like to add a XenServer tech-preview node too.
I'm trying to run ceph-deploy install xen-dev (xen-dev runs CentOS 6), but it
fails with messages like this:
[xen-dev][WARNIN] file /usr/lib64/librados.so.2.0.0 from install o
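A hedged first step is to check what already owns the conflicting file on that
box (assuming the clash is with previously installed packages, e.g. a distro
librados or leftovers from an earlier install attempt):

# which installed package owns the conflicting library?
rpm -qf /usr/lib64/librados.so.2.0.0
# list anything ceph-related that is already installed
rpm -qa | grep -iE 'ceph|rados|rbd'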
Hi all,
I started the ceph cluster with a weight of 1 for all OSD disks (4 TB).
Later I switched to ceph-deploy, and ceph-deploy normally uses a weight of
3.64 for these disks, which makes much more sense!
Now I want to change the weight of all 52 OSDs (on 4 nodes) to 3.64, and
the question is,
The goal should be to increase the weights in unison, which should prevent
any actual data movement (modulo some rounding error, perhaps). At the
moment that can't be done via the CLI, but you can:
ceph osd getcrushmap -o /tmp/cm
crushtool -i /tmp/cm --reweight-item osd.0 3.5 --reweight-item
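In full, the round trip looks something like the sketch below (weights and OSD
ids are examples; repeat --reweight-item for each of the 52 OSDs before
injecting the edited map back):

ceph osd getcrushmap -o /tmp/cm
# repeat --reweight-item for every OSD, osd.0 through osd.51
crushtool -i /tmp/cm \
    --reweight-item osd.0 3.64 \
    --reweight-item osd.1 3.64 \
    --reweight-item osd.51 3.64 \
    -o /tmp/cm.new
ceph osd setcrushmap -i /tmp/cm.new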
Hi,
What is the recommended procedure for replacing an osd with a larger osd
in a safe and efficient manner, i.e. whilst maintaining redundancy and
causing the least data movement?
Would this be a matter of adding the new osd into the crush map whilst
reducing the weight of the old osd to zero, t
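A hedged sketch of that approach (osd.<id> is the old OSD; the init-script
invocation assumes a sysvinit-style setup):

# drain the old OSD by dropping its CRUSH weight to 0, then wait for HEALTH_OK
ceph osd crush reweight osd.<id> 0
# once the data has moved, take it out and remove it
ceph osd out <id>
/etc/init.d/ceph stop osd.<id>
ceph osd crush remove osd.<id>
ceph auth del osd.<id>
ceph osd rm <id>
# then add the larger disk as a brand-new OSD (e.g. with ceph-deploy osd create)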
Hi Yehuda,
That's great. Is that backward compatible with the previous configuration
settings? That is, if we set rgw_max_chunk_size to 512 KB first, put some
objects between 50 KB and 10 MB in size, and then set rgw_max_chunk_size to
1 MB, can radosgw still read the previously stored objects?
Thanks,
R
I have a question about how "full ratio" works.
How does a single "full ratio" setting work when the cluster has pools
associated with different drives?
For example, let's say I have a cluster comprised of fifty 10K RPM drives and
fifty 7200 RPM drives. I segregate the 10K drives and 7200 RPM dr
The setting is calculated per-OSD, and if any OSD hits the hard limit
the whole cluster transitions to the full state and stops accepting
writes until the situation is resolved.
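For reference, a sketch of the knobs involved (the stock defaults are 0.85
near-full and 0.95 full, both checked against each OSD's own utilisation; the
runtime commands only affect the running cluster):

[global]
    mon osd nearfull ratio = 0.85
    mon osd full ratio = 0.95

# or change the ratios on a running cluster
ceph pg set_nearfull_ratio 0.85
ceph pg set_full_ratio 0.95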
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
On Tue, Mar 4, 2014 at 9:58 AM, Barnes, Thomas J
wr
Hi Sage,
thanks for the info! I will try it at the weekend.
Udo
Am 04.03.2014 15:16, schrieb Sage Weil:
> The goal should be to increase the weights in unison, which should prevent
> any actual data movement (modulo some rounding error, perhaps). At the
> moment that can't be done via the CLI, but
Dear all,
if you are using OpenNebula and Ceph and you'd like to share your
experiences with both tools in a presentation, you might be interested in
attending the OpenNebula Cloud Technology Day that will take place in Ede,
Netherlands, on the 26th of March.
http://opennebula.org/community/techday
Increasing that shouldn't be problematic. The real issue is when
decreasing it. First, you'd be throwing object atomicity out the
window, so with concurrent readers and writers to the same object you
might end up with a reader getting inconsistent data. And second, it
hasn't really been tested.
Y
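For reference, a hedged sketch of where rgw_max_chunk_size is set (the client
section name is only an example, and the value is in bytes):

[client.radosgw.gateway]
    # 1 MB, in bytes
    rgw max chunk size = 1048576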
Is there a listing of "Ceph Jobs" somewhere on the net (besides Inktank's)?
If so, can someone point me to it?
thanks a lot!
I'm not sure there is a canonical resource...but maybe this would be a
good addition to the wiki. Let me see if I can aggregate a few of the
jobs I know about and post a page. Thanks.
Best Regards,
Patrick McGarry
Director, Community || Inktank
http://ceph.com || http://inktank.com
@scuttlem
OK - Thanks Greg. This suggests to me that if you want to prevent the cluster
from locking up, you need to monitor the "fullness" of each OSD, and not just
the utilization of the entire cluster's capacity.
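A couple of hedged ways to watch per-OSD utilisation from the CLI (output
details vary a bit between releases):

# explicitly flags any near-full or full OSDs
ceph health detail
# per-OSD used/available space, in KB
ceph pg dump osds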
It also suggests that if you want to remove a server from the cluster, you need
to cal
Here is another full ratio scenario:
Let's say that the cluster map is configured as follows:
            Row
             |
       +-----+-----+
       |           |
     Rack1       Rack2
       |           |
     Host1       Host4
     Host2       Host5
     Host3       Host6
...with a ruleset that distributes repl
It will only use one rack bucket, but the PGs will move into a
"backfill_toofull" state in order to prevent directly filling up the
cluster.
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
On Tue, Mar 4, 2014 at 12:02 PM, Barnes, Thomas J
wrote:
> Here is another full ratio sce
[ Re-adding the list. ]
On Mon, Mar 3, 2014 at 3:28 PM, Chris Kitzmiller
wrote:
> On Mar 3, 2014, at 4:19 PM, Gregory Farnum wrote:
>> The apply latency is how long it's taking for the backing filesystem to ack
>> (not sync to disk) writes from the OSD. Either it's getting a lot more
>> writes
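If you want to watch those numbers directly, a hedged option is the OSD admin
socket (the path assumes the default cluster name; the apply/commit latency
counters live under the filestore section of the output):

ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok perf dump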
Hi,
http://ceph.com/community/careers/
Has non-Inktank Ceph jobs ;-)
Cheers
On 04/03/2014 19:06, Ivo Jimenez wrote:
> Is there a listing of "Ceph Jobs" somewhere on the net (besides Inktank's)?
> If so, can someone point me to it?
>
> thanks a lot!
Loic, is this something we could move into a publicly-editable state
(wiki), or do we need more review capability b/c it involves specific
entities?
Best Regards,
Patrick McGarry
Director, Community || Inktank
http://ceph.com || http://inktank.com
@scuttlemonkey || @ceph || @inktank
On Tue,
On 04/03/2014 22:37, Patrick McGarry wrote:
> Loic, is this something we could move into a publicly-editable state
> (wiki), or do we need more review capability b/c it involves specific
> entities?
A wiki page would be a good fit indeed. Is there really a risk that fake job
offerings are poste
I wouldn't think so, just wanted to see if someone else had a strong
feeling about it. I vote wiki. :)
Best Regards,
Patrick McGarry
Director, Community || Inktank
http://ceph.com || http://inktank.com
@scuttlemonkey || @ceph || @inktank
On Tue, Mar 4, 2014 at 10:39 PM, Loic Dachary wrote:
> On 4 Mar 2014, at 22:56, "Patrick McGarry" wrote the
> following:
>
> Loic, is this something we could move into a publicly-editable state
> (wiki), or do we need more review capability b/c it involves specific
> entities?
>
Let's just wait and see how it plays out. If fake jobs come up,
+1...to the wiki-cave batman! :P
Best Regards,
Patrick McGarry
Director, Community || Inktank
http://ceph.com || http://inktank.com
@scuttlemonkey || @ceph || @inktank
On Tue, Mar 4, 2014 at 10:57 PM, Wido den Hollander wrote:
>> On 4 Mar 2014, at 22:56, "Patrick McGarry" wrote the
Thanks! I hadn't noticed there were a couple of SUSE openings there.
On Tue, Mar 4, 2014 at 1:35 PM, Loic Dachary wrote:
> Hi,
>
> http://ceph.com/community/careers/
>
> Has non-Inktank Ceph jobs ;-)
>
> Cheers
>
> On 04/03/2014 19:06, Ivo Jimenez wrote:
> > Is there a listing of "Ceph Jobs" some
Dear all,
I am trying to use a ceph block device within my VM, and I configured the VM
following the steps in http://ceph.com/docs/dumpling/rbd/libvirt/;
eventually I can see the logical device in the VM.
Then I want to enable discard/trim for this logical device, and add the
parameter discard_gran
We have a three-node ceph cluster.
Node#1
MDS#1 - primary
MON#1
3 OSDs
OS running from a USB stick
Node#2
MDS#2 - standby
MON#2
2 OSDs
OS running from a hard drive
Hi all,
Good day. I am facing some problems with Ceph. Hope you guys can help me!
Here are my questions:
1. I want objects striped across multiple OSDs, so I use ‘librbd’ to store
them.
Every time I put an object I just create an RBD (that means every object I put
is equal
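For what it's worth, a hedged sketch of creating a format-2 RBD image with
explicit striping, which is what spreads an image's data across many OSDs
(pool, image name, size and stripe settings are just examples; --stripe-unit
is in bytes):

rbd create mypool/myimage --size 1024 --image-format 2 \
    --stripe-unit 65536 --stripe-count 4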
Hi,
you should have -drive file=rbd:libvirt,discard=on on the command line to
have discard enabled
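For example, a slightly fuller (hedged) qemu command line; the pool/image name
is an example, and the guest needs a bus that passes TRIM through, e.g.
virtio-scsi:

qemu-system-x86_64 ... \
    -drive file=rbd:libvirt/myimage,format=raw,if=none,id=drive0,cache=writeback,discard=on \
    -device virtio-scsi-pci,id=scsi0 \
    -device scsi-hd,drive=drive0,bus=scsi0.0
# inside the guest, running fstrim on a mounted filesystem then exercises the
# discard path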
- Original Message -
From: "ljm李嘉敏"
To: ceph-us...@ceph.com
Sent: Wednesday, 5 March 2014 02:37:52
Subject: [ceph-users] Enabling discard/trim
Dear all,
I try to use ceph block devic
On 03/05/2014 06:52 AM, kenneth wrote:
We have a three-node ceph cluster.
Node#1
MDS#1 - primary
MON#1
3 OSDs
OS running from a USB stick
Node#2
MDS#2 - standby
MON#2
2 OSDs
OS running from a hard drive
Node#3
MDS#3 - standby
MON#3
2 OSDs
OS running from a hard drive
The USB stick of Node1 fa
Thank you very much, I will have a try.
Thanks & Regards
Li JiaMin
System Cloud Platform
3#4F108
-Original Message-
From: Alexandre DERUMIER [mailto:aderum...@odiso.com]
Sent: 5 March 2014 15:08
To: ljm李嘉敏
Cc: ceph-us...@ceph.com
Subject: Re: [ceph-users] Enabling discard/trim
Hi,
you should have -drive
Hi all,
I'm trying to create a ceph cluster with 3 nodes. Is it a requirement
to use ceph-deploy for deployment? Is it also required to use a
separate admin node?
Also, how do you recommend using a journal on a separate disk? For
example, if I have two O
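On the journal question, a hedged ceph-deploy sketch (host and device names
are examples; the HOST:DATA:JOURNAL form puts the journal on the second
device):

# data on sdb, journal on /dev/sdc; prepare and activate in one step
ceph-deploy osd create node1:sdb:/dev/sdc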