> Another issue is performance: you'll get 4x more IOPS with 4 x 2TB drives
> than with a single 8TB drive.
> So if you have a performance target, your money might be better spent on
> smaller drives.
Regardless of the discussion about whether it is smart to have very large spinners:
be aware that some of the b
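A back-of-the-envelope sketch of the IOPS point quoted at the top of this
mail. The per-spindle figure is an assumption (a typical 7.2k RPM SATA
disk), not something stated in the thread:

    # aggregate random IOPS scales with spindle count, so 4 x 2TB drives
    # outperform a single 8TB drive of the same class
    IOPS_PER_SPINDLE = 75  # assumed figure for one 7.2k RPM SATA disk

    single_8tb = 1 * IOPS_PER_SPINDLE
    four_2tb = 4 * IOPS_PER_SPINDLE

    print(f"1 x 8TB: ~{single_8tb} IOPS")
    print(f"4 x 2TB: ~{four_2tb} IOPS ({four_2tb // single_8tb}x)")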
Hello, we are experiencing severe OSD timeouts; the OSDs are not being taken
out, and we see the following in syslog on Ubuntu 14.04.2 with Firefly 0.80.9.
Thank you for any advice.
Alex
Jul 3 03:42:06 roc-4r-sca020 kernel: [554036.261899] BUG: unable to handle
kernel paging request at 0019001c
J
Hi there,
maybe you could be so kind as to help me with the following issue:
We are running CephFS, but there is a recurring problem with the MDS.
Sometimes the following error occurs: "mds0: Client 701782 failing to
respond to capability release"
Listing the session information shows that the "num_caps" o
On 07/03/2015 10:25 AM, Mathias Buresch wrote:
> Hi there,
>
> maybe you could be so kind as to help me with the following issue:
>
> We are running CephFS, but there is a recurring problem with the MDS.
>
What version of Ceph are you running?
> Sometimes the following error occurs: "mds0: Client 701782 fai
Hi,
We're looking at similar issues here and I was composing a mail just
as you sent this. I'm just a user -- hopefully a dev will correct me
where I'm wrong.
1. A CephFS cap is a way to delegate permission for a client to do IO
with a file, knowing that other clients are not also accessing that
f
Hi Dan,
thanks for the quick reply!
I didn't read it in detail yet, but here are my first comments:
3.b BTW, our old friend updatedb seems to trigger the same problem...
grabbing caps very quickly as it indexes CephFS. updatedb.conf is
configured with PRUNEFS="... fuse ...", but CephFS has type
fuse.ceph-fus
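For illustration, a possible updatedb.conf workaround. This assumes (my
reading, not stated in the thread) that mlocate matches PRUNEFS entries
against the exact filesystem type from the mount table, so the bare "fuse"
entry does not cover ceph-fuse mounts:

    # /etc/updatedb.conf (sketch, not the poster's actual file)
    # list the full fuse subtype so ceph-fuse mounts are pruned;
    # "ceph" would cover kernel-client mounts as well
    PRUNEFS="... fuse fuse.ceph-fuse ceph ..."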
What’s the value of /proc/sys/vm/min_free_kbytes on your system? Increase it to
256M (better to do it while there’s lots of free memory) and see if it helps.
It can also be set too high; it's hard to find any formula for setting it correctly...
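For what it's worth, a minimal sketch (assuming Linux and root privileges)
of checking the current value and raising it to the 256M suggested above;
the target is just the figure from this mail, not a general recommendation:

    PATH = "/proc/sys/vm/min_free_kbytes"
    TARGET_KB = 256 * 1024  # 256M expressed in kB

    with open(PATH) as f:
        current_kb = int(f.read().strip())
    print(f"current vm.min_free_kbytes = {current_kb} kB")

    # only raise it, never lower it -- as noted above, too high can also hurt
    if current_kb < TARGET_KB:
        with open(PATH, "w") as f:
            f.write(str(TARGET_KB))
        print(f"raised vm.min_free_kbytes to {TARGET_KB} kB")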
Jan
> On 03 Jul 2015, at 10:16, Alex Gorbachev wrote:
>
> He
Hi everyone,
I am currently looking at Ceph to build a cluster to back up VMs. I am
comparing the solution against others such as traditional SANs, and so far
Ceph is economically more interesting and technically more challenging
(which doesn't bother me :) ).
OSD hosts should be based on Dell
Hi Adrien. I can offer some feedback, and have a couple of questions myself:
1) if you’re going to deploy 9x4TB OSDs per host, with 7 hosts, and 4+2 EC, do
you really want to put extra OSDs in "inner" drive bays if the target capacity
is 100TB? My rough calculations indicate 150TB usable ca
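A quick sketch of the arithmetic behind that estimate, assuming the layout
stated above (7 hosts, 9 x 4TB OSDs each, k=4/m=2 erasure coding);
filesystem overhead and nearfull headroom are not modelled here, which is
roughly where the gap down to ~150TB comes from:

    hosts, osds_per_host, drive_tb = 7, 9, 4
    k, m = 4, 2  # 4+2 erasure coding

    raw_tb = hosts * osds_per_host * drive_tb  # 252 TB raw
    usable_tb = raw_tb * k / (k + m)           # EC efficiency is k/(k+m)

    print(f"raw capacity: {raw_tb} TB")
    print(f"usable before overhead/headroom: {usable_tb:.0f} TB")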
Hello list,
Does anybody know of any recent kvm packages for Debian Wheezy (built after
the CVE-2015-3456 vulnerability was fixed) that are compiled with RBD
support?
I have these installed right now:
ii kvm 1:1.1.2+dfsg-6+deb7u8 amd64
Thanks Jan. /proc/sys/vm/min_free_kbytes was set to 32M; I set it to 256M
on a system with 64 GB of RAM. Also, swappiness was set to 0 with no problems
in lab tests, but I wonder if we hit some limit in 24/7 OSD operation.
I will update after some days of running with these parameters. Best
regards
Hi,
I've upgraded from Hammer 0.94.1 to 0.94.2, and radosgw-agent from 1.2.1 to
radosgw-agent-1.2.2-0.el7.centos.noarch, and after restarting the agent
(with versioned set to true), I noticed duplicate objects in a
versioning-enabled bucket on the replication site.
For example:
on source side:
object:
vers
You may not want to set your heartbeat grace so high; it will make I/O
block for a long time in the case of a real failure. You may want to look
at increasing the number of down reporters instead.
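For a hedged illustration of the two settings meant here (option names as I
recall them for the Firefly/Hammer era, and example values only; check your
release's documentation), something along these lines in ceph.conf:

    [osd]
    # keep the heartbeat grace near its default (20s) rather than raising it
    osd heartbeat grace = 20

    [mon]
    # require reports from more peers before marking an OSD down
    mon osd min down reporters = 3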
Robert LeBlanc
Sent from a mobile device; please excuse any typos.
On Jul 2, 2015 9:39 PM, "Tuomas Juntunen"
wrote