Hi guys,
I need your help figuring out performance issues on my Ceph cluster.
I've read pretty much every thread on the net concerning this topic but
I haven't managed to get acceptable performance.
In my company, we are planning to replace our existing virtualization
infrastructure NAS b
Hi Remi,
setting inode64 in osd_mount_options_xfs might help a little.
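For reference, in ceph.conf that would look roughly like this (a sketch; the
exact section and the other mount flags depend on your deployment):

  [osd]
  osd mount options xfs = rw,noatime,inode64

The option only takes effect when the OSD filesystems are (re)mounted.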
--
Mit freundlichen Gruessen / Best regards
Oliver Dzombic
Hi Kevin,
what does a "yum update" say?
Are there any errors?
You can also try a "yum clean all".
The problem in this case is most probably not Ceph-related but rather
something with your local rpm database/repository.
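Roughly, something like (package names are just the usual ones; adjust to
your repos):

  yum clean all
  yum makecache
  yum update ceph ceph-common

If that still reports nothing to do while newer packages exist upstream, the
repo configuration is the place to look, not Ceph itself.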
--
Mit freundlichen Gruessen / Best regards
Oliver Dzombic
Hello,
I just saw the release announcement of Infernalis. I will test it in the
meantime.
Rémi
On 07/11/2015 09:24, Rémi BUISSON wrote:
> Hi guys,
> I need your help figuring out performance issues on my Ceph cluster.
> I've read pretty much every thread on the net concerning this topic
> but
Hi Iban,
On 11/06/2015 10:59 PM, Iban Cabrillo wrote:
> Hi Philipp,
> I see you only have 2 OSDs, have you checked that your "osd pool get
> size" is 2, and min_size=1?
Yes, the default and the active values are as you describe (size = 2,
min_size = 1).
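For reference, the values can be double-checked per pool with something like
the following (the pool name "rbd" is just a placeholder):

  ceph osd pool get rbd size
  ceph osd pool get rbd min_size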
My idea was to start with a really sma
Hi Oliver,
"yum clean all" and "yum update" do not show any new packages.
I think the RDO and Ceph repos don't play nice together, which is bad as it
prevents installing the Ceph client on new compute clusters.
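A common workaround for that kind of repo clash (assuming it really is a
priority conflict) is to pin the Ceph repo with yum-plugin-priorities,
roughly:

  yum install yum-plugin-priorities
  # then add to each enabled section of /etc/yum.repos.d/ceph.repo:
  priority=1

Whether that applies here depends on how the RDO repos are configured.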
My yum skills are rather limited; my investigation of the dependencies did not
return the
Hi List,
I'm searching for advice from somebody who uses a legacy client like ESXi with Ceph.
I'm trying to build high-performance, fault-tolerant storage with Ceph 0.94.
In production I have 50+ TB of VMs (~800 VMs).
8 NFS servers, each with:
2xIntel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz
12xSeagate ST2000NM0023
1xLS
You most likely did the wrong test to get baseline IOPS for Ceph or for your
SSDs. Ceph is really hard on SSDs, and it does direct sync writes, which
drives handle very differently even between models of the same brand. Start
with
http://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-s
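The test described there boils down to a single-job 4k sync write with fio,
something like the following (destructive to whatever --filename points at,
so use a spare device or file; /dev/sdX is a placeholder):

  fio --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k \
      --numjobs=1 --iodepth=1 --runtime=60 --time_based \
      --group_reporting --name=journal-test

The number to look at is the sustained write IOPS with sync=1, not the
vendor's datasheet figure.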
Hi Timofey,
You are most likely experiencing the effects of Ceph's write latency in
combination with the sync write behaviour of ESXi. You will probably struggle
to get much under 2ms write latency with Ceph, assuming a minimum of 2 copies
in Ceph. This will limit you to around 500 IOPS for a QD
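Back-of-the-envelope: at queue depth 1, IOPS is roughly 1 / latency, so ~2 ms
per sync write works out to about 1 / 0.002 s ≈ 500 IOPS per thread; more
outstanding I/Os or more threads scale that up until something else becomes
the bottleneck.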
FWIW, I'm also having problems connecting to download.ceph.com over IPv6,
from an HE tunnel. I can talk to eu.ceph.com just fine.
I'm seeing 100% failures over HTTP. Here's a traceroute, including my
address:
# traceroute6 download.ceph.com
traceroute to download.ceph.com (2607:f298:6050:51f3:f8
Ah, it looks like a pMTU problem. We negotiate a 1440-byte MSS, but that's
apparently too large for something along the way; if you look at the
tcpdump below, you'll see that the sequence number from download.ceph.com
jumps from 1 up to 4285 without the packets in between ever arriving.
08:39:23.4
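If it really is pMTU discovery breaking somewhere along the path, the usual
client-side workarounds are clamping the TCP MSS or lowering the tunnel MTU,
roughly (the interface name he-ipv6 is just an example):

  ip6tables -t mangle -A POSTROUTING -o he-ipv6 -p tcp \
      --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
  # or simply
  ip link set dev he-ipv6 mtu 1280

That only papers over the problem locally, of course; the hop that drops the
big packets is still broken.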
Hello, everyone!
I have recently created a Ceph cluster (v 0.94.5) on Ubuntu 14.04.3 and I
have created an erasure coded pool, which has a caching pool in front of it.
When trying to map RBD images, regardless of whether they are created in the
rbd pool or in the erasure-coded pool, the operation fails with '
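When a map fails like that, the kernel usually logs the real reason, so it's
worth checking right after the failed attempt (pool/image names below are
placeholders):

  dmesg | tail
  rbd info <pool>/<image>

On a 0.94 cluster with a kernel client, the usual suspects are image features
or pool/CRUSH features the running kernel doesn't support yet; the dmesg line
will say which.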
Hi,
I've recently deployed Ceph across four machines, one admin and three nodes,
following the architecture from the ceph.com/qsg and making OSD directories
in /var/local. However, when I try to activate the OSDs, it breaks, even
though preparing them was successful. I have provided a log of t
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
It can't talk to the monitor at 192.168.107.67.
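A quick way to narrow that down (address taken from the error; 6789 is the
default monitor port):

  nc -vz 192.168.107.67 6789
  ceph -s

If the port isn't reachable, check the monitor service and any firewall on
that host before looking at the OSDs.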
-
Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1
On Sat, Nov 7, 2015 at 1:23 PM, James Gallagher wrote:
> Hi,
>
> I've recently deployed Ceph across
On Sat 2015-Nov-07 09:24:06 +0100, Rémi BUISSON wrote:
> Hi guys,
> I need your help figuring out performance issues on my Ceph cluster.
> I've read pretty much every thread on the net concerning this topic
> but I haven't managed to get acceptable performance.
> In my company, we are planning