Network issue, maybe? Have you checked your firewall settings? iptables changed
a bit in EL7 and might have broken any rules you normally use. Try flushing the
rules (iptables -F) and see if that fixes things; if it does, you'll need to fix
your firewall rules.
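Something like this, for example (rough sketch only -- note EL7 fronts iptables
with firewalld by default, so adjust to whichever you're actually running; the
ports below are just the Ceph defaults):

  # see what rules are currently loaded
  iptables -L -n -v
  # flush everything as a quick test (WARNING: drops ALL rules)
  iptables -F
  # if that fixed it, re-add proper rules, e.g. for the Ceph defaults:
  iptables -A INPUT -p tcp --dport 6789 -j ACCEPT       # monitors
  iptables -A INPUT -p tcp --dport 6800:7300 -j ACCEPT  # OSDs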
I ran into a similar issue
Hi Blair!
On 9 September 2014 08:47, Blair Bethwaite wrote:
> Hi Dan,
>
> Thanks for sharing!
>
> On 9 September 2014 20:12, Dan Van Der Ster wrote:
>> We do this for some small scale NAS use-cases, with ZFS running in a VM with
>> rbd volumes. The performance is not great (especially since we
Hi Jeff,
What make/model of drives are you using as OSDs? Any journals? If so, what model?
What does your ceph.conf look like? What sort of load is on the cluster (if
it's still "online")? What distro/version? Firewall rules set properly?
Michal Kozanecki
-Original Message-
Oh, one more thing: how are the OSDs' partitions/drives mounted (mount
options)?
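You can just paste the output of something like this (paths assume the default
OSD data location, adjust for your layout):

  # show the OSD data mounts and their options
  mount | grep /var/lib/ceph/osd
  # or, for a specific OSD:
  findmnt /var/lib/ceph/osd/ceph-0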
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Michal
Kozanecki
Sent: February-17-15 9:27 AM
To: Jeff; ceph-users@lists.ceph.com
Subject: Re:
this as a read error and kicks off a scrub on the PG. PG repair does not seem
to happen automatically; however, when kicked off manually, it succeeds.
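For anyone wanting to reproduce, the manual kick is just the standard repair
command (the PG id below is from my test cluster, yours will differ):

  # find the inconsistent PG
  ceph health detail | grep inconsistent
  # kick off the repair manually
  ceph pg repair 2.1f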
Let me know if there's anything else or any questions people have while I have
this test cluster running.
Cheers,
Michal Kozanecki | Linux Administrator | E: mkozane...@evertz.com
There's more to ZFS and "things-to-know" than that (L2ARC uses ARC
metadata space, dedupe uses ARC metadata space, etc.), but as far as CEPH is
concerned the above is a good place to start. ZFS IMHO is a great solution, but
it requires some time and effort to do it right.
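If you want to keep an eye on how much of the ARC your metadata is actually
eating, the ZoL kstats are the quickest way (these counter names are from the
ZoL 0.6.x arcstats file):

  # ARC size vs. metadata usage and limit, in bytes
  grep -E '^(size|arc_meta_used|arc_meta_limit)' /proc/spl/kstat/zfs/arcstats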
Cheers,
Michal Kozanecki
Any quick write performance data?
Michal Kozanecki | Linux Administrator | E: mkozane...@evertz.com
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
Alexandre DERUMIER
Sent: April-17-15 11:38 AM
To: Mark Nelson; ceph-users
Subject: [ceph-users
many bad SSDs for CEPH journal, many of the same performance guidelines apply
to ZIL/SLOG as well.
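(For reference, adding a SLOG is a one-liner -- pool and device names here are
made up, and you generally want an SSD with power-loss protection, ideally a
mirrored pair:)

  # attach a dedicated log (SLOG) device to the pool
  zpool add tank log /dev/disk/by-id/ata-SOME_SSD-part1
  # or mirrored, which is the safer option:
  zpool add tank log mirror /dev/sdX1 /dev/sdY1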
Cheers,
Michal Kozanecki | Linux Administrator | E: mkozane...@evertz.com
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of J David
Sent: April
vdevs,
slog, etc) stands.
https://reviews.csiden.org/r/51/
https://www.illumos.org/issues/5027
Cheers,
Michal Kozanecki | Linux Administrator | E: mkozane...@evertz.com
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Michal
Kozanecki
Sent
Hi Kenneth,
I run a small ceph test cluster using ZoL (ZFS on Linux) on top of CentOS 7, so
I'll try and answer any questions. :)
Yes, ZFS writeparallel support is there, but NOT compiled in by default. You'll
need to compile it with --with-libzfs, but that by itself will fail to compile
the ZFS
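In practice the build-plus-config step looks roughly like this (hedging here:
the configure flag and the two filestore options are from memory of the
autotools-era builds, so double-check against your source tree):

  # build Ceph with libzfs support (needs the zfs/libzfs dev headers installed)
  ./configure --with-libzfs
  make

  # then in ceph.conf, enable the ZFS snapshot / parallel-journal path:
  [osd]
  filestore zfs snap = true
  filestore journal parallel = true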
hi michal,
thanks for the info. we will certainly try it and see if we come to the same
conclusions ;)
one small detail: since you were using centos7, i'm assuming you were using ZoL
0.6.3?
stijn
On 10/29/2014 08:03 PM, Michal Kozanecki wrote:
> Forgot to mention, when you cr
KVM VMs. I have not tried any sort of dedupe as it is
memory intensive and I only had 24GB of RAM on each node. I'll grab some FIO
benchmarks and report back.
Cheers,
-Original Message-
From: Christian Balzer [mailto:ch...@gol.com]
Sent: October-30-14 4:12 AM
To: ceph-users
Cc: M
it's been shown in the past that separating the journal on SSD-based
pools doesn't really do much.
Michal Kozanecki | Linux Administrator | mkozane...@evertz.com
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Wido
den Hollander
Sent: October-28