On 09. nov. 2017 22:52, Marc Roos wrote:
I added an erasure coded (k=3, m=2) pool on a 3-node test cluster and am
getting these errors.
pg 48.0 is stuck undersized for 23867.00, current state
active+undersized+degraded, last acting [9,13,2147483647,7,2147483647]
pg 48.1 is stuck und
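The 2147483647 entries in the acting set are CRUSH's "none" placeholder: a
k=3,m=2 pool needs 5 distinct failure domains, and with the default
crush-failure-domain=host a 3-node cluster only has 3 to offer. As a sketch
only (profile, rule and pool names are examples, and mapping per OSD gives up
host-level redundancy), a profile that places chunks per OSD instead would
look roughly like:

  ceph osd erasure-code-profile set ec32-osd k=3 m=2 crush-failure-domain=osd
  ceph osd crush rule create-erasure ec32-osd-rule ec32-osd
  ceph osd pool set <your-ec-pool> crush_rule ec32-osd-rule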
On 11/10/2017 7:17 AM, Sébastien VIGNERON wrote:
> Hi everyone,
>
> Beginner with Ceph, I'm looking for a way to do 3-way replication
> between 2 datacenters, as mentioned in the Ceph docs (but not described).
>
> My goal is to keep access to the data (at least read-only access) even
> when the link betw
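One common way to express that in CRUSH is a rule that puts 2 copies in one
datacenter and 1 in the other (a sketch only; the rule name and id are
examples, and note that if the datacenter holding 2 copies is the one lost,
the surviving single copy is below min_size=2 and IO blocks until min_size is
lowered):

  ceph osd getcrushmap -o crushmap.bin
  crushtool -d crushmap.bin -o crushmap.txt
  # add something like this to crushmap.txt:
  #   rule replicated_2dc {
  #       id 1
  #       type replicated
  #       min_size 2
  #       max_size 3
  #       step take default
  #       step choose firstn 2 type datacenter
  #       step chooseleaf firstn 2 type host
  #       step emit
  #   }
  crushtool -c crushmap.txt -o crushmap.new
  ceph osd setcrushmap -i crushmap.new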
On Fri, Nov 10, 2017 at 4:29 AM, Robert Stanford
wrote:
>
> In my cluster, rados bench shows about 1GB/s bandwidth. I've done some
> tuning:
>
> [osd]
> osd op threads = 8
> osd disk threads = 4
> osd recovery max active = 7
>
>
> I was hoping to get much better bandwidth. My network can handle
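For context, a sketch of the sort of invocation that produces that
Bandwidth (MB/sec) figure (pool name and thread count are examples); rados
bench writes 4 MB objects by default, and raising -t or running several
clients in parallel often shifts the bottleneck away from a single client:

  rados bench -p testbench 60 write -t 32 --no-cleanup
  rados bench -p testbench 60 seq -t 32
  rados -p testbench cleanup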
Hi Rudi,
On Thu, Nov 09, 2017 at 06:52:04PM +0200, Rudi Ahlers wrote:
> Hi Caspar,
>
> Is this in the [global] or [osd] section of ceph.conf?
Set it in the global section.
>
> I am new to ceph so this is all still very vague to me.
> What is the difference between the WAL and the DB?
https://pve.pr
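In short: the DB device holds BlueStore's RocksDB metadata, while the WAL is
its write-ahead log (roughly the successor of the filestore journal); if you
only give BlueStore a DB device, the WAL lives there too. The exact parameter
Caspar suggested is cut off above, but as an illustration only, explicit
sizes can be set in [global] like this (the values are just examples):

  [global]
  bluestore_block_db_size = 32212254720    # 30 GB, example
  bluestore_block_wal_size = 1073741824    # 1 GB, example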
On Thu, Nov 09, 2017 at 05:38:46PM +0100, Caspar Smit wrote:
> 2017-11-09 17:02 GMT+01:00 Alwin Antreich :
>
> > Hi Rudi,
> > On Thu, Nov 09, 2017 at 04:09:04PM +0200, Rudi Ahlers wrote:
> > > Hi,
> > >
> > > Can someone please tell me what the correct procedure is to upgrade a
> > CEPH
> > > journ
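The question is cut off here; assuming it is about moving a filestore journal
to a new (e.g. bigger or faster) device, the commonly quoted sequence is
roughly the following (OSD id 3 and the device handling are placeholders):

  ceph osd set noout
  systemctl stop ceph-osd@3
  ceph-osd -i 3 --flush-journal
  # repartition / replace the journal device and update the journal symlink
  ceph-osd -i 3 --mkjournal
  systemctl start ceph-osd@3
  ceph osd unset noout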
2017-11-09 17:52 GMT+01:00 Rudi Ahlers :
> Hi Caspar,
>
> Is this in the [global] or [osd] section of ceph.conf?
>
> I've put it in the [global] section, but it could be that it belongs in the
[osd] section; the parameter is not really documented on that point.
> I am new to ceph so this is all still
I have often seen a problem where a single OSD stuck in an eternal deep scrub
will hang any client trying to connect. Stopping or restarting that
single OSD fixes the problem.
Do you use snapshots?
Here's what the scrub bug looks like (where that many seconds is 14 hours):
> ceph daemon "osd.$osd_numb
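A sketch of that workaround (the OSD id is a placeholder); pausing deep
scrubs first keeps the PG from immediately re-entering the same state:

  ceph osd set nodeep-scrub
  systemctl restart ceph-osd@12
  # wait for ceph health detail to show the blocked requests gone, then
  ceph osd unset nodeep-scrub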
Hello,
is it possible to prefer a monitor/mgr, so that it becomes the leader monitor?
What are the important parameters during monitor election?
Rank could be a parameter, but is it possible to set the rank, perhaps in
ceph.conf?
Thanks for your answer.
Regards,
Erik
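As far as I know the leader is simply the monitor with the lowest rank, and
rank is derived from sorting the monitors' IP:port in the monmap, so there is
no ceph.conf knob for it; to see the current ranks and leader:

  ceph mon dump                       # lists each mon with its rank
  ceph quorum_status -f json-pretty   # "quorum_leader_name" is the current leader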
Hi,
Anybody has experience with live migration features?
Thanks a lot in advance.
Óscar Segarra
On Nov 7, 2017 at 14:02, "Oscar Segarra" wrote:
> Hi,
>
> In my environment I'm working with a 3 node ceph cluster based on Centos 7
> and KVM. My VM is a clone of a protected snapshot as is sugges
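Since RBD images are shared storage, libvirt live migration does not need to
copy any disk data; a minimal sketch (domain and host names are examples):

  virsh migrate --live --verbose myvm qemu+ssh://node2/system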
The bandwidth of the network is much higher than that. The bandwidth I
mentioned came from "rados bench" output, under the "Bandwidth (MB/sec)"
row. I see from comparing mine to others online that mine is pretty good
(relatively). But I'd like to get much more than that.
Does "rados bench" sho
Hello,
I have some issues to restart down OSDs.
My cluster is running on debian stretch (with backported kernel 4.13.0)
with luminous version (12.2.0).
An admin changed the fsid and restarted the OSDs of one machine. I
don't know if that can be the cause of all of this, but my cluster is in
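If the fsid stored with the OSDs no longer matches the cluster's, those OSDs
will refuse to join; a quick way to compare the two (default cluster name
"ceph" assumed):

  ceph fsid
  cat /var/lib/ceph/osd/ceph-*/ceph_fsid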
So you are using a 40 / 100 gbit connection all the way to your client?
John's question is valid because 10 Gbit = 1.25 GB/s ... subtract some
Ethernet, IP, TCP and protocol overhead, take into account some
additional network factors, and you are about there...
Denes
On 11/10/2017 05:10 PM, R
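Spelled out, the arithmetic behind that estimate:

  10 Gbit/s / 8 bits per byte                  = 1.25 GB/s raw
  minus roughly 5-10% Ethernet/IP/TCP framing  ~= 1.1-1.2 GB/s usable payload

so a rados bench result around 1 GB/s on a 10 Gbit link is already close to
wire speed.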
Thank you for that excellent observation. Are there any rumors / has
anyone had experience with faster clusters, on faster networks? I wonder
how fast Ceph can get ("it depends", of course), but I'm curious about numbers
people have seen.
On Fri, Nov 10, 2017 at 10:31 AM, Denes Dolhay wrote:
> So you
But sorry, this was about "rados bench" which is run inside the Ceph
cluster. So there's no network between the "client" and my cluster.
On Fri, Nov 10, 2017 at 10:35 AM, Robert Stanford
wrote:
>
> Thank you for that excellent observation. Are there any rumors / has
> anyone had experience w
FWIW, on very fast drives you can achieve at least 1.4GB/s and 30K+
write IOPS per OSD (before replication). It's quite possible to do
better but those are recent numbers on a mostly default bluestore
configuration that I'm fairly confident to share. It takes a lot of
CPU, but it's possible.
Hi
Not too sure what you are looking for, but these are the kind of
performance numbers we are getting on our Jewel 10.2 install.
We have tweaked things up a bit to get better write performance.
All writes were done using fio (libaio) with a 2-minute warm-up and a 10-minute run.
6 node cluster - spinning disk with s
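The exact job file isn't shown in this excerpt; a hedged reconstruction of
that kind of run (the target device, block size and queue depth are guesses):

  fio --name=writetest --ioengine=libaio --direct=1 --rw=write \
      --bs=4m --iodepth=32 --numjobs=1 \
      --ramp_time=120 --runtime=600 --time_based \
      --filename=/dev/rbd0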
Hi Mark,
It will be interesting to know:
The impact of replication. I guess it will decrease by a higher factor
than the replica count.
I assume you mean the 30K IOPS per OSD is what the client sees; if so,
the OSD raw disk itself will be doing more IOPS. Is this correct, and if
so, what is the
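Back-of-the-envelope for why the raw disks see more IOPS than the client
does (3x replication assumed, small writes):

  1 client write -> 3 replica writes (one per OSD in the PG)
  filestore: + 1 journal write per replica -> ~6 device writes per client write
  bluestore: ~3 device writes plus WAL/RocksDB metadata traffic

so client IOPS is very roughly the aggregate raw IOPS divided by 3-6,
depending on the store and write size.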
Hi.
NFSv3 is a bit different from v4. In the case of v3, you need to mount
the full path, rather than the Pseudo path. This can cause problems for
cephfs, because you probably exported / from cephfs.
A good solution to this is to set
mount_path_pseudo = true;
in the NFS_CORE_PARAM block.
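i.e. in ganesha.conf:

  NFS_CORE_PARAM {
      mount_path_pseudo = true;
  }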
rados benchmark is a client application that simulates client io to
stress the cluster. This applies whether you run the test from an
external client or from a cluster server that will act as a client. For
fast clusters, the client will saturate (CPU/net) before the cluster
does. To get accurate
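One way to take the single client out of the equation (a sketch; pool and
run names are examples) is to start several instances at the same time from
different machines and add up the reported bandwidth:

  # on client1
  rados bench -p testbench 60 write -t 16 --run-name client1 --no-cleanup
  # on client2, started simultaneously
  rados bench -p testbench 60 write -t 16 --run-name client2 --no-cleanup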
On 11/10/2017 12:21 PM, Maged Mokhtar wrote:
Hi Mark,
It will be interesting to know:
The impact of replication. I guess it will decrease by a higher factor
than the replica count.
I assume you mean the 30K IOPS per OSD is what the client sees; if so,
the OSD raw disk itself will be doing mor
OSDs are crashing when putting an (8GB) file into an erasure coded pool,
just before finishing. The same OSDs are used for replicated pools
(rbd/cephfs) and seem to do fine. Did I make some error, or is this a bug?
Looks similar to
https://www.spinics.net/lists/ceph-devel/msg38685.html
http://lists.c
Hey all,
I’m having some trouble setting up a Pool for Erasure Coding. I haven’t found
much documentation around the PG calculation for an Erasure Coding pool. It
seems from what I've tried so far that the math needed to set one up is
different from the math you use to calculate PGs for a reg
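A commonly used rule of thumb is the replicated-pool formula with size = k+m:
target PGs ~= (number of OSDs x 100) / (k + m), rounded to a power of two. A
sketch with made-up numbers (20 OSDs, k=4, m=2: 2000 / 6 ~= 333, so 256 or 512):

  ceph osd erasure-code-profile set ec42 k=4 m=2 crush-failure-domain=host
  ceph osd pool create ecpool 512 512 erasure ec42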
I finally found an alternative way to try out objclass.
I cloned the repo of Ceph from Github, copied the src/cls/sdk/cls_sdk.cc to
src/cls/test/cls_test.cc, revised src/cls/CMakeLists.txt and used cmake to
compile cls_test.cc in the repo.
I got build/lib/libcls_test.so generated and copied this .s
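For actually loading the class, a hedged sketch (the paths, OSD id and the
osd_class_* option names below are assumptions based on Luminous-era
defaults):

  # ask an OSD where it looks for classes
  ceph daemon osd.0 config get osd_class_dir
  # copy the compiled class there on every OSD host, e.g.
  cp build/lib/libcls_test.so /usr/lib64/rados-classes/
  # allow it to be loaded (ceph.conf, [osd] section), then restart the OSDs
  #   osd class load list = *
  systemctl restart ceph-osd.target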