On Tue, Apr 4, 2017 at 7:09 AM, Ben Morrice wrote:
> Hi all,
>
> We have a weird issue with a few inconsistent PGs. We are running ceph 11.2
> on RHEL7.
>
> As an example inconsistent PG we have:
>
> # rados -p volumes list-inconsistent-obj 4.19
> {"epoch":83986,"inconsistents":[{"object":{"name":
On Tue, Apr 4, 2017 at 2:49 AM, Jens Rosenboom wrote:
> On a busy cluster, I'm seeing a couple of OSDs logging millions of
> lines like this:
>
> 2017-04-04 06:35:18.240136 7f40ff873700 0
> cls/log/cls_log.cc:129: storing entry at
> 1_1491287718.237118_57657708.1
> 2017-04-04 06:35:18.244453 7f4
Here is some background on Ceph striping [1]. By default, RBD will stripe
data with a stripe unit of 4MB and a stripe count of 1. Decreasing the
default RBD image object size will balloon the number of objects in
your backing Ceph cluster but will also result in less data to copy
during snapshot and c
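As a concrete illustration (pool/image names and sizes are placeholders; older rbd releases only accept --order, where --order 20 means 2^20-byte, i.e. 1 MiB, objects, while newer ones also accept --object-size), an image with 1 MiB objects instead of the 4 MiB default can be created and checked like this:

rbd create volumes/test-image --size 10240 --order 20
rbd info volumes/test-image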
Hi everyone,
Are there any osd or filestore options that operators are tuning for
all-SSD clusters? If so (and they make sense) we'd like to introduce them
as defaults for ssd-backed OSDs.
BlueStore already has different hdd and ssd default values for many
options that it chooses based on the
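For illustration only (example values, not proposed defaults), the kind of OSD/FileStore settings operators report tuning on all-SSD clusters looks roughly like this in ceph.conf:

[osd]
osd_op_num_shards = 8
osd_op_num_threads_per_shard = 2
filestore_max_sync_interval = 10
filestore_queue_max_ops = 5000
filestore_queue_max_bytes = 1048576000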
I don't recall. Perhaps later I can try a test and see.
On Fri, Apr 28, 2017 at 10:22 AM Ali Moeinvaziri wrote:
> Thanks. So, you didn't get any errors from the command "ceph-deploy mon
> create-initial"?
> -AM
>
>
> On Fri, Apr 28, 2017 at 9:50 AM, Roger Brown
> wrote:
>
>> I used ceph on centos 7.
Thanks. So, you didn't get any errors from the command "ceph-deploy mon
create-initial"?
-AM
On Fri, Apr 28, 2017 at 9:50 AM, Roger Brown wrote:
> I used ceph on centos 7. I check monitor status with commands like these:
> systemctl status ceph-mon@nuc1
> systemctl stop ceph-mon@nuc1
> systemctl start
I used ceph on centos 7. I check monitor status with commands like these:
systemctl status ceph-mon@nuc1
systemctl stop ceph-mon@nuc1
systemctl start ceph-mon@nuc1
systemctl restart ceph-mon@nuc1
For me, the hostnames are nuc1, nuc2, and nuc3, so you will have to modify
them to suit your case.
On Fri, Apr 28,
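To confirm that the monitors actually re-formed quorum after a restart, something along these lines works (nuc1 matches the hostnames above; the ceph daemon command must be run on the monitor host itself):

ceph mon stat
ceph quorum_status -f json-pretty
ceph daemon mon.nuc1 mon_status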
Hi,
I'm just trying to install and test Ceph with CentOS 7, which is the
recommended version over CentOS 6 (if I read it correctly). However, the
scripts still seem to be tuned for CentOS 6. Here is the error I get when
deploying the monitor node:
[ceph_deploy.mon][ERROR ] Failed to execute command:
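For reference, the usual ceph-deploy sequence on CentOS 7 is roughly the following (the hostname mon1 is a placeholder); the full text after "Failed to execute command:" is needed to tell which step actually failed:

ceph-deploy new mon1
ceph-deploy install mon1
ceph-deploy mon create-initial
ceph-deploy gatherkeys mon1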
On 04/28/2017 08:23 AM, Frédéric Nass wrote:
On 28/04/2017 at 15:19, Frédéric Nass wrote:
Hi Florian, Wido,
That's interesting. I ran some bluestore benchmarks a few weeks ago on
Luminous dev (1st release) and came to the same (early) conclusion
regarding the performance drop with many smal
You can't have different EC profiles in the same pool either. You have to
create the pool with either a specific EC profile or as a replicated pool.
If you choose EC, you can't even change the EC profile later; however, you
can change the number of copies a replicated pool has. An EC pool of 1:1
doesn't do anyth
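For reference, the profile is fixed when the pool is created, whereas a replicated pool's size can be changed later; roughly (profile name, k/m values, and PG counts are only examples):

ceph osd erasure-code-profile set myprofile k=4 m=2
ceph osd pool create ecpool 128 128 erasure myprofile
ceph osd pool set some_replicated_pool size 3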
On 04/28/2017 02:48 PM, David Turner wrote:
> Wouldn't k=1, m=1 just be replica 2?
Well yes. But Ceph does not support mixing replication and erasure code in the
same pool.
> EC will split the object into k pieces (1)... Ok, that's the whole object.
I was just wondering if jerasure tolerate
On 28/04/2017 at 15:19, Frédéric Nass wrote:
Hi Florian, Wido,
That's interesting. I ran some bluestore benchmarks a few weeks ago on
Luminous dev (1st release) and came to the same (early) conclusion
regarding the performance drop with many small objects on bluestore,
whatever the number
Hi Florian, Wido,
That's interesting. I ran some bluestore benchmarks a few weeks ago on
Luminous dev (1st release) and came to the same (early) conclusion
regarding the performance drop with many small objects on bluestore,
regardless of the number of PGs in the pool. Here is the graph I generate
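As an example of the kind of small-object workload being discussed, a benchmark roughly like this (pool name and parameters are illustrative) writes 4 KB objects and then reads them back at random:

rados bench -p testpool 60 write -b 4096 -t 16 --no-cleanup
rados bench -p testpool 60 rand -t 16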
Wouldn't k=1, m=1 just be replica 2? EC will split the object into k pieces
(1)... OK, that's the whole object. And then you want to be able to lose m
pieces (1)... OK, that's an entire copy of that whole object. That isn't
erasure coding; that is full 2-copy replication. For
erasure
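Put as arithmetic (illustrative numbers), the raw-space overhead of an EC profile is (k+m)/k:

k=1, m=1  ->  (1+1)/1 = 2.0x raw usage, tolerates 1 lost chunk (the same cost and resilience as size=2 replication)
k=4, m=2  ->  (4+2)/4 = 1.5x raw usage, tolerates 2 lost chunks (versus 3.0x for size=3 replication)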
Hello again,
I can work around this issue: if the host header is an IP address, the
request is not treated as a virtual host request, so if I auth to my
backends via IP, things work as expected.
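For example, with s3cmd this just means pointing the endpoint at the gateway's IP instead of a hostname in ~/.s3cfg (address and port are placeholders; 7480 is the default civetweb port):

host_base = 192.0.2.10:7480
host_bucket = 192.0.2.10:7480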
Kind regards,
Ben Morrice
Hello Radek,
Thanks again for your analysis.
I can confirm that on 10.2.7, if I remove the "rgw dns name" config option,
I can auth directly to the radosgw host.
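For reference, the option in question lives in ceph.conf under the RGW client section, along these lines (section name and domain are placeholders):

[client.rgw.gateway-1]
rgw dns name = s3.example.com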
In our environment we terminate SSL and route connections via haproxy,
but it's still sometimes useful to be able to communicate directly to