On Tue, 6 Feb 2018 13:24:18 +0530 Karun Josy wrote:
> Hi Christian,
>
> Thank you for your help.
>
> Ceph version is 12.2.2. So is this value bad? Do you have any suggestions?
>
That should be fine AFAIK; some (all?) versions of Jewel definitely are
not.
>
> ceph tell osd.* injectargs '
On Tue, 6 Feb 2018 13:27:22 +0530 Karun Josy wrote:
> Hi Christian,
>
> Thank you for your help.
>
> Ceph version is 12.2.2. So is this value bad? Do you have any suggestions?
>
>
> So to reduce the max chunk, I assume I can choose something like
> 7 << 20, i.e. 7340032?
>
More like 4MB to
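(For reference, since the option name is cut off in the quoted commands above,
here is a minimal sketch of what such a change might look like, assuming the
setting in question is osd_recovery_max_chunk; treat the option name as a
placeholder if the thread was about a different byte-valued chunk setting.)

# Bit-shift arithmetic for the values discussed above
echo $((7 << 20))   # 7340032 (7 MiB)
echo $((4 << 20))   # 4194304 (4 MiB)

# Inject the lower value at runtime on all OSDs (not persistent across restarts)
ceph tell osd.* injectargs '--osd_recovery_max_chunk 4194304'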
Just to add: we wrote a little wrapper that reads the output of "radosgw-admin
usage show" and stops when the loop starts. When we add up all the entries
ourselves, the result is correct. Moreover, the duplicate timestamp that we
detect to break the loop is not the last one taken into account. E.g.:
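(The example is cut off above. This is not the poster's actual wrapper, just a
rough sketch of the idea: sum usage until the first repeated timestamp. The jq
field paths (.entries, .buckets, .epoch, .categories, .bytes_received) and the
uid are assumptions about the "radosgw-admin usage show" JSON layout.)

# Print "epoch total_bytes_received" per usage entry, then stop adding at the
# first duplicate epoch, i.e. where the output starts to loop
radosgw-admin usage show --uid=someuser \
  | jq -r '.entries[].buckets[]
           | "\(.epoch) \(([.categories[].bytes_received] | add) // 0)"' \
  | awk '{ if ($1 in seen) exit; seen[$1] = 1; total += $2 }
         END { print total }'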
On 02/06/2018 04:03 AM, Christian Balzer wrote:
> Hello,
>
> On Mon, 5 Feb 2018 22:04:00 +0100 Tobias Kropf wrote:
>
>> Hi ceph list,
>>
>> we have a hyperconverged ceph cluster with kvm on 8 nodes with ceph
>> hammer 0.94.10.
> Do I smell Proxmox?
Yes, we use Proxmox atm.
>
>> The cluster is now
Hello,
On Tue, 6 Feb 2018 09:21:22 +0100 Tobias Kropf wrote:
> On 02/06/2018 04:03 AM, Christian Balzer wrote:
> > Hello,
> >
> > On Mon, 5 Feb 2018 22:04:00 +0100 Tobias Kropf wrote:
> >
> >> Hi ceph list,
> >>
> >> we have a hyperconverged ceph cluster with kvm on 8 nodes with ceph
> >> ham
Hello! My cluster uses two networks.
In ceph.conf there are two entries: public_network = 10.53.8.0/24,
cluster_network = 10.0.0.0/24.
Servers and clients are connected to the same switch.
To store data in Ceph, clients use CephFS:
10.53.8.141:6789,10.53.8.143:6789,10.53.8.144:6789:/ on /mnt
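(For completeness, a sketch of the corresponding config and client mount; the
client name and secret file are placeholders, the addresses are the ones from
the message above, and clients only ever talk over the public network, since
cluster_network carries OSD-to-OSD traffic.)

# ceph.conf on the cluster nodes
[global]
    public_network  = 10.53.8.0/24
    cluster_network = 10.0.0.0/24

# Kernel CephFS mount on a client
mount -t ceph 10.53.8.141:6789,10.53.8.143:6789,10.53.8.144:6789:/ /mnt \
    -o name=admin,secretfile=/etc/ceph/admin.secret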
Dear all,
we finally found the reason for the unexpected growth in our cluster.
The data was created by a collectd plugin [1] that measures latency by
running rados bench once a minute. Since our cluster was stressed out
for a while, removing the objects created by rados bench failed. We
comple
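(The message is cut off above. In case it helps anyone hitting the same thing:
leftover rados bench objects can usually be removed with "rados cleanup", e.g.
as below; the pool name is a placeholder, and benchmark_data is the default
object-name prefix used by rados bench.)

# Remove objects left behind by an interrupted or failed rados bench run
rados -p mypool cleanup --prefix benchmark_data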
Ah! Right, I guess my actual question was:
How do osd crush chooseleaf type = 0 and type = 1 alter the crushmap?
By experimentation I've figured out that:
"osd crush chooseleaf type = 0" turns into "step choose firstn 0 type
osd" and
"osd crush chooseleaf type = 1" turns into "step chooseleaf fi
Hi all,
I had the idea to use an RBD device as the SBD device for a pacemaker
cluster. So I don't have to fiddle with multipathing and all that stuff.
Has someone already tested this somewhere and can tell how the cluster
reacts to this?
I think this shouldn't be a problem, but I'm just wondering i
On 02/06/2018 01:00 PM, Kai Wagner wrote:
Hi all,
I had the idea to use an RBD device as the SBD device for a pacemaker
cluster. So I don't have to fiddle with multipathing and all that stuff.
Has someone already tested this somewhere and can tell how the cluster
reacts to this?
I think this
Hi Frederic,
I've not enabled debug-level logging on all OSDs, just on one for the test;
I need to double-check that.
But it looks like merging is ongoing on a few OSDs, or the OSDs are faulty; I
will dig into that tomorrow.
Write bandwidth is very random
# rados bench -p default.rgw.buckets.data 120 write
h
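(If "merging" here refers to filestore directory splitting/merging, which is a
guess, the thresholds driving it can be checked on a running OSD via its admin
socket; osd.0 is a placeholder and the command must run on that OSD's host.)

ceph daemon osd.0 config show \
    | grep -E 'filestore_merge_threshold|filestore_split_multiple'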
Hi Jakub,
On 06/02/2018 at 16:03, Jakub Jaszewski wrote:
Hi Frederic,
I've not enabled debug-level logging on all OSDs, just on one for the
test; I need to double-check that.
But it looks like merging is ongoing on a few OSDs, or the OSDs are
faulty; I will dig into that tomorrow.
Write bandwidth is
On 2018-02-06T13:00:59, Kai Wagner wrote:
> I had the idea to use an RBD device as the SBD device for a pacemaker
> cluster. So I don't have to fiddle with multipathing and all that stuff.
> Has someone already tested this somewhere and can tell how the cluster
> reacts to this?
SBD should work
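(A rough, untested sketch of the setup being discussed; the pool and image
names are placeholders, the pool is assumed to already exist, and the sysconfig
path varies by distribution.)

# Create a small image for SBD and map it on every cluster node
rbd create sbd/fence-disk --size 16     # size in MB; SBD only needs a few MB
rbd map sbd/fence-disk

# Initialise the device for SBD and point the daemon at it
sbd -d /dev/rbd/sbd/fence-disk create
echo 'SBD_DEVICE="/dev/rbd/sbd/fence-disk"' >> /etc/sysconfig/sbd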
Hello Ceph users. Is object lifecycle (currently expiration) for rgw
implementable on a per-object basis, or is the smallest scope the bucket?
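(As far as I know, the lifecycle configuration itself is attached to the
bucket, but individual rules can be limited to a key prefix, which gets fairly
close to per-object control. A sketch using the AWS CLI against an RGW
endpoint; the endpoint URL and bucket name are placeholders, and the older
top-level "Prefix" form is used rather than the newer "Filter" block.)

aws --endpoint-url http://rgw.example.com s3api put-bucket-lifecycle-configuration \
    --bucket mybucket --lifecycle-configuration \
    '{"Rules":[{"ID":"expire-tmp","Status":"Enabled","Prefix":"tmp/","Expiration":{"Days":7}}]}'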
We had a 26-node production ceph cluster which we upgraded to Luminous a
little over a month ago. I added a 27th node with Bluestore and didn't have
any issues, so I began converting the others, one at a time. The first two
went off pretty smoothly, but the 3rd is doing something strange.
Initiall
On Mon, Feb 5, 2018 at 9:08 AM, Keane Wolter wrote:
> Hi Patrick,
>
> Thanks for the info. Looking at the fuse options in the man page, I should
> be able to pass "-o uid=$(id -u)" at the end of the ceph-fuse command.
> However, when I do, it comes back with an unknown option error from fuse and
> segfaults
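(For context, the invocation described above looks roughly like this; the
mountpoint is illustrative. Per the report, fuse rejects the option as unknown
and ceph-fuse then segfaults.)

ceph-fuse /mnt/cephfs -o uid=$(id -u)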