Ceph still surprises me: whenever I'm sure I've fully understood it,
something 'strange' (to my knowledge) happens.
I need to move a server out of my Ceph hammer cluster (3 nodes, 4 OSDs
per node), and for some reasons I cannot simply move the disks.
So I've added a new node, and yesterday I set up the
Hi Marco,
On 11/22/18 9:22 AM, Marco Gaiarin wrote:
>
> ...
> But, despite the fact that the weight is zero, a rebalance happens, and
> the percentage of rebalanced data is 'weighted' to the size of the new disk (e.g.,
> I had about 18TB of space, I added a 2TB disk and roughly 10% of the
> data started to rebal
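For reference, the usual workflow for adding an OSD without triggering one big data movement is to add it with an initial CRUSH weight of 0 and then ramp the weight up gradually. A minimal sketch follows; osd.12 and host "newnode" are placeholders, and the final weight should match the disk size in TB:

# Add the new OSD to the CRUSH map with zero crush weight (no data placed on it yet).
ceph osd crush add osd.12 0 host=newnode
# Ramp the crush weight up in steps, letting recovery settle in between.
ceph osd crush reweight osd.12 0.5
ceph osd crush reweight osd.12 2.0
# Check weights and placement.
ceph osd tree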
Hi Florian,
On 11/21/2018 7:01 PM, Florian Engelmann wrote:
Hi Igor,
sad to say, but I failed to build the tool. I tried to build the whole
project as documented here:
http://docs.ceph.com/docs/mimic/install/build-ceph/
But as my workstation is running Ubuntu, the binary fails on SLES:
./
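A binary built on Ubuntu will generally not run on SLES because of glibc and shared-library version differences. One workaround is to build inside a container of the target distribution; the sketch below assumes Docker is available and uses an openSUSE Leap image as a stand-in for SLES, with the make target left as a placeholder:

# Build inside a container matching the target distro (opensuse/leap used
# here as a stand-in for SLES; adjust the image/tag to your target).
docker run -it --rm -v "$PWD":/ceph opensuse/leap:15 bash
# Inside the container:
cd /ceph
./install-deps.sh
./do_cmake.sh
cd build && make -j"$(nproc)"   # or name just the tool's make target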
Hi, Paweł Sadowsk!
You wrote...
> We did similar changes many times and it always behaved as expected.
Ok. Good.
> Can you show your crushmap/ceph osd tree?
Sure!
root@blackpanther:~# ceph osd tree
ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 2
On Thu, 22 Nov 2018 12:05:12 +0100
Marco Gaiarin wrote:
> Hi, Paweł Sadowsk!
> You wrote...
>
> > We did similar changes many times and it always behaved as
> > expected.
>
> Ok. Good.
>
> > Can you show your crushmap/ceph osd tree?
>
> Sure!
>
> root@blackpanther:~# c
The reason for the rebalance is that you are using the straw algorithm. If you switch
to straw2, no data will be moved.
From: ceph-users on behalf of Jarek
Sent: Thursday, November 22, 2018 19:22
To: Marco Gaiarin
Cc: ceph-users@lists.ceph.com
Subject: Re: [cep
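For completeness, the usual way to switch bucket algorithms is to edit the CRUSH map offline and inject it back. A hedged sketch follows; note it requires a release whose clients and OSDs understand straw2, so it does not apply to hammer as-is:

ceph osd getcrushmap -o crush.bin
crushtool -d crush.bin -o crush.txt
# change every "alg straw" bucket to "alg straw2"
sed -i 's/alg straw$/alg straw2/' crush.txt
crushtool -c crush.txt -o crush.new
ceph osd setcrushmap -i crush.new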
Hi,
The ceph.com ceph luminous packages for Ubuntu Bionic still depend on
libcurl3 (specifically ceph-common, radosgw, and librgw2 all depend on
libcurl3 (>= 7.28.0)).
This means that anything that depends on libcurl4 (which is the default
libcurl in bionic) isn't co-installable with ceph. That
Bionic's mimic packages do seem to depend on libcurl4 already, for what
that's worth:
root@vm-gw-1:/# apt-cache depends ceph-common
ceph-common
...
Depends: libcurl4
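A quick way to confirm the conflict on a Bionic box (a sketch; the exact control fields depend on the packaging) is to look at the curl dependencies and at what libcurl4 declares against libcurl3:

apt-cache depends ceph-common | grep -i curl
apt-cache show libcurl4 | grep -iE 'breaks|conflicts|replaces'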
On 22/11/2018 12:40, Matthew Vernon wrote:
> Hi,
>
> The ceph.com ceph luminous packages for Ubuntu Bionic still depend on
> li
Hi, Zongyou Yao!
You wrote...
> The reason for the rebalance is that you are using the straw algorithm. If you switch
> to straw2, no data will be moved.
I'm still on hammer, so:
http://docs.ceph.com/docs/hammer/rados/operations/crush-map/
it seems there's no 'straw2'...
--
dot
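On hammer you can at least confirm which bucket algorithm is in use and which tunables profile is active before planning an upgrade; a small sketch:

ceph osd getcrushmap -o crush.bin
crushtool -d crush.bin -o crush.txt
grep 'alg ' crush.txt          # hammer-era maps will show "alg straw"
ceph osd crush show-tunables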
We've encountered the same problem on Debian Buster
Paul
--
Paul Emmerich
Looking for help with your Ceph cluster? Contact us at https://croit.io
croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90
On Thu, Nov 22, 2018 at 13:58, Richard Hesketh
wrote:
>
> Bio
On 19/11/2018 16:23, Florian Haas wrote:
> Hi everyone,
>
> I've recently started a documentation patch to better explain Swift
> compatibility and OpenStack integration for radosgw; a WIP PR is at
> https://github.com/ceph/ceph/pull/25056/. I have, however, run into an
> issue that I would really
On 11/22/18 12:22 PM, Jarek wrote:
> On Thu, 22 Nov 2018 12:05:12 +0100
> Marco Gaiarin wrote:
>
>> Hi, Paweł Sadowsk!
>> You wrote...
>>
>>> We did similar changes many times and it always behaved as
>>> expected.
>>
>> Ok. Good.
>>
>>> Can you show your crushmap/ceph osd tre
Hello dear ceph users:
We are running a ceph cluster with Luminous (BlueStore). As you may know,
this new ceph version has a new feature called "checksums". I would like
to ask if this feature replaces deep-scrub. In our cluster, we run
deep-scrub every month; however, the impact on the performanc
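As a side note: BlueStore checksums are verified when data is actually read, so by themselves they never exercise cold, rarely-read objects; deep-scrub is still what proactively reads and compares replicas. If the concern is the impact, the scrub schedule can be stretched and confined instead of disabled; a hedged sketch with illustrative values (option names as of Luminous):

ceph tell osd.* injectargs '--osd_deep_scrub_interval 2592000'                 # ~30 days
ceph tell osd.* injectargs '--osd_scrub_begin_hour 22 --osd_scrub_end_hour 6'  # off-peak window
ceph tell osd.* injectargs '--osd_scrub_load_threshold 0.3'                    # skip scrubs under load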
Hi, Paweł Sadowsk!
You wrote...
> From your osd tree it looks like you used 'ceph osd reweight'.
Yes, and I supposed I was doing the right thing!
Now, I've tried to lower the to-be-dismissed OSD, using:
ceph osd reweight 2 0.95
leading to an osd map tree like:
root@blackp
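The important distinction here is that 'ceph osd reweight' only sets the temporary override weight (the 0-1 value in the REWEIGHT column), while 'ceph osd crush reweight' changes the CRUSH weight itself (the WEIGHT column). To drain an OSD before removing it, the usual approach is to step the CRUSH weight down; a minimal sketch for osd.2:

# Override weight (what was used above; 0..1, shown in the REWEIGHT column):
ceph osd reweight 2 0.95
# CRUSH weight (shown in the WEIGHT column) -- step it down to drain the OSD:
ceph osd crush reweight osd.2 1.0
ceph osd crush reweight osd.2 0.5
ceph osd crush reweight osd.2 0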
On 22/11/2018 13:40, Paul Emmerich wrote:
We've encountered the same problem on Debian Buster
It looks to me like this could be fixed simply by building the Bionic
packages in a Bionic chroot (ditto Buster); maybe that could be done in
future? Given I think the packaging process is being revi
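For what it's worth, a sketch of building in a matching chroot with sbuild (package names and paths are illustrative):

sudo apt install sbuild schroot debootstrap
sudo sbuild-createchroot bionic /srv/chroot/bionic-amd64 http://archive.ubuntu.com/ubuntu
sbuild -d bionic ceph_*.dsc      # builds against Bionic's libcurl4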
Hi,
I'm looking for an example Ceph configuration and topology for a full layer 3
networking deployment. Maybe all daemons can use a loopback alias address in
this case. But how should the cluster network and public network configuration be set,
using a supernet? I think using a loopback alias address can prevent the
daemon
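In case it helps: the public/cluster network options are only used by each daemon to pick which local address to bind to, so a supernet that covers every node's loopback alias works in a routed L3 fabric. A hedged ceph.conf sketch with placeholder addresses:

[global]
    public network  = 10.10.0.0/16    ; supernet covering the per-node loopback /32s
    cluster network = 10.20.0.0/16

[osd.0]
    public addr  = 10.10.0.1          ; this node's loopback alias
    cluster addr = 10.20.0.1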
On 11/22/18 6:12 PM, Marco Gaiarin wrote:
Hi, Paweł Sadowsk!
You wrote...
From your osd tree it looks like you used 'ceph osd reweight'.
Yes, and I supposed I was doing the right thing!
Now, I've tried to lower the to-be-dismissed OSD, using:
ceph osd reweight 2 0.95
l
Sorry for hijacking the thread, but do you have an idea of what to watch for?
I monitor the admin sockets of the OSDs and occasionally I see a burst of both
op_w_process_latency and op_w_latency to near 150-200 ms on 7200 RPM SAS
enterprise drives.
For example, the load average on the node jumps up with idle 97
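For the record, those counters come from the OSD admin socket and are cumulative, so bursts are best read as deltas between two samples. A small sketch, assuming jq is available and osd.0 is the OSD being watched:

ceph daemon osd.0 perf dump | jq '.osd | {op_w_latency, op_w_process_latency}'
# avgcount/sum are cumulative; diff two samples, or reset between intervals:
ceph daemon osd.0 perf reset all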
On Fri, Nov 23, 2018 at 04:03:25AM +0700, Lazuardi Nasution wrote:
> I'm looking for an example Ceph configuration and topology for a full layer 3
> networking deployment. Maybe all daemons can use a loopback alias address in
> this case. But how should the cluster network and public network configuration be set,
> using