Hello,
we're testing KVM on CentOS 7 as a Ceph (Luminous) client.
CentOS 7 ships a librbd package in its base repository at version 0.94.5.
The question is (aside from feature support) whether we should install a
recent librbd from the Ceph repositories (12.2.x) or stay with the
default one.
my main conc
br
wolfgang
--
Wolfgang Lendl
IT Systems & Communications
Medizinische Universität Wien
Spitalgasse 23 / BT 88 /Ebene 00
A-1090 Wien
Tel: +43 1 40160-21231
Fax: +43 1 40160-921200
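(A side note in case it helps compare the two options: a quick way to see what
the KVM host is actually using, and one way to pull in a newer librbd. The
Storage SIG release package name below is from memory, so treat it as an
assumption and verify it first.)

# rpm -q librbd1 qemu-kvm                    # base CentOS 7 ships librbd1 0.94.5 (hammer)
# yum install centos-release-ceph-luminous   # CentOS Storage SIG repo with 12.2.x builds (assumed package name)
# yum update librbd1                         # running VMs must be restarted to pick up the new library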
ould improve
> performance.
>
> I have a blog post that we've been working on that explores some of
> these things, but I'm still waiting on review before I publish it.
>
> Mark
>
> On 11/08/2017 05:53 AM, Wolfgang Lendl wrote:
>> Hello,
>>
>> it'
Hello,
I'm looking for a recommendation on which parts/configuration/etc. to
back up from a Ceph cluster in case of a disaster.
I know this depends heavily on the type of disaster, and I'm not talking
about backing up the payload stored on the OSDs.
Currently I have my admin key stored somewhere outside th
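(Not an answer from this thread, just a sketch of the kind of cluster metadata
one might dump to a location outside the cluster; the file names and directory
are arbitrary.)

# mkdir -p /root/ceph-dr && cd /root/ceph-dr
# cp /etc/ceph/ceph.conf .              # cluster configuration
# ceph auth export -o keyring.all       # all cephx keys, including client.admin
# ceph mon getmap -o monmap.bin         # current monitor map
# ceph osd getmap -o osdmap.bin         # current OSD map
# ceph osd getcrushmap -o crushmap.bin  # current CRUSH map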
Hi,
I'm a bit confused after reading the official Ceph documentation
regarding QEMU and RBD caching:
http://docs.ceph.com/docs/master/rbd/qemu-rbd/?highlight=qemu
There's a big fat warning:
"Important: If you set rbd_cache=true, you must set cache=writeback or
risk data loss. Without cache=writeback, Q
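(Purely as an illustration of keeping the two settings consistent, not a
recommendation beyond what the warning itself says; the pool/image name is
made up and the QEMU command line is abbreviated.)

# librbd cache on the client side (ceph.conf on the KVM host):
[client]
rbd cache = true

# and the matching cache mode on the QEMU drive, e.g.:
# qemu-system-x86_64 ... -drive format=raw,file=rbd:rbd/vm-disk-1,cache=writeback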
Hi,
after upgrading my Ceph clusters from 12.2.5 to 12.2.7 I'm experiencing random
crashes from SSD OSDs (BlueStore); it seems that HDD OSDs are not affected.
I destroyed and recreated some of the SSD OSDs, which seemed to help.
This happens on CentOS 7.5 (different kernels tested).
/var/log/m
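(In case it helps with gathering data for a report: a generic way to pull the
crash context for one OSD; the OSD id 12 is only a placeholder, and the OSD has
to be stopped for the fsck.)

# journalctl -u ceph-osd@12 --since "2 hours ago" > osd.12.journal   # systemd unit log around the crash
# grep -B5 -A30 'assert' /var/log/ceph/ceph-osd.12.log               # the assert/backtrace from the OSD log
# ceph-bluestore-tool fsck --path /var/lib/ceph/osd/ceph-12          # offline consistency check of the BlueStore OSD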
Hi Alfredo,
caught some logs:
https://pastebin.com/b3URiA7p
br
wolfgang
On 2018-08-29 15:51, Alfredo Deza wrote:
> On Wed, Aug 29, 2018 at 2:06 AM, Wolfgang Lendl
> wrote:
>> Hi,
>>
>> after upgrading my ceph clusters from 12.2.5 to 12.2.7 I'm experiencing
>
Is downgrading from 12.2.7 to 12.2.5 an option? I'm still suffering
from very frequent OSD crashes.
My hopes are with 12.2.9, but hope wasn't always my best strategy.
br
wolfgang
On 2018-08-30 19:18, Alfredo Deza wrote:
> On Thu, Aug 30, 2018 at 5:24 AM, Wolfgang Lendl
> wrot
i,
>
> These reports are kind of worrying since we have a 12.2.5 cluster too
> waiting to upgrade. Did you have any luck with upgrading to 12.2.8, or
> is it still the same behavior?
> Is there a bugtracker for this issue?
>
> Kind regards,
> Caspar
>
> On Tue, 4 Sep 2018 at 09:59,
nski
>
> [1] http://tracker.ceph.com/issues/25001
> [2] http://tracker.ceph.com/issues/24211
> [3] http://tracker.ceph.com/issues/25001#note-6
>
> On Tue, Sep 4, 2018 at 12:54 PM, Alfredo Deza wrote:
>> On Tue, Sep 4, 2018 at 3:59 AM, Wolfgang Lendl
>> wrote:
>>
Hi,
I have no idea what "w=8" means and can't find any hints in the docs ...
maybe someone can explain.
Ceph 12.2.2:
# ceph osd erasure-code-profile get ec42
crush-device-class=hdd
crush-failure-domain=host
crush-root=default
jerasure-per-chunk-alignment=false
k=4
m=2
plugin=jerasure
technique=reed_s
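(For what it's worth: in the jerasure plugin, w is the word size in bits used
for the Galois-field arithmetic; it defaults to 8 and is normally left alone.
A hedged example of setting it explicitly when creating a profile; the profile
name is made up.)

# ceph osd erasure-code-profile set ec42test \
      k=4 m=2 plugin=jerasure technique=reed_sol_van \
      crush-device-class=hdd crush-failure-domain=host w=8
# ceph osd erasure-code-profile get ec42test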
Dear all,
I have a Luminous cluster with tunables profile "hammer". Now all my
hammer clients are gone and I could raise the tunables level to "jewel".
Is there any good way to predict the data movement caused by such a
config change?
br
wolfgang
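(Not an authoritative answer, but one way to estimate the impact offline is to
replay the changed CRUSH map against a copy of the osdmap and diff the PG
mappings. File names are arbitrary; that jewel differs from hammer only by
chooseleaf_stable=1 is from memory, so double-check it.)

# ceph osd getmap -o osdmap.cur
# ceph osd getcrushmap -o crush.cur
# crushtool -i crush.cur --set-chooseleaf-stable 1 -o crush.jewel   # hammer profile + chooseleaf_stable = jewel
# cp osdmap.cur osdmap.jewel
# osdmaptool osdmap.jewel --import-crush crush.jewel
# osdmaptool osdmap.cur   --test-map-pgs-dump > pgs.cur
# osdmaptool osdmap.jewel --test-map-pgs-dump > pgs.jewel
# diff pgs.cur pgs.jewel | grep -c '^>'        # rough count of PG mappings that would change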
Hi,
I tried to enable the Ceph balancer on a 12.2.12 cluster and got this:
mgr[balancer] Some osds belong to multiple subtrees: [0, 1, 2, 3, 4, 5, 6, 7,
8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27,
28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 4
ew
problems in older versions; many of them have been fixed in backports.
The upmap balancer is much better than the crush-compat balancer, but
it requires all clients to run Luminous or later.
Paul
--
Paul Emmerich
Looking for help with your Ceph cluster? Contact us at https://croit.io
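(To illustrate the client requirement Paul mentions, a rough sequence for
switching to the upmap balancer; this assumes `ceph features` shows no
pre-Luminous clients.)

# ceph features                                     # verify no pre-luminous clients are connected
# ceph osd set-require-min-compat-client luminous   # required before upmap can be used
# ceph balancer mode upmap
# ceph balancer on
# ceph balancer status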