On Thu, May 16, 2019 at 3:55 PM Mark Lehrer wrote:
> > Steps 3-6 are to get the drive lvm volume back
>
> How much longer will we have to deal with LVM? If we can migrate non-LVM
> drives from earlier versions, how about we give ceph-volume the ability to
> create non-LVM OSDs directly?
>
We ar
The default is k+1 (or k if m == 1).
min_size = k is unsafe and should never be set.
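For example, to check a pool and put it back to the safe default (pool name and values are just an example for a k=4, m=2 profile):

    # show the pool's current min_size
    ceph osd pool get ecpool min_size
    # restore the default of k+1, i.e. 5 for a 4+2 profile
    ceph osd pool set ecpool min_size 5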
Paul
--
Paul Emmerich
Looking for help with your Ceph cluster? Contact us at https://croit.io
croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90
On Sun, May 19, 2019 at 11:31 AM Florent wrote:
Check out the log of the primary OSD in that PG to see what happened during
scrubbing.
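Something along these lines (the PG id and log path are just placeholders, adjust for your cluster):

    # find the acting set for the PG; the primary is the first OSD listed
    ceph pg map 2.1a
    # then read that OSD's log on the node it runs on
    less /var/log/ceph/ceph-osd.<id>.log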
--
Paul Emmerich
On Sun, May 19, 2019 at 12:41 AM Jorge wrote:
https://ceph.com/community/new-luminous-erasure-coding-rbd-cephfs/
-----Original Message-----
From: Florent B [mailto:flor...@coppint.com]
Sent: Sunday, 19 May 2019 12:06
To: Paul Emmerich
Cc: Ceph Users
Subject: Re: [ceph-users] Default min_size value for EC pools
Thank you Paul for your answer.
I ended up taking Brett's recommendation and doing a "ceph osd set noscrub"
and "ceph osd set nodeep-scrub", then waiting for the running scrubs to
finish while doing a "ceph -w" to see what it was doing. Eventually, it
reported the following:
2019-05-18 16:08:44.032780 mon.gi-cba-01 [ERR] Health
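For anyone finding this thread later, the flag handling looked like this (the unset calls are simply my assumption of how to turn scrubbing back on once things look healthy again):

    # stop new scrubs from being scheduled
    ceph osd set noscrub
    ceph osd set nodeep-scrub
    # watch cluster events while the in-flight scrubs drain
    ceph -w
    # re-enable scrubbing afterwards
    ceph osd unset noscrub
    ceph osd unset nodeep-scrub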
Dear ceph community members,
We have a ceph cluster (mimic 13.2.4) with 7 nodes and 130+ OSDs. However, we
observed over 70 million active TCP connections on the radosgw host, which
makes the radosgw very unstable.
After further investigation, we found most of the TCP connections on the
radosgw
I found similar behaviour on a Nautilus cluster on Friday: around 300,000
open connections, which I think were the result of a benchmarking run that
had been terminated. I restarted the radosgw service to get rid of them.
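Roughly what I did on the gateway host (the systemd unit name depends on how your rgw instance is named, so treat it as an example):

    # quick summary of open sockets to confirm the connection count
    ss -s
    # restart the gateway to drop the stale connections
    systemctl restart ceph-radosgw@rgw.$(hostname -s).service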
On Mon, 20 May 2019 at 06:56, Li Wang wrote:
> Dear ceph community members,
>
>
Hello,
on a Ceph Nautilus cluster (14.2.1) running on Ubuntu 18.04 I am trying to
set up rbd images with namespaces in order to allow different clients to
access only their "own" rbd images in different namespaces in just one
pool. The rbd image data is in an erasure-coded pool named "ecpool"
and the
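A sketch of the kind of setup I have in mind (client and image names are just examples, and I am not sure yet whether these caps are minimal):

    # create a per-client namespace in the replicated pool holding the image metadata
    rbd namespace create --pool rbd --namespace client1
    # create an image in that namespace whose data goes to the EC pool
    rbd create --pool rbd --namespace client1 --image img1 --size 10G --data-pool ecpool
    # cephx key restricted to that namespace, plus access to the EC data pool
    ceph auth get-or-create client.client1 \
        mon 'profile rbd' \
        osd 'profile rbd pool=rbd namespace=client1, profile rbd pool=ecpool'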