On Mon, Jan 31, 2022 at 5:58 PM Anmol Arora wrote:
>
> Hi,
> I'm using cephfs as a storage layer for a database.
> I'm seeing the following message in the health warning of ceph-
> ```
> # ceph health detail
> HEALTH_WARN 1 clients failing to respond to capability release
> [WRN] MDS_CLIENT_LATE_R
Hi Chris,
On Thu, Dec 9, 2021 at 10:40 AM Chris Palmer wrote:
>
> Hi
>
> I've just started an upgrade of a test cluster from 16.2.6 -> 16.2.7 and
> immediately hit a problem.
>
> The cluster started as octopus, and has upgraded through to 16.2.6
> without any trouble. It is a conventional deploym
Hello Gregory,
Thanks for your input.
> * Ceph may not have the performance ceiling you're looking for. A
> write IO takes about half a millisecond of CPU time, which used to be
> very fast and is now pretty slow compared to an NVMe device. Crimson
> will reduce this but is not ready for real users
There's a lot going on here. Some things I noticed you should be aware
of in relation to the tests you performed:
* Ceph may not have the performance ceiling you're looking for. A
write IO takes about half a millisecond of CPU time, which used to be
very fast and is now pretty slow compared to an
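To put rough numbers on that (taking the quoted ~0.5 ms of CPU per write IO at face value): one CPU core tops out around 1 / 0.0005 s = 2,000 write IOPS, so a target of, say, 100,000 write IOPS costs on the order of 50 cores of OSD CPU before replication is even counted, whereas a single NVMe device can deliver that many IOPS on its own.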
Hi Arun,
Not sure exactly how things got this way. When you provide "--image
" when bootstrapping that should set the image to be used for
all ceph containers. I've never seen just the bootstrap mgr/mon get a
totally different image. Would be interesting to maybe see the full
bootstrap output here
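In case it helps, a minimal sketch of pinning the image at bootstrap time (the release tag and mon IP below are placeholders, not values from this thread):
```
# Bootstrap with an explicit, pinned release image so the initial
# mon/mgr do not fall back to a default/devel tag.
cephadm bootstrap --image quay.io/ceph/ceph:v16.2.7 --mon-ip 192.0.2.10

# Afterwards, check which image every daemon is actually running
# (the field name may vary slightly between releases).
ceph orch ps --format yaml | grep container_image_name
```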
>
>
> SDS is not just about performance. You want something reliable for
> the next 10(?) years; the more data you have, the more this is going to be an
> issue. For me it is important that organisations like CERN and NASA are using
> it. If you look at this incident with the 'bug of the yea
On Mon, Jan 31, 2022 at 5:07 PM Frank Schilder wrote:
>
> Hi all,
>
> we observed server crashes with these possibly related error messages in the
> log showing up:
>
> Jan 26 10:07:53 sn180 kernel: kernel BUG at include/linux/ceph/decode.h:262!
> Jan 25 23:33:47 sn319 kernel: kernel BUG at inclu
Hello,
As an update: we were able to clear the queue by repeering all PGs
which had outstanding entries in their snaptrim queues. After this
process completed and we confirmed that no PGs remained with non-zero
length queues, we re-enabled our snapshot schedule. Several days have
now passed and
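For anyone searching later, a rough sketch of one way to find and repeer such PGs (the JSON layout of pg dump differs a bit between releases, so the jq path may need adjusting):
```
# List PGs whose snaptrim queue is not empty (snap_trimq_len > 0).
ceph pg dump pgs --format json 2>/dev/null \
  | jq -r '.pg_stats[] | select(.snap_trimq_len > 0) | .pgid'

# Force a re-peer of each affected PG, e.g. for PG 2.1a:
ceph pg repeer 2.1a
```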
Hi All,
How can I change the default behaviour of cephadm to use stable container
images instead of the default latest/devel images?
By default, when we try to bootstrap a cluster and add two additional hosts
after bootstrap has finished, daemons are created with two different container images.
Which are, *quay.io/ceph/
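As a sketch of the usual workaround until the root cause is clear: the image can be pinned cluster-wide so later daemons do not pick up a devel/latest tag (the v16.2.7 tag below is only an example):
```
# Pin the image used for all newly deployed daemons.
ceph config set global container_image quay.io/ceph/ceph:v16.2.7

# Converge the already-deployed daemons onto that same image.
ceph orch upgrade start --image quay.io/ceph/ceph:v16.2.7
ceph orch upgrade status
```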
Hey,
> SDS is not just about performance. You want something reliable for the next
> 10(?) years; the more data you have, the more this is going to be an issue.
> For me it is important that organisations like CERN and NASA are using it.
> If you look at this incident with the 'bug of the year' then
Thanks a lot, guys, for your answers.
One question about OMAP. I see that "after the upgrade, the first time each
OSD starts, it will do a format conversion to improve the accounting for “omap”
data. It may take a few minutes or up to a few hours"
Is there any way to check/control this proc
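One knob that is usually mentioned here, as a sketch (assuming it exists in your target release; the release notes are authoritative): the conversion is tied to the quick-fix-on-mount option, so it can be deferred and run later per OSD.
```
# Defer the omap format conversion when OSDs start after the upgrade.
ceph config set osd bluestore_fsck_quick_fix_on_mount false

# When ready, re-enable it and restart OSDs one by one; the OSD log
# is the place to watch while the conversion runs.
ceph config set osd bluestore_fsck_quick_fix_on_mount true
```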
Hi,
I'm using cephfs as a storage layer for a database.
I'm seeing the following message in the health warning of ceph-
```
# ceph health detail
HEALTH_WARN 1 clients failing to respond to capability release
[WRN] MDS_CLIENT_LATE_RELEASE: 1 clients failing to respond to capability
release
mds.c
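A sketch of how the client holding the caps is usually identified (the MDS name and client id below are placeholders):
```
# Show which client id is failing to release capabilities.
ceph health detail

# Map that client id to a hostname/mount by listing MDS sessions.
ceph tell mds.<mds-name> session ls

# If the client is genuinely stuck (e.g. a dead mount), it can be evicted.
ceph tell mds.<mds-name> client evict id=<client-id>
```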
You can upgrade straight from Mimic to Octopus without a full outage window.
One thing to keep in mind that wasn't spelled out clearly in the docs (I
thought): you will have to bounce all components twice to get the full
benefits of MSGRv2. After you bounce everything in the correct order and
run
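For the archives, a sketch of the msgr2 part once everything is on Octopus (ports shown are the defaults):
```
# Enable the v2 protocol on the monitors after all daemons run Octopus.
ceph mon enable-msgr2

# Each mon should now advertise a v2 (port 3300) address alongside v1 (6789);
# OSDs and clients only pick up v2 addressing after they reconnect, hence
# the second round of restarts.
ceph mon dump
```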
Hey all,
As much as I'm enjoying this discussion, it's completely outside the scope of my
original question:
How do I stop the automatic OSD creation by the Ceph orchestrator?
The problem happens because, when using cinderlib, oVirt uses krbd (not librbd),
and because of this, the kernel
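For reference, the commonly cited way to do this with cephadm is to mark the OSD spec unmanaged, roughly as follows (assuming the all-available-devices spec is what created the OSDs):
```
# See which OSD service specs exist and whether they are managed.
ceph orch ls osd

# Stop the orchestrator from automatically creating OSDs on free devices.
ceph orch apply osd --all-available-devices --unmanaged=true
```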
Dear Cephers,
We are planning the upgrade of our Ceph cluster, version Mimic 13.2.10 (3 Mons,
3 MGRs, 181 OSDs, 2 MDSs, 2 RGWs).
The cluster is healthy and all the pools are running size 3, min_size 2.
This is an old cluster implementation that has been upgraded from Firefly
(there are still a couple OS
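Before the jump from Mimic, a short generic pre-flight sketch that is usually worth running (nothing here is specific to this cluster):
```
# Confirm every daemon really is on the same Mimic build.
ceph versions

# Make sure the cluster is clean and check the current compat requirements.
ceph health detail
ceph osd dump | grep require

# Avoid rebalancing while daemons restart during the upgrade.
ceph osd set noout
```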
On 28.01.22 at 21:09, Manuel Holtgrewe wrote:
What is the overall process of reinstalling (e.g., for going from
Enterprise Linux 7 to 8) and getting my OSDs back afterwards?
- reinstall operating system on system disk
- install cephadm binary
- ... now what? ;-)
You need to add the cephadm S
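Filling in the general shape as a sketch (the hostname is a placeholder, and the osd activate step is only available in recent releases, so check yours):
```
# From a working node: fetch the cluster's cephadm SSH public key and
# install it for root on the reinstalled host.
ceph cephadm get-pub-key > ceph.pub
ssh-copy-id -f -i ceph.pub root@<reinstalled-host>

# Re-add the host to the orchestrator inventory.
ceph orch host add <reinstalled-host>

# Ask cephadm to adopt the existing OSDs it finds on that host
# (if this subcommand is unavailable, activate the OSDs with ceph-volume instead).
ceph cephadm osd activate <reinstalled-host>
```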
Hi,
> On 31 Jan 2022, at 11:38, Marc wrote:
>
> This is incorrect. I am using live migration with Nautilus and stock kernel
> on CentOS7
Marc, I think that you are confusing live migration of virtual machines [1] with
live migration of RBD images [2] inside the cluster (between pools, for
ex
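For context, RBD image live migration [2] is driven from the rbd CLI; a minimal sketch with placeholder pool/image names:
```
# Link the source image to a new target image (e.g. in another pool).
rbd migration prepare sourcepool/image targetpool/image

# Copy the blocks in the background, then finalize once done.
rbd migration execute targetpool/image
rbd migration commit targetpool/image
```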
>
> > On 31 Jan 2022, at 00:53, Nir Soffer wrote:
> >
> > Live migration and snapshots are not available? This is news to me.
> >
>
>
> Welcome to krbd world.
This is incorrect. I am using live migration with Nautilus and stock kernel on
CentOS7