On 03/03/2021 00:55, Lincoln Bryant wrote:
Hi list,
We recently had a cluster outage over the weekend where several OSDs were
inaccessible overnight for several hours. When I found the cluster in the
morning, the monitors' root disks (which contained both the monitor's leveldb
and the Cep
Hi,
if the host to which the grafana-api-url points fails (in the example
below, ceph01.hostxyz.tld:3000), the Ceph Dashboard can't display Grafana data:
# ceph dashboard get-grafana-api-url
https://ceph01.hostxyz.tld:3000
Is it possible to automagically switch to another host?
Thanks, Erich
Hi Norman
On Wed, Mar 3, 2021 at 2:47 AM Norman.Kern wrote:
> James,
>
> Can you tell me the hardware config of your bcache? I use a 400G
> SATA SSD as the cache device and
>
> a 10T HDD as the storage device. Is it hardware related?
>
It might be - all of the deployments I've seen/worked with
On 02/03/2021 16:38, Matthew Vernon wrote:
root@sto-t1-1:~# ceph health detail
HEALTH_WARN 1 pools have many more objects per pg than average; 9 pgs
not deep-scrubbed in time
[WRN] MANY_OBJECTS_PER_PG: 1 pools have many more objects per pg than
average
pool default.rgw.buckets.data object
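(If that pool is genuinely under-split, the usual remedy is raising its
pg_num. A sketch; the target value is a placeholder and should be
sanity-checked against the pg autoscaler first:)
# ceph osd pool set default.rgw.buckets.data pg_num 256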
Slow mon sync can be caused by an overly large mon_sync_max_payload_size. The
default is usually way too high. I had sync problems until I set
mon_sync_max_payload_size = 4096
Since then mon sync is not an issue any more.
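(For reference, a sketch of applying that at runtime via the config database,
assuming Nautilus or later; the value is in bytes:)
# ceph config set mon mon_sync_max_payload_size 4096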
Best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum
On Wed, Mar 3, 2021 at 11:15 AM Stefan Kooman wrote:
>
> On 3/2/21 6:00 PM, Jeff Layton wrote:
>
> >>
> >>>
> >>> v2 support in the kernel is keyed on the ms_mode= mount option, so that
> >>> has to be passed in if you're connecting to a v2 port. Until the mount
> >>> helpers get support for that
Indeed. That is going to be fixed by
https://github.com/ceph/ceph/pull/39633
On 03.03.21 at 07:31, Philip Brown wrote:
> Seems like someone is not testing cephadm on centos 7.9
>
> Just tried installing cephadm from the repo, and ran
> cephadm bootstrap --mon-ip=xxx
>
> it blew up, with
>
>
Hi,
Assuming a cluster (currently octopus, might upgrade to pacific once
released) serving only CephFS and that only to a handful of kernel and
fuse-clients (no OpenStack, CSI or similar): Are there any side effects
of not using the ceph-mgr volumes module abstractions [1], namely
subvolumes
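(For context, a sketch of the abstraction in question; volume and subvolume
names here are made up:)
# ceph fs subvolume create cephfs mysubvol
# ceph fs subvolume getpath cephfs mysubvol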
On 3/2/21 6:00 PM, Jeff Layton wrote:
v2 support in the kernel is keyed on the ms_mode= mount option, so that
has to be passed in if you're connecting to a v2 port. Until the mount
helpers get support for that option you'll need to specify the address
and port manually if you want to use v2.
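(A sketch of such a manual mount; the monitor address, secret and ms_mode
value are placeholders, and ms_mode needs kernel 5.11 or later:)
# mount -t ceph 192.168.0.1:3300:/ /mnt/cephfs -o name=admin,secret=<key>,ms_mode=prefer-crc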
Hi all,
Thanks for the responses.
I stopped the monitor that wasn't syncing and dumped keys with the
monstoretool. The keys seemed to mostly be of type 'logm' which I guess matches
up with the huge amount of log messages I was getting about slow ops. I tried
injecting clog_to_monitor=false alo
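(For anyone reproducing this, a sketch of counting keys per prefix on the
stopped mon's store; the path is a placeholder:)
# ceph-monstore-tool /var/lib/ceph/mon/ceph-a dump-keys | awk '{print $1}' | sort | uniq -c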
Howdy,
After the IBM acquisition of RedHat the landscape for CentOS quickly changed.
As I understand it right now Ceph 14 is the last version that will run on
CentOS/EL7 but CentOS8 was "killed off".
So given that, if you were going to build a Ceph cluster today would you even
bother doing it
I would use croit
From: Drew Weaver
Date: Wednesday, March 3, 2021 at 7:45 AM
To: 'ceph-users@ceph.io'
Subject: [ceph-users] Questions RE: Ceph/CentOS/IBM
Howdy,
After the IBM acquisition of RedHat the landscape for CentOS quickly changed.
As I understand it right now Ceph 14 is the last versi
Hi,
You can get support for running Ceph on a number of distributions - RH
support both RHEL and Ubuntu, Canonical support Ubuntu, the smaller
consultancies seem happy to support anything plausible (e.g. Debian),
this mailing list will opine regardless of what distro you're running ;-)
Regar
Hi,
I guess you can use a load balancer like HAProxy + keepalived to make the
API highly available and point the dashboard to the VIP. Of course, you need
to deploy more than one Grafana instance.
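(A minimal, untested haproxy.cfg sketch along those lines; hostnames, port
and certificate path are placeholders, and Grafana's /api/health endpoint
serves as the health check:)

frontend grafana_fe
    bind *:3000 ssl crt /etc/haproxy/grafana.pem
    default_backend grafana_be

backend grafana_be
    balance roundrobin
    option httpchk GET /api/health
    server ceph01 ceph01.hostxyz.tld:3000 check ssl verify none
    server ceph02 ceph02.hostxyz.tld:3000 check ssl verify none

Then point the dashboard at the VIP:
# ceph dashboard set-grafana-api-url https://<vip>:3000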
Thanks,
Vladimir
On Wed, Mar 3, 2021 at 5:07 AM E Taka <0eta...@gmail.com> wrote:
> Hi,
>
> if the ho
On Wed, Mar 3, 2021 at 7:45 AM Drew Weaver wrote:
> Howdy,
>
> After the IBM acquisition of RedHat the landscape for CentOS quickly
> changed.
>
> As I understand it right now Ceph 14 is the last version that will run on
> CentOS/EL7 but CentOS8 was "killed off".
>
> So given that, if you were go
On 3/3/21 1:16 PM, Ilya Dryomov wrote:
I have tested with a 5.11 kernel (5.11.2-arch1-1 #1 SMP PREEMPT Fri, 26
Feb 2021 18:26:41 +0000 x86_64 GNU/Linux), port 3300 and ms_mode=crc as
well as ms_mode=prefer-crc, and that works when the cluster is running with
ms_bind_ipv4=false. So the "fix" is to have ms_bind_ipv4=false.
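(On an IPv6-only cluster that would look roughly like this, assuming the
central config database is in use:)
# ceph config set global ms_bind_ipv6 true
# ceph config set global ms_bind_ipv4 false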
On Wed, Mar 3, 2021 at 20:45, Drew Weaver wrote:
>
> Howdy,
>
> After the IBM acquisition of RedHat the landscape for CentOS quickly changed.
>
> As I understand it right now Ceph 14 is the last version that will run on
> CentOS/EL7 but CentOS8 was "killed off".
This is wrong. Ceph 15 runs on CentOS 7 just fine, but without the dashboard.
> > Secondly, are we expecting IBM to "kill off" Ceph as well?
> >
> Stop spreading rumors! really! one can take it further and say kill
> product
> x, y, z until none exist!
>
This is natural / logical thinking; the only one to blame here is IBM/Red Hat.
If you have no regard for maintaining the r
+1
On 3.3.2021 at 11:37, Marc wrote:
Secondly, are we expecting IBM to "kill off" Ceph as well?
Stop spreading rumors! really! one can take it further and say kill
product
x, y, z until none exist!
This is natural / logical thinking; the only one to blame here is IBM/Red Hat.
If you have no
> This is wrong. Ceph 15 runs on CentOS 7 just fine, but without the
> dashboard.
>
I also hope that Ceph keeps supporting EL7 until it is EOL in 2024, so I
have enough time to figure out which OS to choose.
I’m at something of a loss to understand all the panic here.
Unless I've misinterpreted, CentOS isn't being killed; it's being updated more
frequently. Want something stable? Freeze a repository into a local copy and
deploy off of that. Like we all should be doing anyway, vs. relying on
slurping
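(A sketch of such a freeze on EL with yum-utils; repo id and paths are
placeholders:)
# reposync -r ceph-stable -p /srv/mirror
# createrepo /srv/mirror/ceph-stable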
On Wed, Mar 3, 2021 at 8:46 AM Marc wrote:
> > > Secondly, are we expecting IBM to "kill off" Ceph as well?
> > >
> > Stop spreading rumors! really! one can take it further and say kill
> > product
> > x, y, z until none exist!
> >
>
> This is natural / logical thinking; the only one to blame here i
Just go for CentOS Stream; it will be at least as stable as CentOS and
probably even more so.
CentOS Stream is just the next minor version of the current RHEL minor
release, which means it already contains fixes not yet released for RHEL but
available for CentOS Stream. It is not as if CentOS Stream would be a
Hi Matthew,
Starting with Ceph 4, RH only supports RHEL 7.x & 8.1; Ubuntu support has
been deprecated.
Regards
On Wed, Mar 3, 2021 at 5:19 PM Matthew Vernon wrote:
> Hi,
>
> You can get support for running Ceph on a number of distributions - RH
> support both RHEL and Ubuntu, Canonical suppor
> As I understand it right now Ceph 14 is the last version that will run on
> CentOS/EL7 but CentOS8 was "killed off".
>This is wrong. Ceph 15 runs on CentOS 7 just fine, but without the dashboard.
Oh, what I should have said is that I want it to be fully functional.
On Wed, Mar 3, 2021 at 5:49 AM Sebastian Knust
wrote:
>
> Hi,
>
> Assuming a cluster (currently octopus, might upgrade to pacific once
> released) serving only CephFS and that only to a handful of kernel and
> fuse-clients (no OpenStack, CSI or similar): Are there any side effects
> of not using t
On 3/3/21 10:45 AM, Drew Weaver wrote:
> Howdy,
>
> After the IBM acquisition of RedHat the landscape for CentOS quickly changed.
>
> As I understand it right now Ceph 14 is the last version that will run on
> CentOS/EL7 but CentOS8 was "killed off".
>
> So given that, if you were going to buil
Hi Matthew,
my colleagues and I can still remember that the values do not change
automatically when you upgrade.
I remember performance problems after an upgrade with old tunables a few
years ago.
But such behaviour may change with the next version.
Meanwhile you get a warning in ceph status
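(A sketch for inspecting and, if needed, updating them; note that switching
tunables profiles can trigger a large rebalance:)
# ceph osd crush show-tunables
# ceph osd crush tunables optimal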
On 3/3/21 10:37 AM, Marc wrote:
Secondly, are we expecting IBM to "kill off" Ceph as well?
Stop spreading rumors! really! one can take it further and say kill
product
x, y, z until none exist!
This is natural / logical thinking; the only one to blame here is IBM/Red Hat.
If you have no regard
On 3/3/21 1:16 PM, Ilya Dryomov wrote:
Sure. You are correct that the kernel client needs a bit a work as we
haven't considered dual stack configurations there at all.
https://tracker.ceph.com/issues/49581
Gr. Stefan
On 3/3/21 1:16 PM, Ilya Dryomov wrote:
And from this documentation:
https://docs.ceph.com/en/latest/rados/configuration/network-config-ref/#ipv4-ipv6-dual-stack-mode
we learned that dual stack is not possible for any current stable
release, but might be possible with latest code. So the takeawa
Unfortunately, nothing like this exists in RADOS. It can't really --
scaling is inimical to the sort of data collation you seem to be
looking for. If you use librados, you need to maintain all your own
metadata. RGW has done a lot of work to support these features;
depending on what you need you ma
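(To illustrate with the rados CLI: anything beyond the object name and data
is the application's job. Pool, object and attribute names here are made up:)
# rados -p mypool put myobj ./payload.bin
# rados -p mypool setxattr myobj owner alice
# rados -p mypool setomapval myobj mtime "$(date +%s)"
There is no cross-object index of that metadata; that collation is exactly
the sort of thing RGW builds on top.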
Hello Matthew,
I agree with you.
We have been running Ceph clusters on Debian, CentOS, SUSE Linux
Enterprise, Red Hat, openSUSE, Garden Linux and whatever is LSB compliant
for the last 9 years.
I think the trend towards containers further decouples it from Linux
distributions.
Regards, Joac
I have been told that Rocky Linux is a fork of CentOS that will be what
CentOS used to be before this all happened. I'm not sure how that figures
in here, but it's worth knowing.
-Dave
--
Dave Hall
Binghamton University
kdh...@binghamton.edu
On Wed, Mar 3, 2021 at 12:41 PM Drew Weaver wrote
On Wed, Mar 3, 2021 at 9:20 AM Teoman Onay wrote:
> Just go for CentOS stream it will be at least as stable as CentOS and
> probably even more.
>
> CentOS Stream is just the next minor version of the current RHEL minor
> which means it already contains fixes not yet released for RHEL but
> availa
But are you then using a 4.x kernel with CentOS 7?
Istvan Szabo
Senior Infrastructure Engineer
---
Agoda Services Co., Ltd.
e: istvan.sz...@agoda.com
---
-----Original Message-----
From: Marc
Sent: Wednes