Hi everyone,
This month's Ceph User + Dev Monthly meetup is on December 16, 2021,
15:00-16:00 UTC. Please add topics you'd like to discuss in the agenda
here: https://pad.ceph.com/p/ceph-user-dev-monthly-minutes.
Hope to see you there!
Thanks,
Neha
All done. I restarted one MON after removing the sanity option just to
be sure, and it's fine.
Thanks again for your help.
Chris
On 09/12/2021 18:38, Dan van der Ster wrote:
Hi,
Good to know, thanks.
Yes, you need to restart a daemon to undo a change applied via ceph.conf.
You can check exac
In case anyone is interested: I hacked up some more Perl code to parse
the tree output of crushtool, so it uses the actual info from the new
crushmap instead of the production info from Ceph itself.
See: https://gist.github.com/pooh22/53960df4744efd9d7e0261ff92e7e8f4
Cheers
/Simon
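For anyone who wants to feed it real data, a minimal sketch of getting that tree output from a crushmap (the file name is just an example):
ceph osd getcrushmap -o crushmap.bin   # extract the compiled crushmap from the cluster
crushtool -i crushmap.bin --tree       # print the hierarchy that the script parses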
On 02/12/2021
Hi Greg,
Thanks very much for the reply. The image is also available at:
https://tracker.ceph.com/attachments/download/5808/Bytes_per_op.png
How the graph is generated: we back the CephFS metadata pool with Azure
Ultra SSD disks. Azure reports, per disk and per minute, the average
read/write IOPS
Hi,
Good to know, thanks.
Yes, you need to restart a daemon to undo a change applied via ceph.conf.
You can check exactly which config is currently used and where the
setting comes from using (running directly on the mon host):
ceph daemon mon.`hostname -s` config diff
The mons which had the s
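For completeness, a couple of ways to check where a setting comes from (using the option discussed in this thread; the mon name is whatever your local mon id is):
ceph daemon mon.$(hostname -s) config diff                     # run on the mon host: non-default settings and their source
ceph config get mon mon_mds_skip_sanity                        # what the central config database holds
ceph config show mon.$(hostname -s) mon_mds_skip_sanity        # what the running daemon actually uses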
Hi
Yes, using ceph config is working fine for the rest of the nodes.
Do you know if it is necessary/advisable to restart the MONs after
removing the mon_mds_skip_sanity setting when the upgrade is complete?
Thanks, Chris
On 09/12/2021 17:51, Dan van der Ster wrote:
Hi,
On Thu, Dec 9, 2021
Hi,
On Thu, Dec 9, 2021 at 6:44 PM Chris Palmer wrote:
>
> Hi Dan & Patrick
>
> Setting that to true using "ceph config" didn't seem to work. I then
> deleted it from there and set it in ceph.conf on node1 and eventually
> after a reboot it started ok. I don't know for sure whether it failing
> u
Hi Dan & Patrick
Setting that to true using "ceph config" didn't seem to work. I then
deleted it from there and set it in ceph.conf on node1 and eventually
after a reboot it started OK. I don't know for sure whether its failure
via ceph config was real or just a symptom of something else.
I
Re-reading my mail, it may not have been clear that I reinstalled the OS of
a node with OSDs.
On Thu, 2021-12-09 at 18:10 +0100, bbk wrote:
> Hi,
>
> the last time I reinstalled a node with OSDs, I added the disks
> with the following command. But unfortunately this time I ran into an
> error.
>
>
On Thu, Dec 9, 2021 at 5:40 PM Patrick Donnelly wrote:
>
> Hi Chris,
>
> On Thu, Dec 9, 2021 at 10:40 AM Chris Palmer wrote:
> >
> > Hi
> >
> > I've just started an upgrade of a test cluster from 16.2.6 -> 16.2.7 and
> > immediately hit a problem.
> >
> > The cluster started as octopus, and has u
Hi,
the last time I reinstalled a node with OSDs, I added the disks with the
following command. But unfortunately this time I ran into an error.
It seems like this time the command doesn't create the container. I am able to
run `cephadm shell`, and the other daemons (mon, mgr, mds) are running.
I
On 09.12.21 at 16:25, Marco Pizzolo wrote:
I would be interested to know if anyone else has contemplated or performed
something similar, and what their findings were.
I have done this on a test cluster running Ubuntu 20.04 and it kind of
just worked. Sometimes I had to stop the docker contai
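A rough sketch of the general approach, not a verified procedure (it assumes Podman is already installed and that cephadm regenerates the unit files when a daemon is redeployed; the daemon name is an example):
cephadm check-host                     # reports which container engine cephadm detects on this host
ceph orch ps                           # list the daemons so they can be redeployed one by one
ceph orch daemon redeploy mon.host1    # hypothetical daemon name; repeat for each daemon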
Hi all,
The release notes are missing an upgrade step that is needed only for
clusters *not* managed by cephadm.
This was noticed in
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/7KSPSUE4VO274H5XQYNFCT7HKWT75BCY/
If you are not using cephadm, you must disable FSMap sanity checks
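Based on what worked in this thread, a sketch of the workaround (the ceph.conf section and the mon unit name depend on your deployment):
# in /etc/ceph/ceph.conf on each mon host, before upgrading the mons:
[mon]
    mon_mds_skip_sanity = true
# once the whole cluster is on 16.2.7, remove that line again and restart each mon:
systemctl restart ceph-mon@$(hostname -s)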
Hi Dan
Here it is
Thanks, Chris
root@tstmon01:/var/log/ceph# ceph fs dump
e254
enable_multiple, ever_enabled_multiple: 0,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable
ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses
v
Hi Chris,
On Thu, Dec 9, 2021 at 10:40 AM Chris Palmer wrote:
>
> Hi
>
> I've just started an upgrade of a test cluster from 16.2.6 -> 16.2.7 and
> immediately hit a problem.
>
> The cluster started as octopus, and has upgraded through to 16.2.6
> without any trouble. It is a conventional deploym
Hi,
This is clearly not expected. I pinged cephfs devs on IRC.
Could you please share output of `ceph fs dump`
-- dan
On Thu, Dec 9, 2021 at 4:40 PM Chris Palmer wrote:
>
> Hi
>
> I've just started an upgrade of a test cluster from 16.2.6 -> 16.2.7 and
> immediately hit a problem.
>
> The clus
From what I can gather, this will not be smooth at all, since I can't
do an in-place upgrade of the
OS first and then Ceph, nor the other way around.
I think it would be easier to upgrade one node at a time from CentOS 7 to ... +
Nautilus, and when that is done, do the upgrade to Pacific.
My
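If it helps, the usual precautions for the node-at-a-time approach (generic Ceph practice, not specific to this cluster):
ceph osd set noout     # stop CRUSH from rebalancing while the node is being reinstalled
# ...reinstall the OS, bring the node's OSDs back up...
ceph osd unset noout
ceph versions          # confirm all daemons run the same release before the next jump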
Hi
I've just started an upgrade of a test cluster from 16.2.6 -> 16.2.7 and
immediately hit a problem.
The cluster started as octopus, and has upgraded through to 16.2.6
without any trouble. It is a conventional deployment on Debian 10, NOT
using cephadm. All was clean before the upgrade. It
Hello Everyone,
In an attempt to future-proof, I am beginning to look for information on how
one would go about moving to Podman from Docker on a cephadm 16.2.6
installation on Ubuntu 20.04.3.
I would be interested to know if anyone else has contemplated or performed
something similar, and what th
Hi,
Thank you so much for your kind information. We will review the setting.
One thing: if we want to use SSD with replica size=2,
since the failure domain is host, it should ensure the two replicas are on two
different hosts.
Is there any drawback?
Regards,
Munna
On Thu, 9 Dec 2021, 20:35 Stefan Kooman, wr
Sorry, please see below:
cephadm shell
ceph status -> does not respond
ceph-volume lvm activate --all
root@ceph01 /usr/bin # cephadm shell
Inferring fsid 7131bb42-7f7a-11eb-9b5e-0c9d92c47572
Inferring config
/var/lib/ceph/7131bb42-7f7a-11eb-9b5e-0c9d92c47572/mon.ceph01/config
Using recent ceph imag
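For anyone following this, a few checks that might narrow it down (the fsid and hostname are taken from the output above):
cephadm ls                                                              # daemons cephadm knows about on this host and their state
systemctl status ceph-7131bb42-7f7a-11eb-9b5e-0c9d92c47572@mon.ceph01   # is the mon actually running?
ceph status --connect-timeout 10                                        # fail fast instead of hanging if there is no quorum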
Andras,
Unfortunately your attachment didn't come through the list. (It might
work if you embed it inline? Not sure.) I don't know if anybody's
looked too hard at this before, and without the image I don't know
exactly what metric you're using to say something's 320KB in size. Can
you explain more
Thanks, Boris, for your prompt response and assistance.
We didn't set this cluster up. Most probably ceph-volume was not used,
because we only have ceph-admin; neither "ceph" nor "ceph-volume" is
available (at least they don't come with auto/bash completion). Probably
everything was done through the GUI aft
Hi Soan,
does `ceph status` work?
Did you use ceph-volume to initially create the OSDs (we only use this tool
and create LVM OSDs)? If yes, you might bring the OSDs back up with
`ceph-volume lvm activate --all`
Cheers
Boris
On Thu, Dec 9, 2021 at 1:48 PM Mini Serve wrote:
> Hi,
> We had
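To expand slightly on that suggestion, assuming the OSDs really are LVM OSDs (the first command will confirm that):
ceph-volume lvm list             # shows which LVs carry OSD data and which OSD ids they belong to
ceph-volume lvm activate --all   # recreates the tmpfs mounts and starts the matching ceph-osd services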
Hi,
We had a 3-node Ceph cluster installation.
One of them, node-3, had a system failure (OS boot disk failure), so the OS was
reinstalled. The other physical drives, where the OSDs are, are just fine. We also
installed Ceph on node-3 and copied the SSH keys to node 3 and vice versa.
The GUI does not respond. In master n
Hi,
This is the ceph.conf used during the cluster deploy. The Ceph version is Mimic.
osd pool default size = 3
osd pool default min size = 1
osd pool default pg num = 1024
osd pool default pgp num = 1024
osd crush chooseleaf type = 1
mon_max_pg_per_osd = 2048
mon_allow_pool_delete = true
mon_pg_warn_min_per_o
On Thu, Dec 9, 2021 at 9:31 AM Md. Hejbul Tawhid MUNNA wrote:
> Yes, min_size=1 and size=2 for ssd
>
> for hdd it is min_size=1 and size=3
>
> Could you please advise about using HDD and SSD in the same Ceph cluster? Is
> it okay for production-grade OpenStack?
Mixing SSD and HDD in production is f
Hi,
Yes, min_size=1 and size=2 for ssd
for hdd it is min_size=1 and size=3
Could you please advise about using HDD and SSD in the same Ceph cluster? Is
it okay for production-grade OpenStack?
We have created a new replicated rule for ssd, a different pool for ssd, and
new disks marked with the ssd class.
no
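For reference, a sketch of that kind of setup with hypothetical rule and pool names, using the commonly recommended size=3/min_size=2 rather than size=2/min_size=1:
ceph osd crush rule create-replicated replicated_ssd default host ssd   # rule restricted to the ssd device class, failure domain host
ceph osd pool create cinder-ssd 256 256 replicated replicated_ssd       # hypothetical pool name and pg count
ceph osd pool set cinder-ssd size 3
ceph osd pool set cinder-ssd min_size 2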
On Thu, Dec 9, 2021 at 3:12 AM Md. Hejbul Tawhid MUNNA wrote:
>
> Hi,
>
> Yes, we have added new osd. Previously we had only one type disk, hdd. now
> we have added ssd disk separate them with replicated_rule and device class
>
> ID CLASS WEIGHT REWEIGHT SIZE USE AVAIL %USE VAR PGS
> 0