ceph version: 17.2.0 on Ubuntu 22.04
non-containerized ceph from Ubuntu repos
cluster started on luminous
I have been using bcache on filestore on rotating disks for many years
without problems. Now that I am converting the OSDs to bluestore, I am seeing
some strange effects.
If I cr
Thanks Adam,
'ceph mgr fail' didn't end up working for me, but it did lead me down the
path to getting it working. It looks like one of the managers was borked
somehow, although it wasn't the manager that looked to have a stray
host; it was the other one. And there also seems to be an issue with
ru
I know there's a bug where, when downsizing by multiple mons at once through
cephadm, this ghost stray mon daemon thing can end up happening (I think
something about cephadm removing them too quickly in succession, not
totally sure). In those cases, just doing a mgr failover ("ceph mgr fail")
always
Hi All,
I'm getting this error while setting up a ceph cluster. I'm relatively new to
ceph, so there is no telling what kind of mistakes I've been making. I'm using
cephadm, ceph v16 and I apparently have a stray daemon. But it also doesn't
seem to exist and I can't get ceph to forget about it.
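For reference, the kind of commands involved (a rough sketch, assuming a
cephadm-managed cluster; no host or daemon names filled in):

  # show which daemon cephadm considers stray
  ceph health detail
  # force cephadm to refresh its daemon inventory
  ceph orch ps --refresh
  # fail over to the standby mgr so its cached state gets rebuilt
  ceph mgr fail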
Cool, thanks! After I set this config, where can I find the RBD logs?
On Wed, Feb 1, 2023 at 8:51 AM Anthony D'Atri
wrote:
> When the client is libvirt/librbd/QEMU virtualization, IIRC one must set
> these values in the hypervisor’s ceph.conf
>
> > On Feb 1, 2023, at 11:05, Ruidong Gao wrote:
>
distro testing for reef
* https://github.com/ceph/ceph/pull/49443 adds centos9 and ubuntu22 to
supported distros
* centos9 blocked by teuthology bug https://tracker.ceph.com/issues/58491
- lsb_release command no longer exists, use /etc/os-release instead (see the snippet after this list)
- ceph stopped depending on lsb_release
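A minimal sketch of the replacement lookup (field names per os-release(5)):

  # instead of: lsb_release -is; lsb_release -rs
  . /etc/os-release
  echo "$ID $VERSION_ID"   # e.g. "centos 9" or "ubuntu 22.04"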
Do the counters need to be moved under a separate key? That would
break anything today that currently tries to parse them. We have quite
a bit of internal monitoring that relies on "perf dump" output, but
it's mostly not output that I would expect to gain labels in general
(e.g. bluestore stats).
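For context, the parsing we do is essentially of this shape (just a sketch;
osd.0 and the jq filter are placeholder examples):

  # current perf dump layout: top-level sections keyed by subsystem
  ceph daemon osd.0 perf dump | jq '.bluestore'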
We successfully did ceph-deploy+octopus+centos7 -> (ceph-deploy
unsupported)+octopus+centos8stream (using leap) -> (ceph-deploy
unsupported)+pacific+centos8stream -> cephadm+pacific+centos8stream
Everything was done in place. Leap was tested repeatedly until the procedure/side effects
were very well known
When the client is libvirt/librbd/QEMU virtualization, IIRC one must set these
values in the hypervisor’s ceph.conf
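Roughly along these lines (a sketch; the exact log path is just an example):

  [client]
      debug rbd = 20
      log file = /var/log/ceph/qemu-rbd.$pid.log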
> On Feb 1, 2023, at 11:05, Ruidong Gao wrote:
>
> Hi,
>
> You can use an environment variable to set the log level to what you want, as below:
> bash-4.4$ export CEPH_ARGS="--debug-rbd=
OK, attachments won't work.
See this:
https://filebin.net/t0p7f1agx5h6bdje
Best
Ken
On 01.02.23 17:22, mailing-lists wrote:
I've pulled a few lines from the log and I've attached them to this
mail. (I hope this works for this mailing list?)
I found line 135:
[2023-01-26 16:25:00,785][
I've pulled a few lines from the log and I've attached them to this
mail. (I hope this works for this mailing list?)
I found line 135:
[2023-01-26 16:25:00,785][ceph_volume.process][INFO ] stdout
ceph.block_device=/dev/ceph-808efc2a-54fd-47cc-90e2-c5cc96bdd825/osd-block-2a1d1bf0-300e-4160
Hi,
You can use an environment variable to set the log level to what you want, as below:
bash-4.4$ export CEPH_ARGS="--debug-rbd=20 --debug-rgw=20"
The following client calls will take it.
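For example (a sketch; pool and image names are placeholders):

  bash-4.4$ rbd ls mypool             # runs with debug-rbd=20 applied
  bash-4.4$ rbd info mypool/myimage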
Ben
> On Feb 1, 2023, at 07:10, Jinhao Hu wrote:
>
> Hi,
>
> How can I collect the logs of the RBD client?
>
Hi
I hope this email finds you well. I am reaching out because I have
encountered an issue with my Ceph BlueStore cluster and I am seeking your
assistance.
I have a cluster with approximately one billion objects, and when I run a PG
query, it shows that I have 27,000 objects per PG.
I have ru
This is an email thread for reporting documentation concerns.
I won't guarantee that our documentation team will drop everything and
immediately service your documentation request, but I do guarantee that
your request will go into the queue of things we will eventually tackle.
https://docs.ceph.c
Hi to all!
We are running a Ceph cluster (Octopus) on (99%) CentOS 7 (deployed at
the time with ceph-deploy) and we would like to upgrade it. As far as I
know for Pacific (and later releases) there aren't packages for CentOS 7
distribution (at least not on download.ceph.com), so we need to upg
Any chance you can share the ceph-volume.log (from the corresponding host)?
It should be in /var/log/ceph//ceph-volume.log. Note that
there might be several log files (log rotation). Ideally, the one that
includes the recreation steps.
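Something like this should narrow it down (a sketch; adjust the path component
between the slashes to whatever your deployment uses):

  # newest first, including rotated copies
  ls -lt /var/log/ceph/*/ceph-volume.log*
  # find the run that created the OSD in question
  grep 'osd-block' /var/log/ceph/*/ceph-volume.log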
Thanks,
On Wed, 1 Feb 2023 at 10:13, mailing-lists wrote:
>
Ah, nice.
service_type: osd
service_id: dashboard-admin-1661788934732
service_name: osd.dashboard-admin-1661788934732
placement:
  host_pattern: '*'
spec:
  data_devices:
    model: MG08SCA16TEY
  db_devices:
    model: Dell Ent NVMe AGN MU AIC 6.4TB
  filter_logic: AND
  objectstore: bluestore
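If it helps, a quick way to preview what a spec like that would match before
cephadm acts on it (a sketch; osd-spec.yaml is just a placeholder filename):

  # dry-run the spec to see which hosts/disks it would pick up
  ceph orch apply -i osd-spec.yaml --dry-run
  # compare with the spec cephadm currently has stored
  ceph orch ls osd --export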