attached,
nid001388:~ # ceph auth get client.noir
2022-05-25T09:20:00.731+0200 7f81f63f3700 -1 auth: unable to find a keyring on
/etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,:
(2) No such file or directory
2022-05-25T09:20:00.731+0200 7f81f6
also,
nid001388:~ # ceph -n client.noir auth get client.noir
Error EACCES: access denied
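A rough sketch of the two sides of this: checking the client's caps from a
node that has the admin keyring, and pointing the client-side tools at
client.noir's own keyring (the keyring path and pool name below are
assumptions):

  # on an admin node (with the admin keyring): confirm client.noir's caps
  ceph auth get client.noir

  # on the client node: name the user and its keyring explicitly,
  # instead of letting the tools look for the admin keyring
  rbd --id noir --keyring /etc/ceph/ceph.client.noir.keyring ls noir-pool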
From: Ilya Dryomov
Sent: Tuesday, May 24, 2022 8:45:23 PM
To: Sopena Ballesteros Manuel
Cc: ceph-users@ceph.io
Subject: Re: [ceph-users] rbd command hangs
On Tue, May 24,
Hi,
first, you can bootstrap a cluster by providing the container image
path in the bootstrap command like this:
cephadm --image **:5000/ceph/ceph bootstrap --mon-ip **
Check out the docs for an isolated environment [1]; I don't think it's
a good idea to change the runtime the way you did.
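For instance, a bootstrap against a private registry might look roughly like
this (registry host, monitor IP, image tag and credentials are placeholders):

  cephadm --image registry.local:5000/ceph/ceph:v17.2.0 bootstrap \
      --mon-ip 192.0.2.10 \
      --registry-url registry.local:5000 \
      --registry-username myuser \
      --registry-password mypass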
Hello,
just found that this "feature" is not restricted to upgrades - I just tried to bootstrap an entirely new cluster with Quincy, also with the fatal
switch to a non-root user: adding the second mon results in
> Unable to write lxmon1:/etc/ceph/ceph.conf: scp: /tmp/etc/ceph/ceph.conf.new: Permission denied
On Wed, May 25, 2022 at 9:21 AM Sopena Ballesteros Manuel wrote:
>
> attached,
>
>
> nid001388:~ # ceph auth get client.noir
> 2022-05-25T09:20:00.731+0200 7f81f63f3700 -1 auth: unable to find a keyring
> on
> /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph
Hi Cephers,
These are the topics discussed in today's meeting:
- *Change in the release process*
  - Patrick suggested version bump PRs instead of the current commit-push approach
  - Commits are not signed
  - Avoids freezing the branch during hotfixes
  - Both for hotfixes and regular dot releases
I have a small 4-host ceph cluster. Recently, after rebooting one of the
hosts, all of its daemons come back up smoothly EXCEPT the OSDs.
All of the OSDs have identical journal entries, as below.
'ceph-bluestore-tool fsck' fails with:
2022-05-25T17:27:02.208+ 7f45150a70c0 -1 bluefs _check_n
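For reference, fsck/repair run against the OSD's data directory with the
daemon stopped; on a cephadm-managed cluster that is roughly (osd.0 is
assumed here):

  # open a shell with osd.0's config, keyring and data directory available
  cephadm shell --name osd.0

  # inside the shell, with the OSD stopped:
  ceph-bluestore-tool fsck --path /var/lib/ceph/osd/ceph-0
  ceph-bluestore-tool repair --path /var/lib/ceph/osd/ceph-0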
ceph health detail says my 5-node cluster is healthy, yet when I ran
ceph orch upgrade start --ceph-version 16.2.7 everything seemed to go
fine until we got to the OSD section. Now, for the past hour, every 15
seconds a new log entry of 'Upgrade: unsafe to stop osd(s) at this time
(1 PGs are o
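A few commands that may help narrow down why the upgrade considers the OSDs
unsafe to stop (osd.0 is only an example):

  ceph orch upgrade status   # which daemons have been upgraded so far
  ceph osd ok-to-stop 0      # would stopping osd.0 take PGs offline?
  ceph pg dump_stuck         # list stuck PGs, if any
  ceph orch upgrade stop     # abort the upgrade entirely if you need to investigate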
Sorry to be insistent and ask again, but is anyone else facing issues
deploying iscsi-gateway daemons through cephadm?
I'm still having issues trying to deploy iscsi-gateways, with no success,
because the service containers fail to load, ending up with the error:
"Error: No such object".
Any help is appreciated.
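For comparison, a minimal cephadm iscsi service spec looks roughly like this
(hosts, pool name and API credentials below are placeholders), applied with
"ceph orch apply -i iscsi.yaml":

  service_type: iscsi
  service_id: gw
  placement:
    hosts:
      - host1
      - host2
  spec:
    pool: iscsi-pool
    api_user: admin
    api_password: secret
    trusted_ip_list: "192.0.2.11,192.0.2.12"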
Do you have any pools with only one replica?
Tim
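A quick way to check the replication settings of every pool:

  ceph osd pool ls detail    # look at the "replicated size" / "min_size" fields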
On 5/25/22, 1:48 PM, "Sarunas Burdulis" wrote:
> ceph health detail says my 5-node cluster is healthy, yet when I ran
> ceph orch upgrade start --ceph-version 16.2.7 everything seemed to go
> fine until we got to the OSD section, n
I was successfully able to get a 'main' build completed.
This means you should be able to push your branches to ceph-ci.git and
get a build now.
Thank you for your patience.
On 5/24/22 18:30, David Galloway wrote:
This maintenance is ongoing. This was a much larger effort than
anticipated.
Great, thanks for all the hard work, David and team!
- Neha
On Wed, May 25, 2022 at 12:47 PM David Galloway wrote:
>
> I was successfully able to get a 'main' build completed.
>
> This means you should be able to push your branches to ceph-ci.git and
> get a build now.
>
> Thank you for your pat
Hello,
Silly question, but have you created the pool which will be used by the
gateway?
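If it does not exist yet, something along these lines should do ("iscsi-pool"
is just an example name; it has to match whatever the iscsi spec references):

  ceph osd pool create iscsi-pool
  ceph osd pool application enable iscsi-pool rbd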
On Wed, 25 May 2022, 21:25 Heiner Hardt wrote:
> Sorry to be insistent and ask again, but is anyone else facing issues
> deploying iscsi-gateway daemons through cephadm?
>
> I'm still having issues try
Ok, I'm not sure if I'm missing something or if this is a gap in ceph orch
functionality, or what:
On a given host all the OSDs share a single large NVMe drive for DB/WAL storage
and were set up using a simple ceph orch spec file. I'm replacing some of the
OSDs. After they've been removed wit
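A spec of the kind described above typically looks something like this
(service_id, placement and device filters are assumptions):

  service_type: osd
  service_id: osd_shared_nvme_db
  placement:
    host_pattern: '*'
  spec:
    data_devices:
      rotational: 1
    db_devices:
      rotational: 0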
On 25/05/2022 15.39, Tim Olow wrote:
Do you have any pools with only one replica?
All pools are 'replicated size' 2 or 3, 'min_size' 1 or 2.
--
Sarunas Burdulis
Dartmouth Mathematics
https://math.dartmouth.edu/~sarunas
· https://useplaintext.email ·
In your example, you can log in to the server in question with the OSD and
run "ceph-volume lvm zap --osd-id <osd-id> --destroy"; it will purge the
DB/WAL LV. You don't need to reapply your OSD spec: cephadm will detect the
available space on the NVMe and redeploy that OSD.
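As a concrete sketch of that workflow (osd id 12 is made up; run the zap on
the OSD's host, inside "cephadm shell" if the cluster is containerized):

  # drain and remove the OSD, keeping its id reserved for the replacement
  ceph orch osd rm 12 --replace

  # once it is gone, destroy the leftover DB/WAL LV on the host
  ceph-volume lvm zap --osd-id 12 --destroy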
On Wed, May 25, 2022 at 3:37 PM Ed
That did it, thanks!
It seems like something that should be better documented and/or handled
automatically when replacing drives.
And yeah, I know I don’t have to reapply my OSD spec, but doing so can be
faster than waiting for the cluster to get around to it.
Thanks again.
From: David Orman