Hey all,
I believe I have found the issue. You can follow updates on
https://tracker.ceph.com/issues/68657.
Thanks,
Laura
On Fri, Oct 25, 2024 at 9:29 AM Kristaps Čudars
wrote:
> Experiencing the same problem.
> Disabling balancer helps.
I enabled debug logging with `ceph config set mgr
mgr/cephadm/log_to_cluster_level debug` and viewed the logs with `ceph -W
cephadm --watch-debug`. I can see the orchestrator refreshing the device list,
and this is reflected in the `ceph-volume.log` file on the target OSD nodes.
When I restart
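For reference, a minimal sketch of the debug-logging steps described above; the log path on the OSD node is an assumption (it differs between package and cephadm deployments, where <fsid> is the cluster fsid):

  # enable cephadm debug logging in the cluster log
  ceph config set mgr mgr/cephadm/log_to_cluster_level debug
  # follow the debug-level cluster log (Ctrl-C to stop)
  ceph -W cephadm --watch-debug
  # on an OSD node, watch the corresponding ceph-volume activity
  tail -f /var/log/ceph/<fsid>/ceph-volume.log
  # revert to the default log level afterwards
  ceph config rm mgr mgr/cephadm/log_to_cluster_level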
Hi all,
Topics we discussed today included an explanation of the CVE fix process,
an evaluation of the OS recommendation documentation, a discussion on some
options for releasing el8 packages for the upcoming Quincy point release,
and CEC election results.
Here are the highlights from today's meeting:
Well sure, if you want to do it the EASY way :rolleyes:
> On Oct 28, 2024, at 1:02 PM, Eugen Block wrote:
>
> Or:
>
> ceph osd find {ID}
>
> :-)
>
> Zitat von Robert Sander :
>
>> On 10/28/24 17:41, Dave Hall wrote:
>>
>>> However, it would be nice to have something like 'ceph osd location
Hi,
We are seeing very strange behavior where LC "restores" previous versions of
objects.
It looks like this:
1. We have the latest object:
*object1*
2. We remove the object and get a delete marker on top of it. We can't see
object1 in the bucket listing:
*marker (latest)*
*object1 (not latest, version1)*
3. LC de
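For context, the version stack described in steps 1-2 can be inspected directly over the S3 API; the bucket name and tooling below (awscli against the RGW endpoint) are just an example, not part of the original report:

  # list all versions and delete markers for the key
  aws s3api list-object-versions --bucket my-bucket --prefix object1
  # expected: a DeleteMarkers entry with IsLatest true, and a Versions entry
  # for the older object1 with IsLatest false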
Or:
ceph osd find {ID}
:-)
Zitat von Robert Sander :
On 10/28/24 17:41, Dave Hall wrote:
However, it would be nice to have something like 'ceph osd location {id}'
from the command line. If such exists, I haven't seen it.
"ceph osd metadata {id} | jq -r .hostname" will give you the hostna
On 10/28/24 17:41, Dave Hall wrote:
However, it would be nice to have something like 'ceph osd location {id}'
from the command line. If such exists, I haven't seen it.
"ceph osd metadata {id} | jq -r .hostname" will give you the hostname
Regards
--
Robert Sander
Heinlein Consulting GmbH
Schw
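Putting the two suggestions together, with osd.12 as a hypothetical example:

  ceph osd metadata 12 | jq -r .hostname   # prints just the host that runs osd.12
  ceph osd find 12                         # prints JSON including the host and CRUSH location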
On Mon, Oct 28, 2024 at 9:22 AM Anthony D'Atri
wrote:
>
>
> > Yes, but it's irritating. Ideally, I'd like my OSD IDs and hostnames to
> track so that if a server goes pong I can find it and fix it ASAP
>
> `ceph osd tree down` etc. (including alertmanager rules and Grafana
> panels) arguably make that faster and easier than everyone having to memorize OSD
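As a quick illustration of that suggestion:

  ceph osd tree down
  # limits the tree to OSDs currently marked down, nested under their CRUSH
  # host buckets, so the affected host is visible without memorizing mappings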
Hi everybody,
apparently, I forgot to report back. The evacuation completed without problems
and we are replacing disks at the moment. This procedure worked like a charm
(please read the thread to see why we didn't just shut down OSDs and use
recovery for rebuild):
1.) For all OSDs: ceph osd
Hello.
The following is on a Reef Podman installation:
While attempting to deal with a failed OSD disk over the weekend, I have
somehow ended up with two OSDs pointing to the same HDD, as shown below.
[image: screenshot of the two OSDs mapped to the same HDD]
To be sure, the failure occurred on OSD.12, which was pointing to
/dev/sdi.
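A few commands that may help confirm which device each OSD really claims; the hostname below is a placeholder, and the cephadm shell form is an assumption based on the Podman deployment mentioned above:

  # on the affected host: per-OSD LVM metadata, including the backing device
  cephadm shell -- ceph-volume lvm list
  # cluster-side view of devices and the daemons using them
  ceph device ls-by-host <hostname>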
Ubuntu noble *is* an LTS release, 24.04
> On Oct 28, 2024, at 06:40, Robert Sander wrote:
>
> Hi
>
>> On 10/25/24 19:57, Daniel Brown wrote:
>> Think I’ve asked this before but — has anyone attempted to use a cephadm
>> type install with Debian Nobel running on Arm64? Have tried both Reef an
> Yes, but it's irritating. Ideally, I'd like my OSD IDs and hostnames to track
> so that if a server goes pong I can find it and fix it ASAP
`ceph osd tree down` etc. (including alertmanager rules and Grafana panels)
arguably make that faster and easier than everyone having to memorize OSD
Yes, but it's irritating. Ideally, I'd like my OSD IDs and hostnames to
track so that if a server goes pong I can find it and fix it ASAP. But
it doesn't take much maintenance to break that scheme and the only thing
more painful than renaming a Ceph host is re-numbering an OSD.
On 10/28/24 06
Hi..
If we enable CephFS snapshots, will we face performance issues? And do snapshots
take up any specific storage?
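A minimal sketch of how CephFS snapshots are taken and removed; the filesystem name, mount point, and directory are hypothetical. Broadly, snapshots are taken per directory and only consume extra space as the live data diverges from the snapshotted state:

  # allow snapshots on the filesystem if not already enabled
  ceph fs set cephfs allow_new_snaps true
  # take a snapshot of a directory by creating an entry under .snap
  mkdir /mnt/cephfs/mydir/.snap/before-change
  # remove the snapshot again
  rmdir /mnt/cephfs/mydir/.snap/before-change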
That's unfortunate. For one, it says "some filesystems are more equal
than others".
Back in ancient days, when dinosaurs used time-sharing, you could mount
a remote filesystem at login, and logout, whether explicit or by timeout,
would unmount it. Virtually nobody time-shares anymore, but lots
- On 23 Oct 24, at 16:31, Maged Mokhtar mmokh...@petasan.org wrote:
> On 23/10/2024 16:40, Gregory Farnum wrote:
>> On Wed, Oct 23, 2024 at 5:44 AM Maged Mokhtar
>> wrote:
>>
>>
>> This is tricky but I think you are correct, B and C will keep the new
>> object copy and not revert
On 10/18/24 03:14, Shain Miley wrote:
I know that it really shouldn’t matter what OSD number gets assigned to the
disk, but as the number of OSDs increases it is much easier to keep track of
where things are if you can control the ID when replacing failed disks or
adding new nodes.
My advice:
Hi
On 10/25/24 19:57, Daniel Brown wrote:
Think I’ve asked this before but — has anyone attempted to use a cephadm type
install with Debian Nobel running on Arm64? Have tried both Reef and Squid,
neither gets very far. Do I need to file a request for it?
You mean Ubuntu Noble, right?
For C
Hi,
On 10/26/24 18:45, Tim Holloway wrote:
On the whole, I prefer to use NFS for my clients to access the Ceph
filesystem. It has the advantage that NFS client/mount is practically
guaranteed to be pre-installed on all my client systems.
On the other hand, there are downsides. NFS (Ceph/NFS-Ganesha) h
You're right about deleting the service, of course. I wasn't very
clear in my statement: what I actually meant was that it won't be
removed entirely until all OSDs report a different spec in their
unit.meta file. I forgot to add that info in my last response; that's
actually how I've done it
Hi,
I haven't looked too deeply into it yet, but I think it's the regular
cephadm check. The timestamps should match those in
/var/log/ceph/cephadm.log, where you can see something like this:
cephadm ['--image', '{YOUR_REGISTRY}', 'ls']
It goes through your inventory and runs several 'ga
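To watch the same check by hand (the log path is the cephadm default mentioned above; adjust if your deployment differs):

  # host-side log the timestamps refer to
  tail -f /var/log/ceph/cephadm.log
  # the inventory listing that the periodic check runs on each host
  cephadm ls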