Okay, it looks like you just need some further cleanup regarding your
phantom hosts, for example:
ceph osd crush remove www2
ceph osd crush remove docker0
and so on.
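For anyone following along, a minimal sketch of that cleanup (host names taken from the example above; verify before and after):
ceph osd tree                 # note any host buckets with no OSDs under them
ceph osd crush remove www2    # remove the empty phantom bucket from the CRUSH map
ceph osd tree                 # confirm it is gone
Note that CRUSH refuses to remove a bucket that still contains items, which acts as a useful safety check here.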
Regarding the systemd unit (well, cephadm also generates one, but with
the fsid as already mentioned), you could just stop an
This particular system has it both ways and neither wants to work.
The peculiar thing was that when I first re-created the OSD with
cephadm, it was reported that this was an "unmanaged node". So I ran
the same cephadm again and THAT time it showed up. So I suspect that the
ceph-osd@4.service was th
Hi,
I have a multisite system with two sites on 18.2.2, on Rocky 8.
I have set up a sync policy to allow replication between sites. I have also
created a policy for a given bucket that prevents replication on that given
bucket. This all works just fine, and objects I create in that bucket on side
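For reference, picking up the setup described above, a hedged sketch of that kind of bucket-level policy (bucket and group IDs are made up; assumes a zonegroup-level policy already enables sync):
radosgw-admin sync group create --bucket=mybucket --group-id=mybucket-no-sync --status=forbidden
radosgw-admin sync group get --bucket=mybucket        # inspect the bucket-level policy
radosgw-admin bucket sync status --bucket=mybucket    # check that sync is now disabled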
Hi,
containerized daemons usually have the fsid in the systemd unit, like
ceph-{fsid}@osd.5
Is it possible that you have those confused? Check the
/var/lib/ceph/osd/ directory to find possible orphaned daemons and
clean them up.
And as previously stated, it would help to see your osd tree
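A quick, non-destructive way to check for that kind of confusion (adjust the OSD id and fsid to your system):
ls /var/lib/ceph/osd/                 # legacy (non-containerized) OSD data directories
systemctl list-units --all 'ceph*'    # shows both ceph-osd@N and ceph-{fsid}@osd.N units
cephadm ls                            # what cephadm itself knows about on this host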
Incidentally, I just noticed that my phantom host isn't completely
gone. It's not in the host list, either command-line or dashboard, but
it is still listed (with no assets) as a host under "ceph osd tree".
---
More seriously, I've been having problems with OSDs that report as
being both up and down at the same time.
There’s more to it than bottlenecking.
RAS, man. RAS.
> On Jul 12, 2024, at 3:58 PM, John Jasen wrote:
>
> How large of a ceph cluster are you planning on building, and what network
> cards/speeds will you be using?
>
> A lot of the talk about RAID HBA pass-through being sub-optimal probably
> Hi,
>
> just one question coming to mind, if you intend to migrate the images
> separately, is it really necessary to set up mirroring? You could just 'rbd
> export' on the source cluster and 'rbd import' on the destination cluster.
That can be slower if using a pipe, and requires staging space
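For completeness, the pipe variant being discussed looks roughly like this (pool, image, and host names are placeholders):
rbd export mypool/myimage - | ssh dest-host 'rbd import - mypool/myimage'
Here '-' means stdout/stdin; without the pipe you export to a file first, which is where the staging space comes in.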
How large of a ceph cluster are you planning on building, and what network
cards/speeds will you be using?
A lot of the talk about RAID HBA pass-through being sub-optimal probably
won't be your bottleneck unless you're aiming for a large cluster at
100Gb/s speeds, in my opinion.
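To put rough numbers on that (my assumptions, not the original poster's): 24 HDDs at ~200 MB/s each is about 4.8 GB/s, or roughly 38 Gb/s of raw disk bandwidth per node, so a 10 or 25 Gb/s NIC saturates long before the HBA does; only dense flash behind a 100 Gb/s NIC changes that picture.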
On Fri, Jul 12, 2
> Okay it seems like we don't really have a definitive answer on whether it's
> OK to use a RAID controller or not and in what capacity.
It’s okay to use it if that’s what you have.
For new systems, eschew the things. They cost money for something you can do
with Linux md (software RAID) for free, and they are finicky.
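If it helps, the md equivalent of a simple controller mirror is one command (device names are placeholders; in a Ceph context this is typically only for the OS disks, since OSDs want raw devices):
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdX /dev/sdY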
Okay it seems like we don't really have a definitive answer on whether it's OK
to use a RAID controller or not and in what capacity.
Passthrough meaning:
Are you saying that it's OK to use a RAID controller where the disks are in
non-RAID mode?
Are you saying that it's OK to use a RAID controll
date
6f328bc5b0736f23ae2cdf68ccffe1a45c705dd1636f61b999350ae18f8d5ad1
2024-07-12 12:07:00,439 - MainThread - botocore.auth - DEBUG - StringToSign:
AWS4-HMAC-SHA256
20240712T120700Z
20240712/podspace/sns/aws4_request
38b7d8721abdd98c214ea763d9dcc324fcbc5982990353140f6b73445
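For anyone wanting to reproduce that output: the StringToSign lines above come from botocore's auth debug logging, which the AWS CLI exposes via --debug (the endpoint URL here is a placeholder; 'podspace' is the region/zonegroup taken from the scope line above):
aws --debug --region podspace --endpoint-url https://rgw.example.com sns list-topics 2>&1 | grep -A 5 'StringToSign'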
- On Jul 11, 24, at 20:50, Dave Hall kdh...@binghamton.edu wrote:
> Hello.
>
> I would like to use mirroring to facilitate migrating from an existing
> Nautilus cluster to a new cluster running Reef. Right now I'm looking at
> RBD mirroring. I have studied the RBD Mirroring section of th
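As a rough sketch of the per-image variant (pool and image names are placeholders; journal mode, since snapshot-based mirroring postdates Nautilus, and both clusters also need the rbd-mirror daemon and peers configured):
rbd mirror pool enable mypool image            # per-image mirroring for this pool
rbd feature enable mypool/myimage journaling   # journal-mode prerequisite
rbd mirror image enable mypool/myimage         # start mirroring this image
rbd mirror image status mypool/myimage         # watch replication progress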
- On Jul 11, 24, at 0:23, Richard Bade hitr...@gmail.com wrote:
> Hi Casey,
> Thanks for that info on the bilog. I'm in a similar situation with
> large omap objects and we have also had to reshard buckets on
> multisite losing the index on the secondary.
> We also now have a lot of bucket
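For the archives, the bilog side of that can be inspected and trimmed per bucket (bucket name is a placeholder; trim with care on multisite, since peers consume these logs for sync):
radosgw-admin bilog list --bucket=mybucket   # inspect the bucket index log
radosgw-admin bilog trim --bucket=mybucket   # trim it once peers no longer need it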
On 12.07.2024 at 10:57, Robert Sander wrote:
... I would suggest to use Ubuntu 22.04 LTS as the base operating
system. You can use cephadm on top of that without issues.
Yes, that's right. But I already upgraded my systems to 24.04, maybe too
early, my fault. Currently, it's all testing and
Hi,
On 7/12/24 10:47, tpDev Tester wrote:
Finally, I'm looking for a solution for production use and it would be
great if I don't have to leave the usual Ubuntu procedures, especially
when it comes to updates. We are also confused about the "RC vs.
LTS" thing.
I would suggest to use Ubuntu 22.04 LTS as the base operating system.
Hi,
thanks for your response.
On 12.07.2024 at 10:24, Stefan Kooman wrote:
... Note: just to be sure, you do _NOT_ want to use Ceph from Ubuntu
24.04 repositories. The 19.2.x release is not out yet (still RC) and
this is an Ubuntu-released version (why they ship this in an Ubuntu LTS
version in
On 12-07-2024 09:33, tpDev Tester wrote:
Hi,
On 11.07.2024 at 14:20, John Mulligan wrote:
...
as far as I know, we still have an issue
https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/2063456
with ceph on 24.04. I tried the offered fix, but was still unable to
establish a running cluster
Hi,
On 11.07.2024 at 14:20, John Mulligan wrote:
...
as far as I know, we still have an issue
https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/2063456
with ceph on 24.04. I tried the offered fix, but was still unable to
establish a running cluster (may be my fault, I'm still a newbie to
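For what it's worth, a minimal single-host test bootstrap looks like this (the monitor IP is a placeholder; this is the path Robert's suggestion of cephadm on 22.04 refers to):
cephadm bootstrap --mon-ip 192.0.2.10 --single-host-defaults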