Hello and a happy new year!
I'm wondering whether there have been some structural changes or something
similar regarding the release page [1]. It still doesn't contain version
18.2.1 (Reef), and the latest two Quincy releases (17.2.6, 17.2.7) are
missing as well. And for Pacific it's even worse, the latest ent
Hi Eugen,
the release info is current only in the latest branch of the
documentation: https://docs.ceph.com/en/latest/releases/
Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin
https://www.heinlein-support.de
Tel: 030 / 405051-43
Fax: 030 / 405051-19
Amtsg
Thanks! Nevertheless, IMO the "Reef" branch should also contain the
latest Reef release notes (18.2.1).
Quoting Robert Sander:
Hi Eugen,
the release info is current only in the latest branch of the
documentation: https://docs.ceph.com/en/latest/releases/
Regards
--
Robert Sander
Heinl
Eugen,
Thanks for pointing this out.
I've backported the latest Reef release notes from the "latest" branch to the
"reef" branch. Here is the PR associated with this backport:
https://github.com/ceph/ceph/pull/55049
When this PR is merged, the Reef release notes will be current in the /reef
branch.
Hi,
I am bootstrapping a Ceph cluster using cephadm, and our cluster uses 3
networks.
We have
- 1 network as public network (10.X.X.0/24) (pub)
- 1 network as cluster network (10.X.Y.0/24) (cluster)
- 1 network for management (172.Z.Z.0/24) (mgmt)
The nodes are reachable via SSH only on the mgmt network.
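For reference, a rough sketch of what such a bootstrap could look like. The addresses are placeholders derived from the subnets above, not the actual ones used here; --cluster-network is available in recent cephadm releases, and cluster_network can alternatively be set with "ceph config set" after bootstrap:

    # run on the first node (the future mon1); --mon-ip is its address on the
    # public network, the cluster network is passed so OSD replication uses it
    cephadm bootstrap \
        --mon-ip 10.X.X.10 \
        --cluster-network 10.X.Y.0/24 \
        --ssh-user root

    # alternative way to set the cluster network, after bootstrap:
    ceph config set global cluster_network 10.X.Y.0/24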
Hi,
On 1/3/24 14:51, Luis Domingues wrote:
But when I bootstrap my cluster, I set my MON IP and CLUSTER NETWORK, and then
the bootstrap process tries to add my bootstrap node using the MON IP.
IMHO the bootstrap process has to run directly on the first node.
The MON IP is local to this node.
Hi Robert,
Thanks for your reply.
I am bootstrapping from the first node that will become the first monitor,
let's call it mon1. I get 1 monitor and 1 manager deployed.
My issue is that mon1 cannot connect via SSH to itself using the pub network, and
the bootstrap fails at the end when cephadm tries to
Hi Luis,
On 1/3/24 16:12, Luis Domingues wrote:
My issue is that mon1 cannot connect via SSH to itself using the pub network, and
the bootstrap fails at the end when cephadm tries to add mon1 to the list of hosts.
Why? The public network should not have any restrictions between the
Ceph nodes. Same
Happy 2024!
Today's CLT meeting covered the following:
1. 2024 brings a focus on performance of Crimson (some information here:
   https://docs.ceph.com/en/reef/dev/crimson/crimson/ )
   1. Status is available here: https://github.com/ceph/ceph.io/pull/635
   2. There will be a new Crimson perform
> Why? The public network should not have any restrictions between the
> Ceph nodes. Same with the cluster network.
Internal policies and network rules.
Luis Domingues
Proton AG
On Wednesday, 3 January 2024 at 16:15, Robert Sander
wrote:
> Hi Luis,
>
> On 1/3/24 16:12, Luis Domingues wrot
Hello,
In our Ceph cluster we encountered issues while attempting to execute the
"radosgw-admin" command on the client side using a cephx user with read-only
permission. Whenever we execute the "radosgw-admin user list" command, it
throws an error.
"ceph version 18.2.1 (7fe91d5d5842e04be3b4f514
Hi - I have a drive that is starting to show errors, and was wondering what the
best way to replace it is.
I am on Ceph 18.2.1, and using cephadm/containers
I have 3 hosts, and each host has four 4 TB drives with a 2 TB NVMe device split
among them for WAL/DB, and 10 Gb networking.
Option 1: S
Hi,
in such a setup I also prefer option 2; we've done this since LVM came
into play with OSDs, just not with cephadm yet. But we have a similar
configuration and one OSD is starting to fail as well. I'm just waiting for
the replacement drive to arrive. ;-)
Regards,
Eugen
Zitat von "Robert W.
Hi, thanks for your answers!
On Mon, Jan 1 2024 at 17:00:59 -0500, Anthony D'Atri
wrote:
Hi and thanks for your answers!
So my understanding from this is: make sure that the "admin" node
has a fast CPU
You don’t strictly need an admin node as such. Only worry about
clock rate if you’re
Hi,
check the routing table and the default gateway and fix them if necessary.
Use IPs instead of DNS names.
I have a more complicated situation :D
I have more than 3 public networks and cluster networks…
BR,
Sebastian
> On Jan 3, 2024, at 16:40, Luis Domingues wrote:
>
>
>> Why? The public network shoul
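Picking up the "use IPs instead of DNS names" point: cephadm lets you give each host an explicit address when adding it (or change it later), which is one way to steer the orchestrator's SSH connections onto a specific network. A sketch with placeholder management addresses:

    # add a host with the address cephadm should use for SSH
    ceph orch host add mon1 172.Z.Z.10

    # or change the address of an already-added host
    ceph orch host set-addr mon1 172.Z.Z.10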
>> You don’t strictly need an admin node as such. Only worry about clock rate
>> if you’re doing CephFS.
> So an admin node is not required?
It isn’t. An admin node basically is any system with the admin keys installed.
With production clusters there can be some advantages to having one, even
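With cephadm, making a host an "admin node" in this sense is mostly a matter of getting ceph.conf and the admin keyring onto it, and the _admin host label does that automatically. A minimal sketch:

    # label a host as _admin: cephadm then keeps /etc/ceph/ceph.conf and the
    # client.admin keyring up to date on that host
    ceph orch host label add <hostname> _admin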
Is it possible to use Ceph as a root filesystem for a PXE-booted host?
Thanks
Hi,
The new dashboard refreshes every 5 seconds (not 25 seconds). But the
Cluster Utilization chart refreshes in sync with the
scrape interval of Prometheus (which defaults to 15s unless explicitly
changed in the Prometheus configuration).
Are you seeing the whole dashboard get refreshed aft
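For reference, the scrape interval mentioned above is the standard Prometheus setting; in a cephadm deployment the prometheus.yml is generated by the orchestrator, so it is usually changed through whatever mechanism that deployment provides rather than edited by hand. The relevant fragment looks roughly like:

    global:
      scrape_interval: 15s   # how often Prometheus pulls metrics; the Cluster Utilization chart follows this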