s it. In our prod Pacific cluster we use per server
certificates (mgr/dashboard/{host1}/crt, mgr/dashboard/{host2}/crt and
so on).
Maybe you have some leftovers in the config-keys? I would check all
of the dashboard/cert-related keys and remove any expired certs/keys.
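For reference, a hedged sketch of how one might list and clean up stale dashboard cert entries (the host name is a placeholder):
~~~
# List all dashboard-related config-keys
ceph config-key ls | grep dashboard

# Check the expiry of a stored per-host certificate (host1 is a placeholder)
ceph config-key get mgr/dashboard/host1/crt | openssl x509 -noout -enddate

# Remove a stale per-host cert/key pair
ceph config-key rm mgr/dashboard/host1/crt
ceph config-key rm mgr/dashboard/host1/key
~~~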
Quoting duluxoz:
Hi
they are
stored somewhere else. Try executing the two commands (one for key,
one for cert) again, then restart (disable/enable might be enough, I
can't remember).
Regards, Chris
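Presumably the "two commands" are the dashboard SSL cert/key imports; a minimal sketch, assuming the renewed files are named dashboard.crt and dashboard.key (file names are placeholders):
~~~
# Re-import the renewed certificate and key (a per-host form also exists:
# ceph dashboard set-ssl-certificate <hostname> -i dashboard.crt)
ceph dashboard set-ssl-certificate -i dashboard.crt
ceph dashboard set-ssl-certificate-key -i dashboard.key

# Restart the dashboard so the new certificate is picked up
ceph mgr module disable dashboard
ceph mgr module enable dashboard
~~~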
On 19/12/2024 07:04, duluxoz wrote:
Hi All,
So we've been using the Ceph (v18.2.4) Dashboard with internally
generated TLS Certificates (via our Step-CA CA), one for each of our
three Ceph Manager Nodes.
Everything was working AOK.
The TLS Certificates came up for renewal and were successfully
renewed. Accordingly, th
t add the s if
it's not 443.
The only other thing I can think of is, does it work if you use ceph
dashboard create-self-signed-cert?
Cheers,
Curt
On Mon, 23 Sept 2024, 06:23 duluxoz, wrote:
Hi,
ssl_server_port is 8443
On 23/9/24 05:14, Curt wrote:
Hello,
I just used a self-signed cert, but it's been a while and I remember it
pretty much just working. Out of curiosity, what is ssl_server_port
set to?
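If it helps, a sketch of one way to check (the config path assumes the standard dashboard settings):
~~~
# Show the configured dashboard SSL port (default is 8443)
ceph config get mgr mgr/dashboard/ssl_server_port

# Show the URL the active mgr is actually serving the dashboard on
ceph mgr services
~~~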
ny case, it's up to you, but for the kernel Ceph client to work nothing but
the kernel is needed; bindings in the form of files and user-space utilities
only complicate the system, at least for me
k
Sent from my iPhone
On 24 Aug 2024, at 12:34, duluxoz wrote:
Hi K,
Thanks for getting back to me.
So you wouldn't use multiple ceph.conf files? (Or a combined one, for
that matter?)
Dulux-Oz
On 24/8/24 18:12, Konstantin Shalygin wrote:
Hi,
On 24 Aug 2024, at 10:57, duluxoz wrote:
How do I set up the ceph.conf file(s) on my clients so that
Hi All,
I haven't been able to find anything in the doco or on the web about this.
I'm blessed with having 2 different Ceph Clusters available to me within
our organisation (a "regular" Ceph Cluster and a Hyper-converged Proxmox
Ceph Cluster).
Both Ceph Clusters have a CephFS system on them
Hi All,
I'm trying to replace an OSD in our cluster.
This is on Reef 18.2.2 on Rocky 9.4.
I performed the following steps (from this page of the Ceph Doco:
https://docs.ceph.com/en/reef/rados/operations/add-or-rm-osds/):
1. Make sure that it is safe to destroy the OSD:
`while ! ceph osd safe-
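The truncated command above is presumably the safe-to-destroy loop from that docs page; for context, a sketch of a typical replacement sequence based on that page (the OSD id and device path are placeholders):
~~~
# 1. Wait until the OSD can be destroyed without risking data
while ! ceph osd safe-to-destroy osd.0; do sleep 10; done

# 2. Destroy the OSD, keeping its id for reuse
ceph osd destroy 0 --yes-i-really-mean-it

# 3. Zap the replacement device and recreate the OSD, reusing the old id
#    (the --osd-id flag assumes a reasonably recent ceph-volume)
ceph-volume lvm zap /dev/sdX
ceph-volume lvm create --osd-id 0 --data /dev/sdX
~~~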
Hi PWB
Both ways (just to see if both ways would work) - remember, this is a
brand new box, so I had the luxury of "blowing away" the first iteration
to test the second:
* ceph orch daemon add osd ceph1:/dev/vg_osd/lv_osd
* ceph orch daemon add osd ceph1:vg_osd/lv_osd
Cheers
Dulux-Oz
@Eugen, @Cedric
DOH!
Sorry lads, my bad! I had a typo in my lv name - that was the cause of
my issues.
My apologies for being so stupid - and *thank you* for the help; having
a couple of fresh brains on things helps to eliminate possibilities and
narrow down the cause of the issue.
Nope, tried that, it didn't work (similar error messages).
Thanks for input :-)
So, still looking for ideas on this one - thanks in advance
Hi All,
Is the following a bug or some other problem (I can't tell) :-)
Brand new Ceph (Reef v18.2.3) install on Rocky Linux v9.4 - basically,
it's a brand new box.
Ran the following commands (in order; no issues until final command):
1. pvcreate /dev/sda6
2. vgcreate vg_osd /dev/sda6
3. lvc
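The truncated step 3 is presumably an lvcreate; for context, a sketch of the whole flow as it appears later in this thread (the -l 100%FREE sizing is an assumption):
~~~
pvcreate /dev/sda6
vgcreate vg_osd /dev/sda6
lvcreate -l 100%FREE -n lv_osd vg_osd

# Hand the LV to the orchestrator (host name taken from the thread)
ceph orch daemon add osd ceph1:vg_osd/lv_osd
~~~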
Hi All,
I've gone and gotten myself into a "can't see the forest for the trees"
state, so I'm hoping someone can take pity on me and answer a really dumb Q.
So I've got a CephFS system happily bubbling along and a bunch of
(linux) workstations connected to a number of common shares/folders. T
Thanks Sake,
That recovered just under 4 Gig of space for us
Sorry about the delay getting back to you (been *really* busy) :-)
Cheers
Dulux-Oz
Hi Eugen,
Thank you for a viable solution to our underlying issue - I'll attempt
to implement it shortly. :-)
However, with all the respect in the world, I believe you are incorrect when
you say the doco is correct (but I will be more than happy to be proven
wrong). :-)
The relevant text (ex
Hi Zac,
Any movement on this? We really need to come up with an answer/solution
- thanks
Dulux-Oz
On 19/04/2024 18:03, duluxoz wrote:
Cool!
Thanks for that :-)
On 19/04/2024 18:01, Zac Dover wrote:
I think I understand, after more thought. The second command is
expected to work after
Hi All,
*Something* is chewing up a lot of space on our `/var` partition to the
point where we're getting warnings about the Ceph monitor running out of
space (ie > 70% full).
I've been looking, but I can't find anything significant (ie log files
aren't too big, etc) BUT there seem to be a h
Cool!
Thanks for that :-)
On 19/04/2024 18:01, Zac Dover wrote:
I think I understand, after more thought. The second command is
expected to work after the first.
I will ask the cephfs team when they wake up.
Zac Dover
Upstream Docs
Ceph Foundation
On Fri, Apr 19, 2024 at 17:51, duluxoz
before I can determine
whether the documentation is wrong.
Zac Dover
Upstream Docs
Ceph Foundation
On Fri, Apr 19, 2024 at 17:51, duluxoz wrote:
Hi All,
In reference to this page from the Ceph documentation:
https://docs.ceph.com/en/latest/cephfs/client-auth/, down the bottom of
that page it says that you can run the following commands:
~~~
ceph fs authorize a client.x /dir1 rw
ceph fs authorize a client.x /dir2 rw
~~~
This will allo
I don't know Marc, I only know what I had to do to get the thing
working :-)
Hi All,
OK, an update for everyone, a note about some (what I believe to be)
missing information in the Ceph Doco, a success story, and an admission
on my part that I may have left out some important information.
So to start with, I finally got everything working - I now have my 4T
RBD Image
Hi Alexander,
Already set (and confirmed by running the command again) - no good, I'm
afraid.
So I just restarted with a brand new image and ran the following commands
on the ceph cluster and the host respectively. Results are below:
On the ceph cluster:
[code]
rbd create --size 4T my_pool.
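Presumably the truncated command creates the image with a separate EC data pool; a sketch, inferring the pool and image names from the rbd info output later in the thread:
~~~
# Metadata in the replicated pool, data objects in the EC pool
rbd create --size 4T --data-pool my_pool.data my_pool.meta/my_image
~~~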
Hi Curt,
blockdev --getbsz: 4096
rbd info my_pool.meta/my_image:
~~~
rbd image 'my_image':
        size 4 TiB in 1048576 objects
        order 22 (4 MiB objects)
        snapshot_count: 0
        id: 294519bf21a1af
        data_pool: my_pool.data
        block_name_prefix: rbd_data.30.294519bf
, Alwin Antreich wrote:
Hi,
March 24, 2024 at 8:19 AM, "duluxoz" wrote:
Hi,
Yeah, I've been testing various configurations since I sent my last
email - all to no avail.
So I'm back to the start with a brand new 4T image which is rbdmapped to
/dev/rbd0.
It's not formatted (yet
Hi Curt,
Nope, no dropped packets or errors - sorry, wrong tree :-)
Thanks for chiming in.
On 24/03/2024 20:01, Curt wrote:
I may be barking up the wrong tree, but if you run ip -s link show
yourNicID on this server or your OSDs do you see any
errors/dropped/missed?
Hi,
Yeah, I've been testing various configurations since I sent my last
email - all to no avail.
So I'm back to the start with a brand new 4T image which is rbdmapped to
/dev/rbd0.
It's not formatted (yet) and so not mounted.
Every time I attempt a mkfs.xfs /dev/rbd0 (or mkfs.xfs
/dev/rbd/
Hi Alexander,
DOH!
Thanks for pointing out my typo - I missed it, and yes, it was my
issue. :-)
New issue (sort of): The requirement of the new RBD Image is 2 TB in
size (it's for a MariaDB Database/Data Warehouse). However, I'm getting
the following errors:
~~~
mkfs.xfs: pwrite failed:
On 23/03/2024 18:25, Konstantin Shalygin wrote:
Hi,
Yes, this is generic solution for end users mounts - samba gateway
k
Sent from my iPhone
Thanks Konstantin, I really appreciate the help
On 23/03/2024 18:22, Alexander E. Patrakov wrote:
On Sat, Mar 23, 2024 at 3:08 PM duluxoz wrote:
Almost right. Please set up a cluster of two SAMBA servers with CTDB,
for high availability.
Cool - thanks Alex, I really appreciate it :-)
On 23/03/2024 18:00, Alexander E. Patrakov wrote:
Hi Dulux-Oz,
CephFS is not designed to deal with mobile clients such as laptops
that can lose connectivity at any time. And I am not talking about the
inconveniences on the laptop itself, but about problems that your
laptop would cause to other
Hi All,
I'm trying to mount a Ceph Reef (v18.2.2 - latest version) RBD Image as
a 2nd HDD on a Rocky Linux v9.3 (latest version) host.
The EC pool has been created and initialised and the image has been
created.
The ceph-common package has been installed on the host.
The correct keyring ha
Hi All,
I'm looking for some help/advice to solve the issue outlined in the heading.
I'm running CephFS (name: cephfs) on a Ceph Reef (v18.2.2 - latest
update) cluster, connecting from a laptop running Rocky Linux v9.3
(latest update) with KDE v5 (latest update).
I've set up the laptop to co
that it's using the default port and not a custom one; also be aware that
the v1 protocol uses 6789 by default.
Increasing the messenger log level to 10 might also be useful: debug ms = 10.
Regards,
Lucian
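A sketch of how that debug setting could be applied on the client side (the section placement is an assumption; a Windows or kernel client reads its local ceph.conf):
~~~
# In the client's ceph.conf
[client]
    debug ms = 10
~~~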
On 28 Feb 2024, at 11:05, duluxoz wrote:
Hi All,
I'm looking for some pointers/he
sections
Also note the port is 6789, not 3300.
-Original Message-
From: duluxoz
Sent: Wednesday, February 28, 2024 4:05 AM
To:ceph-users@ceph.io
Subject: [ceph-users] CephFS On Windows 10
Hi All,
I'm looking for some pointers/help as to why I can't get my Win10 PC to
Thanks for the info Kefu - hmm, I wonder who I should raise this with?
On 08/03/2024 19:57, kefu chai wrote:
On Fri, Mar 8, 2024 at 3:54 PM duluxoz wrote:
Hi All,
The subject pretty much says it all: I need to use cephfs-shell
and its
not installed on my Ceph Node, and I
Hi All,
The subject pretty much says it all: I need to use cephfs-shell and its
not installed on my Ceph Node, and I can't seem to locate which package
contains it - help please. :-)
Cheers
Dulux-Oz
Hi All,
I don't know how it's happened (bad backup/restore, bad config file
somewhere, I don't know) but my (DEV) Ceph Cluster is in a very bad
state, and I'm looking for pointers/help in getting it back running
(unfortunately, a complete rebuild/restore is *not* an option).
This is on Ceph Ree
Hi All,
I'm looking for some pointers/help as to why I can't get my Win10 PC
to connect to our Ceph Cluster's CephFS Service. Details are as follows:
Ceph Cluster:
- IP Addresses: 192.168.1.10, 192.168.1.11, 192.168.1.12
- Each node above is a monitor & an MDS
- Firewall Ports: open (ie 33
te:
Out of curiosity, how are you mapping the rbd? Have you tried using
guestmount?
I'm just spitballing, I have no experience with your issue, so
probably not of much help or use.
On Mon, 5 Feb 2024, 10:05 duluxoz, wrote:
~~~
Hello,
I think that /dev/rbd* devices are fliter
~~~
Hello,
I think that /dev/rbd* devices are filtered "out" or not filtered "in" by the
filter option in the devices section of /etc/lvm/lvm.conf.
So pvscan (pvs, vgs and lvs) don't look at your device.
~~~
Hi Gilles,
So the lvm filter from the lvm.conf file is set to the default of `filter = [
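In case it helps, a hedged sketch of the usual knobs in /etc/lvm/lvm.conf for letting LVM see rbd devices (whether your LVM build needs the types entry at all is an assumption worth verifying):
~~~
devices {
    # Allow LVM to consider rbd block devices at all
    types = [ "rbd", 1024 ]

    # Make sure the filter accepts /dev/rbd* (the stock filter accepts everything)
    filter = [ "a|^/dev/rbd.*|", "a|.*|" ]
}
~~~
After editing, re-run pvscan/vgscan and check whether the PV on /dev/rbd0 shows up.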
this?
Cheers
On 04/02/2024 19:34, Jayanth Reddy wrote:
Hi,
Anything with "pvs" and "vgs" on the client machine where there is
/dev/rbd0?
Thanks
----
*From:* duluxoz
*Sent:* Sunday, February 4, 2024 1:
"lvs" shows any logical volume on the system ?
On Sun, Feb 4, 2024, 08:56 duluxoz wrote:
Hi All,
All of this is using the latest version of RL and Ceph Reef
I've got an existing RBD Image (with data on it - not "critical"
as I've
got a back up,
Hi All,
All of this is using the latest version of RL and Ceph Reef
I've got an existing RBD Image (with data on it - not "critical" as I've
got a back up, but its rather large so I was hoping to avoid the restore
scenario).
The RBD Image used to be served out via a (Ceph) iSCSI Gateway, bu
Hi All,
Quick Q: How easy/hard is it to change the IP networks of:
1) A Ceph Cluster's "Front-End" Network?
2) A Ceph Cluster's "Back-End" Network?
Is it a "simply" matter of:
a) Placing the Nodes in maintenance mode
b) Changing a config file (I assume it's /etc/ceph/ceph.conf) on each Node
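For what it's worth, the config side of that is straightforward (a sketch with placeholder subnets); the harder part is that the monitors' own IPs live in the monmap and need their own procedure:
~~~
# Point the cluster at the new subnets (placeholders)
ceph config set global public_network 192.168.50.0/24
ceph config set global cluster_network 10.10.50.0/24

# Verify
ceph config get mon public_network
~~~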
Hi All,
In regards to the monitoring services on a Ceph Cluster (ie Prometheus,
Grafana, Alertmanager, Loki, Node-Exported, Promtail, etc) how many
instances should/can we run for fault tolerance purposes? I can't seem
to recall that advice being in the doco anywhere (but of course, I
probabl
ph-node-00 ~]# cephadm ls | grep "mgr."
"name": "mgr.ceph-node-00.aoxbdg",
"systemd_unit":
"ceph-e877a630-abaa-11ee-b7ce-52540097c...@mgr.ceph-node-00.aoxbdg",
"service_name": "mgr",
and you can use that
incy/cephadm/operations/#watching-cephadm-log-messages
On Fri, Jan 5, 2024 at 2:54 PM duluxoz wrote:
Yeap, can do - are the relevant logs in the "usual" place or
buried somewhere inside some sort of container (typically)? :-)
On 05/01/2024 20:14, Nizamudeen A wrote:
error? It could have
some tracebacks
which can give more info to debug it further.
Regards,
On Fri, Jan 5, 2024 at 2:00 PM duluxoz wrote:
Hi Nizam,
Yeap, done all that - we're now at the point of creating the iSCSI
Target(s) for the gateway (via the Dashboard and/or the C
ly by
ceph dashboard iscsi-gateway-add -i <file>
ceph dashboard iscsi-gateway-rm <name>
which you can find the documentation here:
https://docs.ceph.com/en/quincy/mgr/dashboard/#enabling-iscsi-management
Regards,
Nizam
On Fri, Jan 5, 2024 at 12:53 PM duluxoz wrote:
Hi All,
A little help p
Hi All,
A little help please.
TL/DR: Please help with error message:
~~~
REST API failure, code : 500
Unable to access the configuration object
Unable to contact the local API endpoint (https://localhost:5000/api)
~~~
The Issue
I've been through the documentation and can't find wh
Hi All,
A follow up: So, I've got all the Ceph Nodes running Reef v18.2.1 on
RL9.3, and everything is working - YAH!
Except...
The Ceph Dashboard shows 0 of 3 iSCSI Gateways working, and when I click
on that panel it returns a "Page not Found" message - so I *assume*
those are the three "or
Hi All,
Just successfully(?) completed a "live" update of the first node of a
Ceph Quincy cluster from RL8 to RL9. Everything "seems" to be working -
EXCEPT the iSCSI Gateway on that box.
During the update the ceph-iscsi package was removed (ie
`ceph-iscsi-3.6-2.g97f5b02.el8.noarch.rpm` - th
Hi All,
I find myself in the position of having to change the k/m values on an
ec-pool. I've discovered that I simply can't change the ec-profile, but
have to create a "new ec-profile" and a "new ec-pool" using the new
values, then migrate the "old ec-pool" to the new (see:
https://ceph.io/en
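For anyone following along, a rough sketch of that create-and-migrate approach (names and k/m values are placeholders; note that rados cppool has well-known caveats, so check the linked article before relying on it):
~~~
# New profile and pool with the desired k/m
ceph osd erasure-code-profile set ec-new k=4 m=2 crush-failure-domain=host
ceph osd pool create my_pool_new erasure ec-new

# Copy the data across, then swap the names
rados cppool my_pool my_pool_new
ceph osd pool rename my_pool my_pool_old
ceph osd pool rename my_pool_new my_pool
~~~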
opics:
1. Re: EC Profiles & DR (David Rivera)
2. Re: EC Profiles & DR (duluxoz)
3. Re: EC Profiles & DR (Eugen Block)
n=osd when you should use crush-failure-domain=host. With three hosts, you
should use k=2, m=1; this is not recommended in a production environment.
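In other words, something along these lines (profile and pool names are placeholders):
~~~
# Failure domain at the host level, which 3 hosts can actually satisfy
ceph osd erasure-code-profile set ec-2-1 k=2 m=1 crush-failure-domain=host
ceph osd pool create my_ec_pool erasure ec-2-1
~~~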
On Mon, Dec 4, 2023, 23:26 duluxoz wrote:
Hi All,
Looking for some help/explanation around erasure code pools, etc.
I set up a 3
Hi All,
Looking for some help/explanation around erasure code pools, etc.
I set up a 3-node Ceph (Quincy) cluster with each box holding 7 OSDs
(HDDs) and each box running Monitor, Manager, and iSCSI Gateway. For the
record the cluster runs beautifully, without resource issues, etc.
I created
Sorry, let me qualify things / try to make them simpler:
When upgrading from a Rocky Linux 8.6 Server running Ceph-Quincy to
Rocky Linux 9.1 Server running Ceph-Quincy (ie an in-place upgrade of a
host-node in an existing cluster):
- What is the update procedure?
- Can we use the "standard(?
02/2023 16:43, Konstantin Shalygin wrote:
You mentioned that your cluster is Quincy; the el9 packages are
also for Quincy. What upgrade exactly do you mean?
k
Sent from my iPhone
On 11 Feb 2023, at 12:29, duluxoz wrote:
That's great - thanks.
Any idea if there are any upgrade instru
Seems the el9 Quincy packages are available [1]
You can try
k
[1] https://download.ceph.com/rpm-quincy/el9/x86_64/
On 10 Feb 2023, at 13:23, duluxoz wrote:
Sorry if this was mentioned previously (I obviously missed it if it
was) but can we upgrade a Ceph Quincy Host/Cluster from Rocky Linux
(RH
Hi All,
Sorry if this was mentioned previously (I obviously missed it if it was)
but can we upgrade a Ceph Quincy Host/Cluster from Rocky Linux (RHEL)
v8.6/8.7 to v9.1 (yet), and if so, what is / where can I find the
procedure to do this - ie is there anything "special" that needs to be
done
Hi Eneko,
Well, that's the thing: there are a whole bunch of ceph-guest-XX.log
files in /var/log/ceph/; most of them are empty, a handful are up to 250
Kb in size, and this one () keeps on growing - and we're not sure where
they're coming from (ie there's nothing that we can see in the conf fi
Hi All,
Thanks to Eneko Lacunza, E Taka, and Anthony D'Atri for replying - all
that advice was really helpful.
So, we finally tracked down our "disk eating monster" (sort of). We've
got a "runaway" ceph-guest-NN that is filling up its log file
(/var/log/ceph/ceph-guest-NN.log) and eventually
Hi All,
Got a funny one, which I'm hoping someone can help us with.
We've got three identical(?) Ceph Quincy Nodes running on Rocky Linux
8.7. Each Node has 4 OSDs, plus Monitor, Manager, and iSCSI G/W services
running on them (we're only a small shop). Each Node has a separate 16
GiB partiti
new rule, you can set your pool to use the new rule anytime you are ready.
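i.e. something like this (rule and pool names are placeholders):
~~~
# Point the pool at the new CRUSH rule when ready
ceph osd pool set my_pool crush_rule my_new_rule
~~~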
On Sun, Sep 25, 2022 at 12:49 AM duluxoz wrote:
Hi Everybody (Hi Dr. Nick),
TL/DR: Is it possible to have a "2-Layer" Crush Map?
I think it is (although I'm not sure about how to set
Hi Everybody (Hi Dr. Nick),
TL/DR: Is it possible to have a "2-Layer" Crush Map?
I think it is (although I'm not sure about how to set it up).
My issue is that we're using 4-2 Erasure coding on our OSDs, with 7 OSDs
per OSD-Node (yes, the Cluster is handling things AOK - we're running at
abou
Hi Everybody (Hi Dr. Nick),
Ah, I've just figured it out - it should have been an underscore
(`_`) not a dash (`-`) in `ceph mgr module enable diskprediction_local`
"Sorry about that Chief"
And sorry for the double-post (damn email client).
Cheers
Dulux-Oz
Hi Everybody (Hi Dr. Nick),
So, I'm trying to get my Ceph Quincy Cluster to recognise/interact with
the "diskprediction-local" manager module.
I have the "SMARTMon Tools" and the "ceph-mgr-diskprediction-local"
package installed on all of the relevant nodes.
Whenever I attempt to enable the
Hi Everybody (Hi Dr. Nick),
I'm attacking this issue from both ends (ie from the Ceph-end and from
the oVirt-end - I've posted questions on both mailing lists to ensure we
capture the required knowledge-bearer(s)).
We've got a Ceph Cluster set up with three iSCSI Gateways configured,
and we
on ubuntu-gw01
- checking iSCSI/API ports on ubuntu-gw02
1 gateway is inaccessible - updates will be disabled
Querying ceph for state information
Gathering pool stats for cluster 'ceph'
Regards,
Bailey
-Original Message-
From: duluxoz
Sent: September 9, 2022 4:11 AM
To: Bail
, you can use 'api_secure = true'
# to switch to https mode.
# To support the API, the bare minimum settings are:
api_secure = False
# Optional settings related to the CLI/API service
api_user = admin
cluster_name = ceph
loop_delay = 1
pool = rbd
trusted_ip_list = X.X.X.X,X.X.X.X,X.X.X.X,X.X.X.X
Hi All,
I've followed the instructions on the CEPH Doco website on Configuring
the iSCSI Target. Everything went AOK up to the point where I try to
start the rbd-target-api service, which fails (the rbd-target-gw service
started OK).
A `systemctl status rbd-target-api` gives:
~~~
rbd-target
Hi All,
So, I've been researching this for days (including reading this
mailing-list), and I've had no luck what-so-ever in resolving my issue.
I'm hoping someone here can point be in the correct direction.
This is a brand new (physical) machine, and I've followed the Manual
Deployment instr
rom the mount command. The mount command itself gives:
mount: /my-rbd-bloc-device: special device /dev/rbd0p1 does not exist
(same as before I updated my-id)
Cheers
Matthew J
On 23/03/2021 17:34, Ilya Dryomov wrote:
On Tue, Mar 23, 2021 at 6:13 AM duluxoz wrote:
Hi All,
I've got a new is
Hi All,
I've got a new issue (hopefully this one will be the last).
I have a working Ceph (Octopus) cluster with a replicated pool
(my-pool), an erasure-coded pool (my-pool-data), and an image (my-image)
created - all *seems* to be working correctly. I also have the correct
Keyring specified
Yeap - that was the issue: an incorrect CRUSH rule
Thanks for the help
Dulux-Oz
Hi Guys,
So, new issue (I'm gonna get the hang of this if it kills me :-) ).
I have a working/healthy Ceph (Octopus) Cluster (with qemu-img, libvirt,
etc, installed), and an erasure-coded pool called "my_pool". I now need
to create a "my_data" image within the "my_pool" pool. As this is for a
Ah, right, that makes sense - I'll have a go at that
Thank you
On 16/03/2021 19:12, Janne Johansson wrote:
pgs: 88.889% pgs not active
6/21 objects misplaced (28.571%)
256 creating+incomplete
For new clusters, "creating+incomplete" sounds like you create
OK, so I set autoscaling to off for all five pools, and the "ceph -s"
has not changed:
~~~
  cluster:
    id:     [REDACTED]
    health: HEALTH_WARN
            Reduced data availability: 256 pgs inactive, 256 pgs incomplete
            Degraded data redundancy: 12 pgs undersized

  services:
PG, I
would suggest disabling the PG Auto Scaler on small test clusters.
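For reference, a sketch of how that is done (the pool name is a placeholder):
~~~
# Per pool
ceph osd pool set my_pool pg_autoscale_mode off

# Or as the default for new pools
ceph config set global osd_pool_default_pg_autoscale_mode off
~~~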
Thanks
On 16 Mar 2021, 10:50 +0800, duluxoz , wrote:
Hi Guys,
Is the below "ceph -s" normal?
This is a brand new cluster with (at the moment) a single Monitor
and 7
OSDs (each 6 GiB) that ha
Hi Guys,
Is the below "ceph -s" normal?
This is a brand new cluster with (at the moment) a single Monitor and 7
OSDs (each 6 GiB) that has no data in it (yet), and yet it's taking
almost a day to "heal itself" from adding in the 2nd OSD.
~~~
cluster:
id: [REDACTED]
health: HEAL
Hi All,
My ceph-mgr keeps stopping (for some unknown reason) after about an hour
or so (but has run for up to 2-3 hours before stopping). Up till now
I've simply restarted it with 'ceph-mgr -i ceph01'.
Is this normal behaviour, or if it isn't, what should I be looking for
in the logs?
I wa
Hi Everyone,
Thanks to all for both the online and PM help - once it was pointed out
that the existing (Octopus) Documentation was... less than current I
ended up using the ceph-volume command.
A couple of follow-up questions:
When using ceph-volume lvm create:
1. Can you specify an osd num
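If the truncated question is about reusing a specific OSD id, ceph-volume does take one (a sketch; the id and device path are placeholders, and the flag assumes a reasonably recent ceph-volume):
~~~
# Reuse an existing/destroyed OSD id when (re)creating the OSD
ceph-volume lvm create --osd-id 3 --data /dev/sdX
~~~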
Yes, the OSD Key is in the correct folder (or, at least, I think it is).
The line in the steps I did is:
sudo -u ceph ceph auth get-or-create osd.0 osd 'allow *' mon 'allow
profile osd' mgr 'allow profile osd' -o /var/lib/ceph/osd/ceph-0/keyring
This places the osd-0 key in the file 'k