Hi,
So that was it: create the initial-ceph.conf and use the --config option.
Now all images are pulled from the local registry.
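For anyone searching later, the working invocation looks roughly like this (a
sketch assembled from the commands earlier in this thread, with the --config
flag added; the initial-ceph.conf contents are shown further down in Robert's
reply):

cephadm --image 192.168.2.36:4000/ceph/ceph:v17 bootstrap \
  --mon-ip 10.1.0.23 \
  --cluster-network 10.2.0.0/16 \
  --config ./initial-ceph.conf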
Thank you all for your help.
Regards.
On Mon, Feb 26, 2024 at 14:09, wodel youchi wrote:
> I've read that, but I didn't find how to use it.
> Should I use the --config CONFIG_FILE option?
I've read that, but I didn't find how to use it.
Should I use the --config CONFIG_FILE option?
On Mon, Feb 26, 2024 at 13:59, Robert Sander wrote:
> Hi,
>
> On 2/26/24 13:22, wodel youchi wrote:
> > No didn't work, the bootstrap is still downloading the images from quay.
> >
> > I have another problem: the local registry. I deployed a local registry
> > with the required images, then I used cephadm-ansible to prepare my hosts
> > and inject the local registry URL into the /etc/containers/registries.conf
> > file.
> >
> > Then I tried to deploy using this command on the admin node:
> > cephadm --image 192.168.2.36:4000/ceph/ceph:v17 bootstrap --mon-ip
> > 10.1.0.23 --cluster-network 10.2.0.0/16
Hi,

On 2/26/24 13:22, wodel youchi wrote:
> No didn't work, the bootstrap is still downloading the images from quay.

For the image locations of the monitoring stack you have to create an
initial ceph.conf, as mentioned in the chapter you referred to earlier:
https://docs.ceph.com/en/ree
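For reference, the settings from that chapter look roughly like this; the
registry host is the one from this thread, and the image tags are only
placeholders, so check your release's defaults before copying:

[mgr]
mgr/cephadm/container_image_prometheus = 192.168.2.36:4000/prometheus/prometheus:v2.33.4
mgr/cephadm/container_image_node_exporter = 192.168.2.36:4000/prometheus/node-exporter:v1.3.1
mgr/cephadm/container_image_grafana = 192.168.2.36:4000/ceph/ceph-grafana:8.3.5
mgr/cephadm/container_image_alertmanager = 192.168.2.36:4000/prometheus/alertmanager:v0.23.0

Save this as initial-ceph.conf and pass it to bootstrap with --config.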
Hi,
No, it didn't work: the bootstrap is still downloading the images from quay.
PS: My local registry does not require any login/password authentication; I
used fake ones since it's mandatory to provide them.
cephadm --image 192.168.2.36:4000/ceph/ceph:v17 bootstrap --registry-url
192.168.2.36:4000 --reg
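(The command above is cut off by the archive. The registry flags that cephadm
bootstrap accepts are --registry-url, --registry-username and
--registry-password, so the full invocation was presumably along these lines,
with the fake credentials mentioned in the PS; a sketch, not the exact command
that was run:)

cephadm --image 192.168.2.36:4000/ceph/ceph:v17 bootstrap \
  --mon-ip 10.1.0.23 \
  --cluster-network 10.2.0.0/16 \
  --registry-url 192.168.2.36:4000 \
  --registry-username fake \
  --registry-password fake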
Hi,
On 26.02.24 11:08, wodel youchi wrote:
> Then I tried to deploy using this command on the admin node:
> cephadm --image 192.168.2.36:4000/ceph/ceph:v17 bootstrap --mon-ip
> 10.1.0.23 --cluster-network 10.2.0.0/16
> After the bootstrap I found that it still downloads the images from the
> internet.
Thank you all for your help.
@Adam
From the reading you gave me I have understood the following:
1 - Set osd_memory_target_autotune to true, then set
autotune_memory_target_ratio to 0.2.
2 - Or do the math. For my setup I have 384 GB per node; each node has 4
NVMe disks of 7.6 TB. 0.2 of the memory is 19
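For option 1, the two settings translate into these commands once the cluster
is up (a sketch following the cephadm documentation):

ceph config set osd osd_memory_target_autotune true
ceph config set mgr mgr/cephadm/autotune_memory_target_ratio 0.2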
Hi,
just responding to the last questions:

> - After the bootstrap, the Web interface was accessible:
>   - How can I access the wizard page again? If I don't use it the first
>     time, I could not find another way to get it.

I don't know how to recall the wizard, but you should be able
On 21.02.2024 17:07, wodel youchi wrote:
> - The documentation of Ceph does not indicate what versions of grafana,
>   prometheus, etc. should be used with a certain version.
> - I am trying to deploy Quincy; I did a bootstrap to see what
>   containers were downloaded and their versions.
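One way to answer the version question after such a test bootstrap (assuming
podman as the container runtime) is to list the pulled images, or to ask the
cluster for its defaults; the prometheus key below is just one of the
mgr/cephadm/container_image_* options:

podman images --format "{{.Repository}}:{{.Tag}}"
ceph config get mgr mgr/cephadm/container_image_prometheus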
Cephadm does not have a variable that explicitly says it's an HCI
deployment. However, the HCI variable in ceph-ansible, I believe, only
controlled the osd_memory_target attribute, which it would automatically set
to 20% or 70% respectively of the memory on the node divided by the
number of OSDs.
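Worked through for the nodes described above (my arithmetic, not from the
thread): with 384 GB of RAM and 4 OSDs, the HCI ratio of 0.2 gives
0.2 x 384 / 4 = 19.2 GB per OSD, and the dedicated ratio of 0.7 gives
0.7 x 384 / 4 = 67.2 GB per OSD. Set manually, that would look something like:

ceph config set osd osd_memory_target 19660M   # roughly 19.2 G per OSD daemon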