[ceph-users] Moving from ceph-ansible to cephadm and upgrading from pacific to octopus

2023-12-07 Thread wodel youchi
Hi, I have an Openstack platform deployed with Yoga and ceph-ansible pacific on Rocky 8. Now I need to do an upgrade to Openstack zed with octopus on Rocky 9. This is the upgrade path I have traced: - upgrade my nodes to Rocky 9 keeping Openstack yoga with ceph-ansible pacific. - convert c

[ceph-users] Some questions about cephadm

2024-02-21 Thread wodel youchi
Hi, I have some questions about ceph using cephadm. I used to deploy ceph using ceph-ansible; now I have to move to cephadm, and I am on my learning journey. - How can I tell my cluster that it's a part of an HCI deployment? With ceph-ansible it was easy using is_hci : yes - The documentat

[ceph-users] Re: Some questions about cephadm

2024-02-26 Thread wodel youchi
hboard user with your desired role (e. g. > administrator) from the CLI: > > ceph dashboard ac-user-create [] -i > > > > - I had a problem with telemetry, I did not configure telemetry, > then > > when I clicked the button, the web gui became > inacces
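The dashboard-user creation the reply describes can be sketched like this (the username and password are placeholders; "administrator" is one of the built-in dashboard roles; run it on a node holding the admin keyring):

```shell
# Password is read from a file; hypothetical username "newadmin".
echo -n 'MyS3cretPass' > /tmp/dash_pass.txt
ceph dashboard ac-user-create newadmin -i /tmp/dash_pass.txt administrator
rm -f /tmp/dash_pass.txt
```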

[ceph-users] Re: Some questions about cephadm

2024-02-26 Thread wodel youchi
er v1.5.0 0da6a335fe13 15 months ago 23.9 MB Regards. On Mon, Feb 26, 2024 at 11:42, Robert Sander wrote: > Hi, > > On 26.02.24 11:08, wodel youchi wrote: > > > Then I tried to deploy using this command on the admin node: > > cephadm --image 192.168.2.36:4000/ceph/ceph:v1

[ceph-users] Re: Some questions about cephadm

2024-02-26 Thread wodel youchi
I've read that, but I didn't find how to use it. Should I use the --config *CONFIG_FILE* option? On Mon, Feb 26, 2024 at 13:59, Robert Sander wrote: > Hi, > > On 2/26/24 13:22, wodel youchi wrote: > > > > No didn't work, the bootstrap is still downloadi

[ceph-users] Re: Some questions about cephadm

2024-02-26 Thread wodel youchi
Hi; So that was it: create the initial-ceph.conf and use --config. Now all images come from the local registry. Thank you all for your help. Regards. On Mon, Feb 26, 2024 at 14:09, wodel youchi wrote: > I've read that, but I didn't find how to use it? > should I us
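The fix the thread converges on can be sketched as follows. The registry address comes from the thread; the image tags and the monitoring-stack versions are assumptions, so adjust them to what your registry actually mirrors:

```shell
# Point the monitoring-stack images at the local registry before bootstrap.
# The option names are the standard mgr/cephadm/container_image_* settings.
cat > initial-ceph.conf <<'EOF'
[mgr]
mgr/cephadm/container_image_prometheus = 192.168.2.36:4000/prometheus/prometheus:v2.43.0
mgr/cephadm/container_image_grafana = 192.168.2.36:4000/ceph/ceph-grafana:9.4.7
mgr/cephadm/container_image_alertmanager = 192.168.2.36:4000/prometheus/alertmanager:v0.25.0
mgr/cephadm/container_image_node_exporter = 192.168.2.36:4000/prometheus/node-exporter:v1.5.0
EOF

# Bootstrap with the local ceph image and the prepared config.
cephadm --image 192.168.2.36:4000/ceph/ceph:v17 bootstrap \
    --mon-ip 192.168.2.36 --config initial-ceph.conf
```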

[ceph-users] Migration from ceph-ansible to Cephadm

2024-02-29 Thread wodel youchi
Hi, I am in the middle of migration from ceph-ansible to cephadm (version quincy), so far so good ;-). And I have some questions : - I still have the ceph-crash container, what should I do with it? - The new rgw and mds daemons have some random string in their names (like rgw.opsrgw.controllera.*p
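Under cephadm the crash collector is a regular orchestrator service, so one way to reconcile a leftover ceph-crash container after the migration (a sketch, assuming the default everywhere-placement is acceptable) is:

```shell
# Let the orchestrator manage crash daemons on every host...
ceph orch apply crash '*'
# ...then check what is actually running.
ceph orch ps --daemon-type crash
```

The random suffix in rgw/mds daemon names is expected with cephadm; it distinguishes multiple daemons of the same service on one host.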

[ceph-users] Ceph orch doesn't execute commands and doesn't report correct status of daemons

2024-03-01 Thread wodel youchi
Hi, I have finished the conversion from ceph-ansible to cephadm yesterday. Everything seemed to be working until this morning, I wanted to redeploy rgw service to specify the network to be used. So I deleted the rgw services with ceph orch rm, then I prepared a yml file with the new conf. I appli

[ceph-users] Re: Ceph orch doesn't execute commands and doesn't report correct status of daemons

2024-03-01 Thread wodel youchi
Hi, I'll try the 'ceph mgr fail' and report back. In the meantime, my problem with the images... I am trying to use my local registry to deploy the different services. I don't know how to use the 'apply' and force my cluster to use my local registry. So basically, what I am doing so far is : 1 -
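One way to retarget an already-converted cluster at a local registry (a sketch; the registry address and the exact set of options are assumptions) is through the cephadm config options rather than per-service apply specs:

```shell
# Default image for ceph daemons deployed by cephadm.
ceph config set global container_image 192.168.2.36:4000/ceph/ceph:v17
# Monitoring-stack images.
ceph config set mgr mgr/cephadm/container_image_prometheus 192.168.2.36:4000/prometheus/prometheus:v2.43.0
ceph config set mgr mgr/cephadm/container_image_grafana 192.168.2.36:4000/ceph/ceph-grafana:9.4.7
# Restart the orchestrator so a fresh mgr picks up the changes.
ceph mgr fail
```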

[ceph-users] [Quincy] NFS ingress mode haproxy-protocol not recognized

2024-03-03 Thread wodel youchi
Hi; I tried to create an NFS cluster using this command : [root@controllera ceph]# ceph nfs cluster create mynfs "3 controllera controllerb controllerc" --ingress --virtual_ip 20.1.0.201 --ingress-mode haproxy-protocol Invalid command: haproxy-protocol not in default|keepalive-only And I got this
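The haproxy-protocol ingress mode is only recognized on releases that ship it; on a Quincy build that rejects it, the command has to fall back to the modes the release lists. A sketch that should parse on any Quincy (virtual IP and hosts taken from the thread; the CIDR suffix on the virtual IP is commonly required):

```shell
# Plain ingress (haproxy in front of the NFS gateways) without the PROXY protocol.
ceph nfs cluster create mynfs "3 controllera controllerb controllerc" \
    --ingress --virtual_ip 20.1.0.201/24
ceph nfs cluster info mynfs
```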

[ceph-users] [Quincy] cannot configure dashboard to listen on all ports

2024-03-04 Thread wodel youchi
Hi, ceph dashboard fails to listen on all IPs. log_channel(cluster) log [ERR] : Unhandled exception from module 'dashboard' while running on mgr.controllera: OSError("No socket could be created -- (('0.0.0.0', 8443): [Errno -2] Name or service not known) -- (('::', 8443, 0, 0): ceph version 17.2
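When the dashboard cannot bind, one workaround sketch (these are the standard mgr/dashboard settings; the port comes from the log above) is to set the bind address explicitly and restart the mgr:

```shell
ceph config set mgr mgr/dashboard/server_addr 0.0.0.0
ceph config set mgr mgr/dashboard/ssl_server_port 8443
ceph mgr fail
```

A bind failure on both 0.0.0.0 and :: with "Name or service not known" also often points at broken hostname resolution on the active mgr host, which is worth checking first.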

[ceph-users] ceph osd different size to create a cluster for Openstack : asking for advice

2024-03-13 Thread wodel youchi
Hi, I need some guidance from you folks... I am going to deploy a ceph cluster in HCI mode for an openstack platform. My hardware will be : - 03 control nodes : - 27 osd nodes : each node has 03 x 3.8 TB NVMe + 01 x 1.9 TB NVMe disks (those disks will all be used as OSDs) In my Openstack I will be cr

[ceph-users] [REEF][cephadm] new cluster all pg unknown

2024-03-14 Thread wodel youchi
Hi, I am creating a new ceph cluster using REEF. This is my host_specs file [root@controllera config]# cat hosts-specs2.yml service_type: host hostname: computehci01 addr: 20.1.0.2 location: chassis: chassis1 --- service_type: host hostname: computehci02 addr: 20.1.0.3 location: chassis: chassi
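The host spec quoted above, unflattened (addresses and the first chassis name are from the thread; the chassis assignment of the second host is an assumption), can be applied like this:

```shell
# Multi-document YAML spec: one host per document, applied in one go.
cat > hosts-specs2.yml <<'EOF'
service_type: host
hostname: computehci01
addr: 20.1.0.2
location:
  chassis: chassis1
---
service_type: host
hostname: computehci02
addr: 20.1.0.3
location:
  chassis: chassis1
EOF
ceph orch apply -i hosts-specs2.yml
```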

[ceph-users] Re: [REEF][cephadm] new cluster all pg unknown

2024-03-14 Thread wodel youchi
0 object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode on last_change 157 flags hashpspool,creating stripe_width 0 pg_autoscale_bias 4 pg_num_min 16 recovery_priority 5 application cephfs What am I missing, why won't the PGs peer? Regards. On Thu, Mar 14, 2024 at 15:36, wodel youchi wrote: >

[ceph-users] Re: [REEF][cephadm] new cluster all pg unknown

2024-03-14 Thread wodel youchi
Hi, Note: Firewall is disabled on all hosts. Regards. On Fri, Mar 15, 2024 at 06:42, wodel youchi wrote: > Hi, > > I did recreate the cluster again, and this is the result. > > This is my initial bootstrap > > cephadm --image 192.168.2.36:4000/ceph/ceph:v18 bootstrap

[ceph-users] Re: [REEF][cephadm] new cluster all pg unknown

2024-03-15 Thread wodel youchi
public and private (cluster) networks, I needed to use the --cluster_network option. Maybe I was in over my head, but sometimes it is not that clear. Regards. On Fri, Mar 15, 2024 at 07:18, wodel youchi wrote: > Hi, > > Note : Firewall is disabled on all hosts. > > Regards. >

[ceph-users] Re: [REEF][cephadm] new cluster all pg unknown

2024-03-15 Thread wodel youchi
use *ceph config set mon public_network* to specify my public net; now I have to test whether my clients can connect to it. Regards. On Fri, Mar 15, 2024 at 08:20, Stefan Kooman wrote: > On 15-03-2024 08:10, wodel youchi wrote: > > Hi, > > > > I found my error, it was a misma
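The fix the thread converges on — declaring the networks explicitly — can be sketched as follows (the subnets are examples matching the addresses seen earlier in the thread):

```shell
# Tell the mons which network the public interfaces live on...
ceph config set mon public_network 20.1.0.0/24
# ...and, when replication traffic runs on a separate network, set it too.
ceph config set global cluster_network 192.168.2.0/24
```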

[ceph-users] Error deploying Ceph Quincy using ceph-ansible 7 on Rocky 9

2023-03-08 Thread wodel youchi
Hi, I am trying to deploy Ceph Quincy using ceph-ansible on Rocky9. I am having some problems and I don't know where to search for the reason. PS : I did the same deployment on Rocky8 using ceph-ansible for the Pacific version on the same hardware and it worked perfectly. I have 03 controllers n

[ceph-users] Could you please explain the PG concept

2023-04-25 Thread wodel youchi
Hi, I am learning Ceph and I am having a hard time understanding PGs and PG calculation. I know that a PG is a collection of objects, and that PGs are replicated over the hosts to respect the replication size, but... In traditional storage, we use sizes in GB, TB and so on, and we create a pool from a bu
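The usual rule of thumb behind PG sizing (not from the thread itself; the 100-PGs-per-OSD target is the commonly documented default) can be worked through like this:

```shell
# total_pgs ~= (num_osds * target_per_osd) / replica_size,
# rounded up to the next power of two. Example: 12 OSDs, 3x replication.
osds=12; replica=3; target=100
raw=$(( osds * target / replica ))
pg=1
while [ "$pg" -lt "$raw" ]; do pg=$(( pg * 2 )); done
echo "$pg"   # 400 rounded up to a power of two -> 512
```

With the autoscaler enabled (the default on recent releases) this is only a sanity check; the cluster adjusts pg_num itself.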

[ceph-users] RBD mirroring, asking for clarification

2023-05-01 Thread wodel youchi
Hi, When using rbd mirroring, the mirroring concerns the images only, not the whole pool? So, we don't need to have a dedicated pool in the destination site to be mirrored; the only obligation is that the mirrored pools must have the same name. In other words, we create two pools with the same na
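The per-image model described above can be sketched with the standard rbd commands (pool, image and site names are placeholders):

```shell
# On both clusters: same pool name, mirroring in image mode.
rbd mirror pool enable mypool image

# On site-a: create a bootstrap token for the peer...
rbd mirror pool peer bootstrap create --site-name site-a mypool > /tmp/token
# ...copy the token over out of band, then on site-b: import it.
rbd mirror pool peer bootstrap import --site-name site-b mypool /tmp/token

# Enable mirroring per image (journal-based mode shown).
rbd mirror image enable mypool/myimage journal
```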

[ceph-users] Ceph recovery

2023-05-01 Thread wodel youchi
Hi, When creating a ceph cluster, a failure domain is defined, and by default it uses host as the minimal domain; that domain can be changed to chassis, rack, etc. My question is: Suppose I have three osd nodes, my replication is 3 and my failure domain is host, which means that each copy
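Changing the failure domain amounts to pointing the pool at a CRUSH rule with a different bucket type (rule and pool names here are placeholders):

```shell
# Replicated rule that spreads copies across chassis instead of hosts.
ceph osd crush rule create-replicated rep_by_chassis default chassis
ceph osd pool set mypool crush_rule rep_by_chassis
```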

[ceph-users] Re: Ceph recovery

2023-05-01 Thread wodel youchi
the second and third host comes back the data > will recover. Always best to have an additional host beyond the size > setting for this reason. > > Respectfully, > > *Wes Dillingham* > w...@wesdillingham.com > LinkedIn <http://www.linkedin.com/in/wesleydillingham>

[ceph-users] Re: RBD mirroring, asking for clarification

2023-05-03 Thread wodel youchi
> modifies images or even the entire pool. Why not simply create a > different pool and separate those clients? > > Thanks, > Eugen > > Quoting wodel youchi: > > > Hi, > > > > When using rbd mirroring, the mirroring concerns the images only, not the

[ceph-users] Re: RBD mirroring, asking for clarification

2023-05-03 Thread wodel youchi
> clarify. Anyway, I would use dedicated pools for rbd mirroring and > then add more pools for different use-cases. > > Regards, > Eugen > > Quoting wodel youchi: > > > Hi, > > > > Thanks > > I am trying to find out what is the best way to synchr

[ceph-users] [Pacific] Admin keys no longer works I get access denied URGENT!!!

2023-05-31 Thread wodel youchi
Hi, After a mistaken operation, the admin key no longer works; it seems it has been modified. My cluster is built using containers. When I execute ceph -s I get [root@controllera ceph]# ceph -s 2023-05-31T11:33:20.940+0100 7ff7b2d13700 -1 monclient(hunting): handle_auth_bad_method server allowed
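A common recovery sketch for a broken client.admin key on a containerized cluster is to authenticate as the monitor itself, whose keyring lives in the mon data directory, and read back (or re-import) the admin key. The mon name and paths below are assumptions for this deployment:

```shell
# Inside the mon container (e.g. "cephadm shell --name mon.controllera"):
# read the admin key the cluster actually holds, bypassing the broken client key.
ceph -n mon. -k /var/lib/ceph/mon/ceph-controllera/keyring auth get client.admin

# If the stored key really was changed, re-import the correct keyring:
# ceph -n mon. -k /var/lib/ceph/mon/ceph-controllera/keyring auth import -i client.admin.keyring
```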

[ceph-users] What is the best way to use disks with different sizes

2023-07-03 Thread wodel youchi
Hi, I will be deploying a Proxmox HCI cluster with 3 nodes. Each node has 3 NVMe disks of 3.8 TB each and a 4th NVMe disk of 7.6 TB. Technically I need one pool. Is it good practice to use all disks to create the one pool I need, or is it better to create two pools, one on each group of disks? If

[ceph-users] Re: What is the best way to use disks with different sizes

2023-07-04 Thread wodel youchi
lternately, 2 and 4. > > > > On Jul 4, 2023, at 3:44 AM, Eneko Lacunza wrote: > > > > Hi, > > > > On 7/3/23 at 17:27, wodel youchi wrote: > >> I will be deploying a Proxmox HCI cluster with 3 nodes. Each node has 3 > >> NVMe disks of 3.8 TB e

[ceph-users] Does ceph permit the definition of new classes?

2023-07-24 Thread wodel youchi
Hi, Can I define new device classes in Ceph? I know that there are hdd, ssd and nvme, but can I define other classes? Regards.
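Yes — device classes are free-form labels; hdd, ssd and nvme are just the ones Ceph auto-detects. A sketch with a made-up class name and a placeholder OSD id:

```shell
# An existing class must be cleared before a new one can be assigned.
ceph osd crush rm-device-class osd.7
ceph osd crush set-device-class big-nvme osd.7
# CRUSH rules can then target the custom class.
ceph osd crush rule create-replicated on-big-nvme default host big-nvme
```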