[ceph-users] cephadm (curl master)/15.2.9:: how to add orchestration

2021-03-11 Thread Adrian Sevcenco
Hi! After an initial bumpy bootstrapping (IMHO the defaults should be whatever is already defined in the user's .ssh, with custom values set up via CLI arguments) I'm now stuck adding any service/hosts/OSDs because apparently I lack orchestration .. the documentation shows a big "Page does no
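
For reference, the orchestrator backend on a cephadm-bootstrapped cluster is normally enabled from the CLI roughly like this (a sketch of the usual commands, not the resolution posted in the thread):

  ceph mgr module enable cephadm
  ceph orch set backend cephadm
  ceph orch status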

[ceph-users] Re: cephadm (curl master)/15.2.9:: how to add orchestration

2021-03-11 Thread Adrian Sevcenco
On 3/11/21 3:07 PM, Sebastian Wagner wrote: Hi Adrian, Hi! On 11.03.21 at 13:55, Adrian Sevcenco wrote: Hi! After an initial bumpy bootstrapping (IMHO the defaults should be whatever is already defined in the user's .ssh, with custom values set up via CLI arguments) I'm now stuck addin
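
Once the orchestrator answers, adding hosts and OSDs usually follows this pattern (the host name below is a placeholder, and the SSH step assumes the cephadm-generated key is used):

  ceph cephadm get-pub-key > ~/ceph.pub
  ssh-copy-id -f -i ~/ceph.pub root@host2.example.com
  ceph orch host add host2.example.com
  ceph orch device ls
  ceph orch apply osd --all-available-devices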

[ceph-users] NVME pool creation time :: OSD services strange state

2021-03-11 Thread Adrian Sevcenco
Hi! So, after I selected the tags to add 2 NVMe SSDs I declared a replicated n=2 pool .. and for the last 30 min the progress shown in the notification is 0% and iotop shows around 100K/s for 2 (???) ceph-mon processes and that's all ... and in my service list the OSD services look somehow empty: ht
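
For context, a pool like the one described above would typically be created along these lines (pool name and PG count are illustrative; size 2 is generally discouraged for data you care about):

  ceph osd pool create nvme_pool 64 64 replicated
  ceph osd pool set nvme_pool size 2
  ceph -s    # watch the PGs go through creating/peering/active+clean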

[ceph-users] Re: NVME pool creation time :: OSD services strange state

2021-03-11 Thread Adrian Sevcenco
On 3/11/21 4:45 PM, Adrian Sevcenco wrote: Hi! So, after I selected the tags to add 2 NVMe SSDs I declared a replicated n=2 pool .. and for the last 30 min the progress shown in the notification is 0% and iotop shows around 100K/s for 2 (???) ceph-mon processes and that's all ... and in my service

[ceph-users] Re: NVME pool creation time :: OSD services strange state - SOLVED

2021-03-11 Thread Adrian Sevcenco
On 3/11/21 5:01 PM, Adrian Sevcenco wrote: On 3/11/21 4:45 PM, Adrian Sevcenco wrote: Hi! So, after I selected the tags to add 2 NVMe SSDs I declared a replicated n=2 pool .. and for the last 30 min the progress shown in the notification is 0% and iotop shows around 100K/s for 2 (???) ceph-mon

[ceph-users] ceph bootstrap initialization :: nvme drives not empty after >12h

2021-03-12 Thread Adrian Sevcenco
Hi! Yesterday I bootstrapped (with cephadm) my first Ceph installation and things looked somewhat OK .. but today the OSDs are not yet ready and I have these warnings in the dashboard: MDS_SLOW_METADATA_IO: 1 MDSs report slow metadata IOs PG_AVAILABILITY: Reduced data availability: 64 pgs inactive PG_
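
A first-pass checklist for "OSDs not ready / PGs inactive" on a fresh cephadm cluster, assuming nothing beyond the stock CLI:

  ceph orch device ls              # are the NVMe drives detected and marked available?
  ceph orch ps --daemon-type osd   # did the OSD daemons actually deploy and start?
  ceph osd tree                    # are the OSDs up and in?
  ceph pg dump_stuck inactive      # which PGs are stuck, and in what state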

[ceph-users] Re: ceph bootstrap initialization :: nvme drives not empty after >12h

2021-03-12 Thread Adrian Sevcenco
On 3/12/21 12:31 PM, Eneko Lacunza wrote: Hi Adrian, Hi! On 12/3/21 at 11:26, Adrian Sevcenco wrote: Hi! Yesterday I bootstrapped (with cephadm) my first Ceph installation and things looked somewhat OK .. but today the OSDs are not yet ready and I have these warnings in the dashboard

[ceph-users] Re: ceph bootstrap initialization :: nvme drives not empty after >12h

2021-03-12 Thread Adrian Sevcenco
I will also try external clients but of course I will be capped at the theoretical 120 MiB/s, which I'm curious whether I can reach (9k MTU). Thanks!! Adrian
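
The rough arithmetic behind that 120 MiB/s figure, assuming a 1 Gbit/s link and ignoring protocol overhead:

  $ echo 'scale=1; 10^9 / 8 / 2^20' | bc
  119.2

Ethernet/IP/TCP framing shaves a little off even with 9k MTU, so sustained transfers land slightly below that.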

[ceph-users] howto:: emergency shutdown procedure and maintenance

2021-03-18 Thread Adrian Sevcenco
Hi! What steps/procedures are required for emergency shutdown and for machine maintenance? Thanks a lot! Adrian
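
A commonly used sequence for a planned whole-cluster shutdown, offered here as a sketch rather than an official procedure (stop client I/O first):

  ceph osd set noout
  ceph osd set norebalance
  ceph osd set norecover
  # stop daemons: clients/MDS first, then OSDs, then mons/mgrs; power off
  # on power-up, start mons/mgrs, then OSDs, then MDS, then:
  ceph osd unset noout
  ceph osd unset norebalance
  ceph osd unset norecover

For taking a single host out for maintenance, newer cephadm releases also provide "ceph orch host maintenance enter/exit".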

[ceph-users] Re: howto:: emergency shutdown procedure and maintenance

2021-03-19 Thread Adrian Sevcenco
d? Thanks a lot again :) Adrian If you scripted it, there's probably not a lot of difference in the time taken to shut down. A.

[ceph-users] cephadm/podman :: upgrade to pacific stuck

2021-04-01 Thread Adrian Sevcenco
Hi! I have a single-machine Ceph installation and after trying to update to Pacific the upgrade is stuck with: ceph -s cluster: id: d9f4c810-8270-11eb-97a7-faa3b09dcf67 health: HEALTH_WARN Upgrade: Need standby mgr daemon services: mon: 1 daemons, quorum sev.spacescience.ro (age 3w) mgr: sev
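
The health warning names the blocker: cephadm wants a standby mgr to fail over to before it can restart the active one. On a single-node cluster a second mgr can usually be added so the upgrade proceeds (a sketch; the target version is illustrative):

  ceph orch apply mgr 2                             # run two mgr daemons on the one host
  ceph orch upgrade status                          # check whether the upgrade resumes
  ceph orch upgrade start --ceph-version 16.2.0     # restart the upgrade if it was stopped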

[ceph-users] Re: cephadm/podman :: upgrade to pacific stuck

2021-04-01 Thread Adrian Sevcenco
ank you! Adrian when the primary is restarted. I suspect you would then run into the same thing with the mon. All sorts of things tend to crop up on a cluster this minimal. On Apr 1, 2021, at 10:15 AM, Adrian Sevcenco wrote: Hi! I have a single-machine Ceph installation and after tryi

[ceph-users] cephadm:: how to change the image for services

2021-04-05 Thread Adrian Sevcenco
Hi! How/where can I change the image configured for a service? I tried to modify /var/lib/ceph///unit.{image,run} but after restarting, ceph orch ps shows that the service uses the same old image. What other configuration locations are there for the Ceph components besides /etc/ceph (which is quite
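
unit.run and unit.image are regenerated by cephadm, so hand-editing them does not persist. The image is normally changed through the orchestrator instead, roughly like this (image and daemon names below are illustrative):

  ceph config set global container_image quay.io/ceph/ceph:v16.2.0
  ceph orch redeploy mgr                 # redeploy a whole service with the new image
  ceph orch daemon redeploy mgr.sev      # or just one daemon
  ceph orch ps                           # verify the running image actually changed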

[ceph-users] Re: cephadm:: how to change the image for services

2021-04-05 Thread Adrian Sevcenco
On 4/5/21 3:27 PM, 胡 玮文 wrote: On April 5, 2021 at 19:29, Adrian Sevcenco wrote: Hi! How/where can I change the image configured for a service? I tried to modify /var/lib/ceph///unit.{image,run} but after restarting, ceph orch ps shows that the service uses the same old image. Hi Adrian, Hi! Try

[ceph-users] Re: cephadm/podman :: upgrade to pacific stuck

2021-04-08 Thread Adrian Sevcenco
IMAGE then again the single image) Thanks a lot!! Adrian Supporting automated single-node upgrades is high on the list.. we hope to have it fixed soon. s On Thu, Apr 1, 2021 at 1:24 PM Adrian Sevcenco wrote: On 4/1/21 8:19 PM, Anthony D'Atri wrote: I think what it’s saying is that i

[ceph-users] cephfs:: store files on different pools?

2021-05-27 Thread Adrian Sevcenco
Hi! Is it (technically) possible to instruct CephFS to store files < 1 MiB on a (replicated) pool and the other files on another (EC) pool? And even more, is it possible to make the same kind of decision based on the path of the file? (let's say that critical files with names like r"/critical_path/cri
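
Path-based placement is supported through CephFS file layouts; size-based placement is not. A minimal sketch, assuming a filesystem named "cephfs", an extra data pool named "ecpool", and a mount at /mnt/cephfs (all placeholders):

  ceph osd pool set ecpool allow_ec_overwrites true
  ceph fs add_data_pool cephfs ecpool
  setfattr -n ceph.dir.layout.pool -v ecpool /mnt/cephfs/bulk_data
  getfattr -n ceph.dir.layout /mnt/cephfs/bulk_data

Only files created after the xattr is set land in the new pool; existing files keep their original layout.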

[ceph-users] crushmap rules :: host selection

2024-01-27 Thread Adrian Sevcenco
Hi! I'm new to Ceph and I'm struggling to map my current storage knowledge onto Ceph... So, I will state my understanding of the context and the question, so please correct anything that I got wrong :) So, files (or pieces of files) are put in PGs that are given sections o
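
For context on how replicas get spread across hosts: the CRUSH rule's failure domain controls it, and a host-level replicated rule is created and attached to a pool roughly like this (rule and pool names are placeholders):

  ceph osd crush rule create-replicated rep_host default host
  ceph osd pool set mypool crush_rule rep_host
  ceph osd crush tree    # inspect the hierarchy (root -> host -> osd) the rule walks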

[ceph-users] Re: crushmap rules :: host selection

2024-01-28 Thread Adrian Sevcenco
Original Message Subject: [ceph-users] crushmap rules :: host selection From: Anthony D'Atri To: Adrian Sevcenco Date: 1/28/2024, 3:56:21 AM First of all, thanks a lot for the info and for taking the time to help a beginner :) Pools are a logical name for a storage space bu

[ceph-users] Re: crushmap rules :: host selection

2024-01-28 Thread Adrian Sevcenco
Original Message Subject: [ceph-users] Re: crushmap rules :: host selection From: Anthony D'Atri To: Adrian Sevcenco Date: 1/28/2024, 6:03:21 PM First of all, thanks a lot for the info and for taking the time to help a beginner :) Don't mention it. This is a community, it's

[ceph-users] Re: crushmap rules :: host selection

2024-01-28 Thread Adrian Sevcenco
Original Message Subject: [ceph-users] crushmap rules :: host selection From: Anthony D'Atri To: Adrian Sevcenco Date: 1/28/2024, 11:34:00 PM so it depends on the failure domain .. but with host failure domain, if there is space on some other OSDs, will the missing OS