[ceph-users] recommendation for buying CEPH appliance

2024-06-18 Thread Steven Vacaroaia
Hi, Could you please recommend a vendor that sells Ceph appliances? (preferably Ceph+Proxmox) In the USA or Canada would be great, but not necessary. Among the ones I know are Eurostor (Germany) and 45drives (Canada). Many thanks, Steven

[ceph-users] ceph rbd iscsi gwcli Non-existent images

2020-08-07 Thread Steven Vacaroaia
Hi, I would appreciate any help/hints to solve this issue: iSCSI (gwcli) cannot see the images anymore. This configuration worked fine for many months. What changed is that Ceph is "nearly full". I am in the process of cleaning it up (by deleting objects from one of the pools) and I do see reads
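
A few checks that may help while the cleanup runs, to confirm how close the cluster is to its full/nearfull ratios (standard commands, nothing specific to this cluster assumed):

  ceph df detail                       # per-pool usage vs raw capacity
  ceph health detail | grep -i full    # nearfull / backfillfull / full warnings
  ceph osd dump | grep ratio           # full_ratio, backfillfull_ratio, nearfull_ratio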

[ceph-users] Re: ceph rbd iscsi gwcli Non-existent images

2020-08-10 Thread Steven Vacaroaia
: layering, exclusive-lock, object-map, fast-diff, deep-flatten op_features: flags: create_timestamp: Thu Nov 29 13:56:28 2018 On Mon, 10 Aug 2020 at 09:21, Jason Dillaman wrote: > On Fri, Aug 7, 2020 at 2:37 PM Steven Vacaroaia wrote: > > > > Hi, > &

[ceph-users] ceph NFS reef - cephadm reconfigure haproxy

2025-05-05 Thread Steven Vacaroaia
Hi, I am testing NFS, trying to make sure that deploying it as below will give me redundancy: ceph nfs cluster create main "1 ceph-01,ceph-02" --port 2049 --ingress --virtual-ip 10.90.0.90 From what I have read so far (and some of the posts on this list), the only way to get the NFS "surviving "
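
For reference, a minimal sketch of verifying that the --ingress deployment actually created the HA pieces; the cluster name and hosts are taken from the command above, and the ingress service name is assumed to follow the ingress.nfs.<cluster> convention:

  ceph nfs cluster info main            # should list the virtual IP and backend NFS daemons
  ceph orch ls ingress                  # an ingress.nfs.main service (haproxy + keepalived) should appear
  ceph orch ps --daemon-type haproxy
  ceph orch ps --daemon-type keepalived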

[ceph-users] Ceph reef ingress service - v4v6_flag undefined

2025-05-07 Thread Steven Vacaroaia
Hi, I am unable to deploy the ingress service because "v4v6_flag" is undefined. I couldn't find any information about this flag. The ingress.yaml file used is similar to this one. Any help would be greatly appreciated. Steven service_type: ingress service_id: rgw placement: hosts: - ceph-nod
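
For comparison, a complete ingress spec for RGW generally looks like the sketch below (host names, service_id and VIP are placeholders, not the poster's values); v4v6_flag itself is not something the spec sets, it is a variable inside the haproxy template:

  service_type: ingress
  service_id: rgw.main                # placeholder
  placement:
    hosts:
      - ceph-node-01
      - ceph-node-02
  spec:
    backend_service: rgw.main         # must match the deployed RGW service name
    virtual_ip: 10.90.0.91/24         # placeholder VIP
    frontend_port: 8080
    monitor_port: 1967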

[ceph-users] Re: Ceph reef ingress service - v4v6_flag undefined

2025-05-07 Thread Steven Vacaroaia
M src/spawn > M src/spdk > M src/xxHash > Switched to branch 'reef' > Your branch is up to date with 'upstream/reef'. > adking@fedora:~/orch-ceph/ceph/src$ > adking@fedora:~/orch-ceph/ceph/src$ > adking@fedora:~/orch-ceph/ceph/src$ (cd pybind/mgr/cephadm/;

[ceph-users] Re: Ceph reef ingress service - v4v6_flag undefined

2025-05-07 Thread Steven Vacaroaia
only those things are referenced in the one > you set with the config command. Using 18.2.4 as an example again, you'd > want the template to only reference variables being passed in > https://github.com/ceph/ceph/blob/v18.2.4/src/pybind/mgr/cephadm/services/ingress.py#L176-L
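
If the undefined variable comes from a custom haproxy template set earlier with a config command (as the reply above suggests), removing that template and redeploying should make cephadm fall back to the one bundled with the running release. The config-key name and ingress service name below are assumptions, not verified output:

  ceph config-key ls | grep -i haproxy                         # look for a custom template key
  ceph config-key rm mgr/cephadm/services/ingress/haproxy.cfg  # assumed key for the custom template
  ceph orch redeploy ingress.rgw                               # assumed ingress service name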

[ceph-users] Re: Ceph reef ingress service - v4v6_flag undefined

2025-05-08 Thread Steven Vacaroaia
excellent resource Many thanks Steven On Thu, 8 May 2025 at 09:34, Anthony D'Atri wrote: > https://docs.ceph.com/en/latest/cephadm/services/monitoring/ May help > > On May 8, 2025, at 9:05 AM, Steven Vacaroaia wrote: > > Hi, > > I thought about that and disab

[ceph-users] Re: Ceph reef ingress service - v4v6_flag undefined

2025-05-08 Thread Steven Vacaroaia
e a way to tell ceph orch to use a specific version ? or is there a way to deploy daemons using a "pulled " version Many thanks Steven On Thu, 8 May 2025 at 05:03, Anthony D'Atri wrote: > Any chance you have a fancy network proxy ? > > On May 8, 2025, at 1:45 AM, Steven Vac
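
Two ways that should let cephadm deploy daemons from an explicit (already pulled) image rather than the default tag; the image tag and daemon name are placeholders:

  # default image for anything (re)deployed from now on
  ceph config set global container_image quay.io/ceph/ceph:v18.2.4
  # or redeploy one daemon with an explicit image
  ceph orch daemon redeploy ingress.rgw.ceph-01.abcxyz quay.io/ceph/ceph:v18.2.4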

[ceph-users] reef upgrade 18.2.2 to 18.2.7 - slow operations in bluestore

2025-05-12 Thread Steven Vacaroaia
Hi, After a cephadm upgrade from 18.2.2 to 18.2.7 that worked perfectly, I am noticing lots (42 out of 161 OSDs) of "slow operations in bluestore" errors. My cluster has all 3 types of OSDs (NVMe, SSD, HDD + journaling). I found some articles mentioning the setting below but it did not help. Anyone els
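
As a starting point, the discard-related options usually mentioned alongside this warning can be checked and, if set, dialed back; whether they are the cause here is only a guess:

  ceph config get osd bdev_enable_discard
  ceph config get osd bdev_async_discard_threads
  ceph config set osd bdev_enable_discard false    # e.g. turn discard back off for the flash OSDs
  ceph health detail | grep -i slow                # which OSDs are currently flagged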

[ceph-users] Re: reef upgrade 18.2.2 to 18.2.7 - slow operations in bluestore

2025-05-13 Thread Steven Vacaroaia
Best, > *Laimis J.* > > On 13 May 2025, at 06:55, yite gu wrote: > > bdev_async_discard_threads more than 1 issue has been fixed in 18.2.7. pls > use `top -H -p ` observe osd thread cpu load, is there any > abnormal situation? And what abnormal logs are there in osd log? >

[ceph-users] CEPH Reef - HDD with WAL and DB on NVME

2025-05-27 Thread Steven Vacaroaia
Hi, I am a bit confused by the documentation, as it seems to say that, when using BlueStore, there is no need to put the journaling (WAL/DB) on faster storage when creating OSDs: https://docs.ceph.com/en/reef/rados/configuration/osd-config-ref/
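
For what it's worth, a minimal OSD service spec that keeps data on the HDDs and places the DB (and with it the WAL) on solid-state devices looks roughly like this; service_id and host_pattern are placeholders, and on hosts with both SSD and NVMe you would normally narrow db_devices further (by size, model, etc.):

  service_type: osd
  service_id: hdd-db-on-flash
  placement:
    host_pattern: '*'
  spec:
    data_devices:
      rotational: 1      # the spinning drives hold the data
    db_devices:
      rotational: 0      # DB (and WAL) land on the flash devices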

[ceph-users] Re: ceph squid - huge difference between capacity reported by "ceph -s" and "ceph df"

2025-06-29 Thread Steven Vacaroaia
Hi Janne, Thanks. That makes sense, since I have allocated 196 GB for DB and 5 GB for WAL for all 42 spinning OSDs. Again, thanks, Steven On Sun, 29 Jun 2025 at 12:02, Janne Johansson wrote: > Den sön 29 juni 2025 kl 17:22 skrev Steven Vacaroaia : > >> Hi, >> >> I just built
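
If the gap really is the space set aside for DB/WAL, the rough arithmetic would be 42 OSDs x (196 GB + 5 GB) = 42 x 201 GB ≈ 8.4 TB of raw capacity accounted for outside the data devices.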

[ceph-users] Rocky8 (el8) client for squid 19.2.2

2025-07-17 Thread Steven Vacaroaia
Hi, I noticed there are no client RPMs for Rocky 8 (el8) in the 19.2.2 repository. Would ceph-common for reef allow me to mount CephFS file systems without issues? Many thanks, Steven

[ceph-users] Re: squid 19.2.2 - troubleshooting pgs in active+remapped+backfill - no pictures

2025-07-11 Thread Steven Vacaroaia
Thanks Anthony. Changing the scheduler will require restarting all OSDs, right, using "ceph orch restart osd"? Is this done in a staggered manner or do I need to "stagger" them myself? Steven On Fri, 11 Jul 2025 at 12:14, Anthony D'Atri wrote: > What you describe sounds like expected behavior. I
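
For the record, as far as I know "ceph orch restart osd" restarts all OSD daemons in the service without waiting for PGs to settle in between, so a manual stagger would look something like this; the OSD id is a placeholder:

  ceph osd set noout                 # optional: avoid data movement during the restarts
  ceph orch daemon restart osd.12    # repeat per OSD (or per host), waiting for HEALTH_OK in between
  ceph osd unset noout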

[ceph-users] squid 19.2.2 - troubleshooting pgs in active+remapped+backfill - no pictures

2025-07-11 Thread Steven Vacaroaia
Hi, I sent another version of this message with pictures that is awaiting moderation since it is so big - apologies for that. In the meantime I got approval to share the output of some of the commands - see attached. I have a 19.2.2 cluster deployed with cephadm, 7 nodes and 2 networks (cluster (2 x 1
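
A few commands that usually narrow down why PGs sit in active+remapped+backfill (all standard, nothing cluster-specific assumed):

  ceph pg ls remapped | head -20     # which PGs, and between which OSDs
  ceph osd df tree                   # per-OSD utilisation and weights, often reveals imbalance
  ceph balancer status               # whether the balancer is the one moving data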

[ceph-users] Re: ceph squid - huge difference between capacity reported by "ceph -s" and "ceph df"

2025-06-30 Thread Steven Vacaroaia
Ds > > > > Here is the osd tree of one of the servers > all the other 6 are similar > > > > Steven > > > On Sun, 29 Jun 2025 at 14:25, Anthony D'Atri wrote: > >> WAL by default rides along with the DB and rarely warrants a separate or >>

[ceph-users] Re: squid 19.2.2 - discrepancies between GUI and CLI

2025-07-17 Thread Steven Vacaroaia
Thanks for the suggestion. Unfortunately "ceph mgr fail" did not solve the issue. Is there a better way to "fail"? Steven On Thu, 17 Jul 2025 at 14:04, Anthony D'Atri wrote: > Try failing the mgr > > > On Jul 17, 2025, at 1:48 PM, Steven Vacaroaia
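
Beyond failing the mgr, the only other non-disruptive thing worth trying for a stale dashboard view is cycling the module itself; this is a guess, not a known fix for this particular discrepancy:

  ceph mgr module disable dashboard
  ceph mgr module enable dashboard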

[ceph-users] Re: Rocky8 (el8) client for squid 19.2.2

2025-07-17 Thread Steven Vacaroaia
> > mount -t cephfs... > > just works. > > Best, > Malte > > On 17.07.25 17:23, Steven Vacaroaia wrote: > > Hi, > > > > I noticed there is no client /rpms for Rocky8 (el8) on 19.2.2 repository > > > > Would ceph-common for reef allow me to mo
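
For reference, the classic kernel-client mount that an older ceph-common (or even no Ceph packages at all, beyond the kernel module) can do; the monitor address, client name and paths are placeholders:

  mount -t ceph 10.0.0.1:6789:/ /mnt/cephfs -o name=cephfs-client,secretfile=/etc/ceph/cephfs-client.secret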

[ceph-users] Re: squid 19.2.2 - cannot remove 'unknown" OSD

2025-07-21 Thread Steven Vacaroaia
ing else that I can try (other than a reboot) ? Many thanks Steven On Mon, 21 Jul 2025 at 17:25, Anthony D'Atri wrote: > Look at /var/lib/ceph on ceph-host-7 for a leftover directory for osd.8 > > Also try > ceph osd crush remove osd.8 > ceph auth del osd.8 > ceph osd rm
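
If the quoted commands still leave a ghost osd.8 behind, the remaining options short of a reboot are usually along these lines (osd.8 and ceph-host-7 come from the thread, the fsid is a placeholder):

  ceph osd purge 8 --yes-i-really-mean-it                # crush remove + auth del + osd rm in one step
  ceph orch daemon rm osd.8 --force                      # drop the orchestrator-managed daemon, if still listed
  cephadm rm-daemon --name osd.8 --fsid <fsid> --force   # on ceph-host-7, if a stale unit/container remains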

[ceph-users] Re: Newby woes with ceph

2025-07-22 Thread Steven Vacaroaia
Hi Malte, why "And do not use Ubuntu 24.04", please? I just reinstalled my cluster and use 24.04 and 19.2.2, so, if need be, there is still time to redo / reconfigure. Steven On Tue, 22 Jul 2025 at 04:05, Malte Stroem wrote: > Hello Stéphane, > > I think you're mixing and mismatching up a lo

[ceph-users] Squid 19.2.2 - mon_target_pg_per_osd change not applied

2025-07-24 Thread Steven Vacaroaia
Hi, I wanted to increase the number of PGs per OSD and did so by using ceph config set global mon_target_pg_per_osd 800 Although the OSD config has the new value, I am still unable to create pools that would result in more than 250 PGs per OSD. I have restarted the OSDs ... and the monitors ...
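
One thing worth checking: as far as I know the 250-PG-per-OSD ceiling enforced at pool creation comes from mon_max_pg_per_osd, while mon_target_pg_per_osd only steers the autoscaler, so:

  ceph config get mon mon_max_pg_per_osd            # default 250
  ceph config set global mon_max_pg_per_osd 500     # raise the hard cap, if that is really intended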

[ceph-users] Re: squid 19.2.2 - cannot bootstrap - error writing to /tmp/monmap (21) Is a directory

2025-07-24 Thread Steven Vacaroaia
but with the arguments we pass to > write_tmp while creating the monmap hardcoded in, and an additional bit to > print out the name and content (a blank line) of the file. Keep in mind the > file will get automatically cleaned up when the NamedTemporaryFile goes out > of scope / the scr

[ceph-users] Re: squid 19.2.2 - cannot bootstrap - error writing to /tmp/monmap (21) Is a directory

2025-07-24 Thread Steven Vacaroaia
rapping I’m not sure about the z on the next line, but -v > can be considered like a bind mount of the first path so that container > sees it on the second path. > > Now, as to what happened to /tmp/ceph-tmp, I can’t say. > > > > On Jul 24, 2025, at 10:53 AM, Steven Vaca

[ceph-users] squid 19.2.2 deployed with cephadmin - no grafana data on some dashboards ( RGW, MDS)

2025-07-22 Thread Steven Vacaroaia
Hi, Have any of you encountered this issue? Most of the dashboards work, but the MDS- and RGW-specific ones have no data. Prometheus (http://server:9283) does not show anything with RGW or MDS in it, so, I am guessing, Grafana cannot show what is not collected :-) ceph-exporter is running. Any suggestions/
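
Since the RGW/MDS panels are fed by per-daemon perf counters rather than the mgr module, a quick way to see whether they are being scraped at all; the port is an assumption (ceph-exporter usually listens on 9926, the mgr prometheus module on 9283) and the host name is a placeholder:

  ceph orch ps --daemon-type ceph-exporter              # one exporter per host expected
  curl -s http://rgw-host:9926/metrics | grep -ci rgw   # should be non-zero on an RGW host
  # then check the Prometheus UI -> Status -> Targets for the ceph-exporter job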

[ceph-users] Re: squid 19.2.2 - cannot bootstrap - error writing to /tmp/monmap (21) Is a directory

2025-07-24 Thread Steven Vacaroaia
write_tmp while creating the monmap hardcoded in, and an additional bit to > print out the name and content (a blank line) of the file. Keep in mind the > file will get automatically cleaned up when the NamedTemporaryFile goes out > of scope / the script completes. For me it just print

[ceph-users] squid 19.2.2 - RGW performance tuning

2025-07-29 Thread Steven Vacaroaia
Hi, Are there any performance tuning settings for RGW that you can share, please? With the hardware I have, I believe I should be able to "squeeze" out more: 7 Supermicro dual-CPU, 128 cores (Intel Gold 6530), 1 TB RAM, 2 x 100 Gb bonded cluster network, 2 x 25 Gb bonded private network, cephadm, 5 RG
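
Not a definitive list, but the RGW knobs most often mentioned for throughput, plus scaling out the daemons themselves; the values are illustrative, not recommendations for this hardware:

  ceph config set client.rgw rgw_thread_pool_size 1024         # frontend worker threads (default 512)
  ceph config set client.rgw rgw_max_concurrent_requests 2048  # default 1024
  # and/or run more RGW daemons per host via the service spec:
  #   placement:
  #     count_per_host: 2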

[ceph-users] squid 19.2.2 - osd_memory_target_autotune - best practices when host has lots of RAM

2025-07-31 Thread Steven Vacaroaia
Hi, What is the best practice / your expert advice about using osd_memory_target_autotune on hosts with lots of RAM? My hosts have 1 TB RAM, only 3 NVMe drives, 12 HDDs and 12 SSDs. Should I disable autotune and allocate more RAM? I saw some suggestions of 16 GB per NVMe, 8 GB per SSD and 6 GB per HDD. Many
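
A sketch of both approaches, using the per-device-class config masks from the docs linked later in the thread; the sizes are the ones quoted above, not a recommendation, and class:nvme assumes those OSDs actually carry the nvme device class (by default they may just show up as ssd):

  # keep the autotuner but give it a larger share of host RAM (default ratio is 0.7)
  ceph config set mgr mgr/cephadm/autotune_memory_target_ratio 0.8
  # or disable autotuning and set explicit targets per device class
  ceph config set osd osd_memory_target_autotune false
  ceph config set osd/class:nvme osd_memory_target 16G
  ceph config set osd/class:ssd osd_memory_target 8G
  ceph config set osd/class:hdd osd_memory_target 6G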

[ceph-users] Re: squid 19.2.2 - osd_memory_target_autotune - best practices when host has lots of RAM

2025-08-01 Thread Steven Vacaroaia
iches -- are these hosts perhaps > converged compute+storage? > > > > > On Jul 31, 2025, at 10:17 AM, Steven Vacaroaia wrote: > > > > Hi > > > > What is the best practice / your expert advice about using > > osd_memory_target_autotune > > on hosts

[ceph-users] Re: squid 19.2.2 - osd_memory_target_autotune - best practices when host has lots of RAM

2025-08-01 Thread Steven Vacaroaia
topic=osds-automatically-tuning-osd-memory > > > https://docs.ceph.com/en/latest/rados/configuration/ceph-conf/#sections-and-masks > > Many thanks > > Steven > > On Thu, 31 Jul 2025 at 10:43, Anthony D'Atri wrote: > >> IMHO the autotuner is awesome. >> >&g

[ceph-users] Re: squid 19.2.2 deployed with cephadmin - no grafana data on some dashboards ( RGW, MDS)

2025-07-31 Thread Steven Vacaroaia
errors in the prometheus and/or ceph-mgr > > log? I'd ignore grafana for now since it only displays what prometheus > > is supposed to collect. To get fresh logs, I would fail the mgr and > > probably restart prometheus as well. > > > > Regards, > > Eugen >
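
The suggestion above translates roughly to the following; the daemon/host names are placeholders:

  ceph mgr fail                              # force a standby mgr to take over
  ceph orch restart prometheus
  ceph orch ps --daemon-type prometheus      # confirm it came back
  journalctl -u 'ceph-*@prometheus*' -n 100  # on the prometheus host, for fresh logs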