[ceph-users] Re: SMB Service in Squid

2024-09-03 Thread Robert W. Eckert
configure the permissions, and determine how to set it up for a cluster; this is a big first step. Thanks, Rob -----Original Message----- From: Robert W. Eckert Sent: Tuesday, September 3, 2024 7:49 PM To: John Mulligan ; ceph-users@ceph.io Subject: [ceph-users] Re: SMB Service in Squid Than…

[ceph-users] Re: SMB Service in Squid

2024-09-03 Thread Robert W. Eckert
…k order 1 for /var/lib/samba/lock/smbXsrv_tcon_global.tdb smbd_smb2_request_error_ex: smbd_smb2_request_error_ex: idx[1] status[NT_STATUS_BAD_NETWORK_NAME] || at ../../source3/smbd/smb2_tcon.c:151 signed SMB2 message (sign_algo_id=2) Thanks, Rob -----Original Message----- From: John Mulligan Sen…

[ceph-users] Re: SMB Service in Squid

2024-09-03 Thread Robert W. Eckert
…mounted in a specific location? -Rob -----Original Message----- From: John Mulligan Sent: Tuesday, September 3, 2024 4:08 PM To: ceph-users@ceph.io Cc: Robert W. Eckert Subject: Re: [ceph-users] SMB Service in Squid On Tuesday, September 3, 2024 3:42:29 PM EDT Robert W. Eckert wrote: > I h…

[ceph-users] SMB Service in Squid

2024-09-03 Thread Robert W. Eckert
I have upgraded my home cluster to 19.1.0 and wanted to try out the SMB orchestration features to improve my hacked SMB share using CTDB and SMB services on each host. My smb.yaml file looks like:

    service_type: smb
    service_id: home
    placement:
      hosts:
        - HOST1
        - HOST2
        - HOST3
        - …
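
For context, a hedged sketch of what a fuller Squid smb spec plus the apply step might look like; the spec: block fields (cluster_id, config_uri) are assumptions drawn from the Squid smb service documentation, not the poster's actual file:

    service_type: smb
    service_id: home
    placement:
      hosts: [HOST1, HOST2, HOST3]
    spec:
      cluster_id: home                         # hypothetical
      config_uri: rados://.smb/home/scc.toml   # hypothetical

The spec is then handed to the orchestrator with: ceph orch apply -i smb.yaml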

[ceph-users] Re: ceph orch host drain daemon type

2024-08-29 Thread Robert W. Eckert
If you are using cephadm, couldn't the host be removed from placing OSDs? On my cluster, I labeled the hosts for each service (OSD/MON/MGR/...) and have the services deployed by label. I believe that if you set that up, then when a label is removed from a host, its services eventually drain, as sketched below.
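
A hedged sketch of that label-based flow (host and label names are hypothetical):

    # tag hosts and deploy services by label
    ceph orch host label add host1 osd
    # removing the label lets label-placed daemons drain off the host
    ceph orch host label rm host1 osd
    # or drain every daemon on the host in one step
    ceph orch host drain host1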

[ceph-users] Re: why not block gmail?

2024-06-17 Thread Robert W. Eckert
Is there any way to have a subscription request validated? -----Original Message----- From: Marc Sent: Monday, June 17, 2024 7:56 AM To: ceph-users Subject: [ceph-users] Re: why not block gmail? I am putting ceph-users@ceph.io on the blacklist for now. Let me know via different email address…

[ceph-users] Re: Guidance on using large RBD volumes - NTFS

2024-05-08 Thread Robert W. Eckert
…I have seen it jump around from 500 ms to over 300 seconds and back to 3 seconds in a matter of a few refreshes. Windows Resource Monitor is showing a more consistent response time on multiple parallel writes of about 1.5-3 seconds per write. -----Original Message----- From: Robert W. Eck…

[ceph-users] Guidance on using large RBD volumes - NTFS

2024-05-07 Thread Robert W. Eckert
Hi - in my home, I have been running CephFS for a few years with reasonably good performance; however, exposing CephFS via SMB has been hit and miss. So I thought I could carve out space for an RBD device to share from a Windows machine. My setup: Ceph 18.2.2 deployed using ceph…
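
A minimal sketch of that RBD-for-Windows approach, assuming a pool named rbd, a hypothetical image name, and a Windows client with the WNBD driver installed:

    # on the cluster: create the image
    rbd create rbd/winshare --size 4T
    # on the Windows host: map it as a local disk, then format it NTFS
    rbd device map rbd/winshare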

[ceph-users] Re: CephFS On Windows 10

2024-02-28 Thread Robert W. Eckert
I have it working on my machines - the global configuration for me looks like:

    [global]
    fsid = fe3a7cb0-69ca-11eb-8d45-c86000d08867
    mon_host = [v2:192.168.2.142:3300/0,v1:192.168.2.142:6789/0] [v2:192.168.2.141:3300/0,v1:192.168.2.141:6789/0] [v2:192.168.2.199:3300/0,v1:192.168.2.1…
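
With a config like that in place, mounting from Windows is a one-liner; a hedged sketch (the drive letter is arbitrary, and the -c flag/path is an assumption about where the config was copied - the bare -l form appears later in this archive):

    # run from an elevated prompt with the Dokan driver installed
    .\ceph-dokan.exe -l X -c C:\ProgramData\ceph\ceph.conf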

[ceph-users] Re: Best way to replace Data drive of OSD

2024-01-04 Thread Robert W. Eckert
…and one OSD starts to fail as well. I'm just waiting for the replacement drive to arrive. ;-) Regards, Eugen Quoting "Robert W. Eckert": > Hi - I have a drive that is starting to show errors, and was wondering > what the best way to replace it is. > > I am on Ceph 18…

[ceph-users] Best way to replace Data drive of OSD

2024-01-03 Thread Robert W. Eckert
Hi - I have a drive that is starting to show errors, and was wondering what the best way to replace it is. I am on Ceph 18.2.1, using cephadm/containers. I have 3 hosts; each host has four 4 TB drives with a 2 TB NVMe device split amongst them for WAL/DB, and 10 Gb networking. Option 1: S…
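
A hedged sketch of the usual cephadm replacement flow (the OSD id, host, and device path are hypothetical):

    # drain the failing OSD but keep its id reserved for the replacement
    ceph orch osd rm 5 --replace
    # once the new drive is in, wipe it so the OSD spec can redeploy onto it
    ceph orch device zap host1 /dev/sdd --force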

[ceph-users] Re: v18.2.1 Reef released

2023-12-19 Thread Robert W. Eckert
Yes - I was on Ceph 18.2.0. I had to update the ceph.repo file in /etc/yum.repos.d to point to 18.2.1 to get the latest Ceph client. Meanwhile, the initial pull using --image worked flawlessly, so all my services were updated. - Rob -----Original Message----- From: Matthew Vernon Sent: Tues…
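
A hedged sketch of that repo edit on the RHEL host; it assumes the baseurl pins the rpm-18.2.0 path on download.ceph.com, which may not match every repo file:

    # point the Ceph repo at the 18.2.1 packages, then update the client
    sed -i 's#rpm-18\.2\.0#rpm-18.2.1#g' /etc/yum.repos.d/ceph.repo
    dnf update ceph-common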

[ceph-users] Re: v18.2.1 Reef released

2023-12-18 Thread Robert W. Eckert
Hi - I tried to start the upgrade using:

    ceph orch upgrade start --ceph-version 18.2.1
    Initiating upgrade to quay.io/ceph/ceph:v18:v18.2.1

And checked on the status:

    [root@rhel1 ~]# ceph orch upgrade status
    {
        "target_image": "quay.io/ceph/ceph…
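
Grounded in the follow-up above (the --image pull "worked flawlessly"), the workaround is to name the image explicitly rather than using --ceph-version:

    # sidestep the malformed v18:v18.2.1 tag by passing the image directly
    ceph orch upgrade start --image quay.io/ceph/ceph:v18.2.1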

[ceph-users] Re: Degraded FS on 18.2.0 - two monitors per host????

2023-08-18 Thread Robert W. Eckert
MDS home.hiho.cfuswn MDS version: ceph version 18.2.0 (5dd24139a1eada541a3bc16b6941c5dde975e26d) reef (stable) -----Original Message----- From: Robert W. Eckert Sent: Friday, August 18, 2023 12:48 AM To: ceph-users@ceph.io Subject: [ceph-users] Degraded FS on 18.2.0 - two monitors per host…

[ceph-users] Degraded FS on 18.2.0 - two monitors per host????

2023-08-17 Thread Robert W. Eckert
Hi - I have a 4 node cluster, and started to have some odd access issues to my file system "Home". When I started investigating, I saw the message "1 MDSs behind on trimming", but I also noticed that I seem to have 2 MDSs running on each server - 3 daemons up, with 5 standby. Is this expected…
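
A hedged sketch for checking how many MDS daemons the spec actually deploys (the fs name is taken from the MDS daemon name in the follow-up; the placement count is illustrative):

    ceph fs status home      # shows active vs. standby MDS daemons
    ceph orch ls mds         # shows the mds service spec and its count
    # e.g. shrink the spec if it deploys more daemons than intended
    ceph orch apply mds home --placement=4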

[ceph-users] Re: Adding new server to existing ceph cluster - with separate block.db on NVME

2023-03-28 Thread Robert W. Eckert
…Sent: Tuesday, March 28, 2023 3:50 AM To: ceph-users@ceph.io Subject: [ceph-users] Re: Adding new server to existing ceph cluster - with separate block.db on NVME Hi, On 28.03.23 05:42, Robert W. Eckert wrote: > > I am trying to add a new server to an existing cluster, but cannot get > the…

[ceph-users] Adding new server to existing ceph cluster - with separate block.db on NVME

2023-03-27 Thread Robert W. Eckert
Hi, I am trying to add a new server to an existing cluster, but cannot get the OSDs to be created correctly. When I try cephadm ceph-volume lvm create, it returns nothing but the container info. [root@hiho ~]# cephadm ceph-volume lvm create --bluestore --data /dev/sdd --block.db /dev/nvme0n1p3 Infer…
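
A hedged alternative under cephadm (host name taken from the prompt above): let the orchestrator build the OSD with a separate DB device instead of calling ceph-volume directly:

    ceph orch daemon add osd hiho:data_devices=/dev/sdd,db_devices=/dev/nvme0n1p3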

[ceph-users] Re: CephFS performance

2022-11-23 Thread Robert W. Eckert
Have you tested having the block.db and WAL for each OSD on a faster SSD/NVMe device/partition? I have a somewhat smaller environment, but was able to take a 2 TB SSD, split it into 4 partitions and use it for the DB and WAL for the 4 drives. By default, if you move the block.db to a different…

[ceph-users] Re: Recovery very slow after upgrade to quincy

2022-08-12 Thread Robert W. Eckert
Interesting - a few weeks ago I added a new disk to each node of my 3 node cluster and saw the same 2 Mb/s recovery. What I had noticed was that one OSD was using very high CPU and seems to have been the primary on the affected PGs. I couldn't find anything overly wrong with the OSD, networ…
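
A hedged, generic knob for this symptom on Quincy - not necessarily what resolved this thread - since Quincy's mClock scheduler throttles recovery in favor of client I/O by default:

    # favor recovery/backfill over client I/O while rebalancing
    ceph config set osd osd_mclock_profile high_recovery_ops
    # restore the profile once recovery catches up
    ceph config set osd osd_mclock_profile balanced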

[ceph-users] Re: Adding new drives to ceph with ssd DB+WAL

2022-08-02 Thread Robert W. Eckert
…Re: Adding new drives to ceph with ssd DB+WAL On 30.07.22 at 01:28, Robert W. Eckert wrote: > Hi - I am trying to add a new HDD to each of my 3 servers, and want to use a > spare SSD partition on the servers for the DB+WAL. My other OSDs are set > up the same way, but I can't see…

[ceph-users] Adding new drives to ceph with ssd DB+WAL

2022-07-29 Thread Robert W. Eckert
Hi - I am trying to add a new HDD to each of my 3 servers, and want to use a spare SSD partition on the servers for the DB+WAL. My other OSDs are set up the same way, but I can't seem to keep Ceph from creating the OSDs on the drives before I can actually create the OSD myself. I am trying to use th…
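
The usual hedged fix for cephadm grabbing new disks before you can place them manually is to make the default OSD spec unmanaged first:

    # stop cephadm from auto-consuming every available device
    ceph orch apply osd --all-available-devices --unmanaged=true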

[ceph-users] Re: Ceph on RHEL 9

2022-06-14 Thread Robert W. Eckert
…that as a win 😊 -----Original Message----- From: Gregory Farnum Sent: Friday, June 10, 2022 12:46 PM To: Robert W. Eckert Cc: ceph-users@ceph.io Subject: Re: [ceph-users] Re: Ceph on RHEL 9 We aren't building for CentOS 9 yet, so I guess the python dependency declarations don'…

[ceph-users] Re: Ceph on RHEL 9

2022-06-09 Thread Robert W. Eckert
Does anyone have any pointers to installing Ceph on RHEL 9? -----Original Message----- From: Robert W. Eckert Sent: Saturday, May 28, 2022 8:28 PM To: ceph-users@ceph.io Subject: [ceph-users] Ceph on RHEL 9 Hi - I started to update my 3 host cluster to RHEL 9, but came across a bit of a…

[ceph-users] Ceph on RHEL 9

2022-05-28 Thread Robert W. Eckert
Hi - I started to update my 3 host cluster to RHEL 9, but came across a bit of a stumbling block. The upgrade process uses the RHEL leapp process, which ran through a few simple things to clean up and told me everything was hunky dory, but when I kicked off the first server, the server wouldn't…

[ceph-users] Re: Question about cephadm, WAL and DB devices.

2022-01-04 Thread Robert W. Eckert
I saw something similar when I added a block.db on an SSD partition to the OSD. I think the OSD is taking the total size of db + data as the OSD size, and then counting the db as already allocated. From: Daniel Persson Sent: Tuesday, January 4, 2022 12:41…

[ceph-users] Re: reallocating SSDs

2022-01-03 Thread Robert W. Eckert
From: Robert W. Eckert Sent: Monday, December 13, 2021 1:00 PM To: ceph-users@ceph.io Subject: [ceph-users] reallocating SSDs Hi - I have a 3 host cluster with 3 HDDs and 1 SSD per host. The hosts are on RHEL 8.5, using Podman containers deployed via cephadm, with one OSD per HDD and SSD. In my cu…

[ceph-users] reallocating SSDs

2021-12-13 Thread Robert W. Eckert
Hi - I have a 3 host cluster with 3 HDDs and 1 SSD per host. The hosts are on RHEL 8.5, using Podman containers deployed via cephadm, with one OSD per HDD and SSD. In my current crush map, I have a rule for the SSDs and one for the HDDs, and put the cephfs metadata pool and rbd on the SSD pool. From thin…
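
A hedged sketch of such device-class rules (rule and pool names are hypothetical):

    # replicated rule restricted to the ssd device class
    ceph osd crush rule create-replicated ssd-rule default host ssd
    # pin the CephFS metadata pool to it
    ceph osd pool set cephfs.home.meta crush_rule ssd-rule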

[ceph-users] Re: cephfs vs rbd

2021-10-08 Thread Robert W. Eckert
That is odd - I am running some game servers (ARK Survival) and the RBD mount starts up in less than a minute, but the CephFS mount takes 20 minutes or more. It probably depends on the workload. -----Original Message----- From: Marc Sent: Friday, October 8, 2021 5:50 PM To: Jorge Garcia ;…

[ceph-users] Re: Getting cephadm "stderr:Inferring config" every minute in log - for a monitor that doesn't exist and shouldn't exist

2021-09-20 Thread Robert W. Eckert
"deployed": "2021-09-20T15:46:41.136498Z", "configured": "2021-09-20T15:47:23.002007Z" } As the output. In /var/lib/ceph/mon (not /var/lib/ceph/fe3a7cb0-69ca-11eb-8d45-c86000d08867/mon), there is a link: ceph-rhel1.robeckert.us -> /var/l

[ceph-users] Re: Getting cephadm "stderr:Inferring config" every minute in log - for a monitor that doesn't exist and shouldn't exist

2021-09-20 Thread Robert W. Eckert
…y for mon.

    9/20/21 10:58:37 AM [INF] Removing daemon mon.rhel1.robeckert.us from rhel1.robeckert.us
    9/20/21 10:58:37 AM [INF] Removing monitor rhel1.robeckert.us from monmap...
    9/20/21 10:58:37 AM [INF] Safe to remove mon.rhel1.robeckert.us: not in monmap (['rhel1', 'story',…

[ceph-users] Getting cephadm "stderr:Inferring config" every minute in log - for a monitor that doesn't exist and shouldn't exist

2021-09-20 Thread Robert W. Eckert
Hi - after the upgrade to 16.2.6, I am now seeing this error:

    9/20/21 10:45:00 AM [ERR] cephadm exited with an error code: 1, stderr: Inferring config /var/lib/ceph/fe3a7cb0-69ca-11eb-8d45-c86000d08867/mon.rhel1.robeckert.us/config ERROR: [Errno 2] No such file or directory: '/var/lib/ceph…

[ceph-users] Re: No active MDS after upgrade to 16.2.6

2021-09-18 Thread Robert W. Eckert
Thanks - that worked for me. From: 胡 玮文 Sent: Saturday, September 18, 2021 11:02 AM To: Robert W. Eckert ; Ceph Users Subject: Re: No active MDS after upgrade to 16.2.6 Hi Robert, You may have hit the same bug as me. You can read this thread for details: https://lists.ceph.io/hyperkitty/list/ceph…

[ceph-users] No active MDS after upgrade to 16.2.6

2021-09-18 Thread Robert W. Eckert
Hi - I have a 3 node cluster, and ran the upgrade to 16.2.6 yesterday. All looked like it was going well, but the MDS servers are not coming up. Ceph status shows 2 failed daemons and 3 standby.

    ceph status
      cluster:
        id:     fe3a7cb0-69ca-11eb-8d45-c86000d08867
        health: HEALTH_ERR…

[ceph-users] Re: BUG #51821 - client is using insecure global_id reclaim

2021-08-09 Thread Robert W. Eckert
I have had the same issue with the Windows client. I had to issue ceph config set mon auth_expose_insecure_global_id_reclaim false, which allows the other clients to connect. I think you need to restart the monitors as well, because the first few times I tried this, I still couldn't co…
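
A hedged sketch of that restart step under cephadm (orchestrated deployments can bounce the whole mon service at once):

    ceph config set mon auth_expose_insecure_global_id_reclaim false
    ceph orch restart mon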

[ceph-users] Re: ceph-Dokan on windows 10 not working after upgrade to pacific

2021-07-23 Thread Robert W. Eckert
Sorry for so many replies - this time ceph config set mon auth_expose_insecure_global_id_reclaim false seems to have stuck, and I can access the ceph drive from Windows now. -----Original Message----- From: Robert W. Eckert Sent: Friday, July 23, 2021 2:30 PM To: Konstantin Shalygin ; Lucian…

[ceph-users] Re: ceph-Dokan on windows 10 not working after upgrade to pacific

2021-07-23 Thread Robert W. Eckert
I am seeing the same thing - I think the build is pointing to the default branch, which is still 15.x. From: Konstantin Shalygin Sent: Thursday, July 22, 2021 1:41 AM To: Lucian Petrut Cc: Robert W. Eckert ; ceph-users@ceph.io Subject: Re: [ceph-users] ceph-Dokan on windows 10 not working after…

[ceph-users] Re: ceph-Dokan on windows 10 not working after upgrade to pacific

2021-07-20 Thread Robert W. Eckert
…Petrut Cc: Robert W. Eckert ; ceph-users@ceph.io Subject: Re: [ceph-users] Re: ceph-Dokan on windows 10 not working after upgrade to pacific On Tue, Jun 29, 2021 at 4:03 PM Lucian Petrut wrote: > > Hi, > > It's a compatibility issue, we'll have to update the Windows Pacific build…

[ceph-users] Re: Windows Client on 16.2.+

2021-07-19 Thread Robert W. Eckert
I tried both auth_allow_insecure_global_id_reclaim=false and auth_allow_insecure_global_id_reclaim=true, but get the same errors. I will watch for an updated build. -----Original Message----- From: Ilya Dryomov Sent: Monday, July 19, 2021 7:59 AM To: Robert W. Eckert Cc: ceph-users@ceph.io…

[ceph-users] Windows Client on 16.2.+

2021-07-15 Thread Robert W. Eckert
I would like to directly mount CephFS from the Windows client, and keep getting the error below.

    PS C:\Program Files\Ceph\bin> .\ceph-dokan.exe -l x
    2021-07-15T17:41:30.365 Eastern Daylight Time 4 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [2]
    202…

[ceph-users] Re: name alertmanager/node-exporter already in use with v16.2.5

2021-07-11 Thread Robert W. Eckert
I had the same issue for Prometheus and Grafana; the same workaround worked for both. -----Original Message----- From: Harry G. Coin Sent: Sunday, July 11, 2021 10:20 AM To: ceph-users@ceph.io Subject: [ceph-users] Re: name alertmanager/node-exporter already in use with v16.2.5 On 7/8/21 5:0…

[ceph-users] Re: Error on Ceph Dashboard

2021-06-10 Thread Robert W. Eckert
…working again. Thanks, Rob P.S. I do have an extracted image of the container from before I did all of this, if that would help. From: Ernesto Puerta Sent: Thursday, June 10, 2021 2:44 PM To: Robert W. Eckert Cc: ceph-users Subject: Re: [ceph-users] Error on Ceph Dashboard Hi Robert, I just launch…

[ceph-users] Error on Ceph Dashboard

2021-06-09 Thread Robert W. Eckert
Hi - this just started happening in the past few days using Ceph Pacific 16.2.4 via cephadm (Podman containers). The dashboard is returning: No active ceph-mgr instance is currently running the dashboard. A failover may be in progress. Retrying in 5 seconds... And ceph status returns: cluster…
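
A hedged first check for a dashboard stuck like this (standard mgr commands, not necessarily the fix found in this thread):

    ceph mgr module ls     # confirm the dashboard module is still enabled
    ceph mgr fail          # force a failover to a standby mgr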

[ceph-users] Re: Mon crash when client mounts CephFS

2021-06-08 Thread Robert W. Eckert
When I had issues with the monitors, it was access on the monitor folder under /var/lib/ceph/<fsid>/mon.<host>/store.db - make sure it is owned by the ceph user. My issues originated from a hardware problem: the memory needed 1.3 V, but the motherboard was only reading 1.2 V (the memory had the issue; the fir…
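
A hedged sketch of that ownership check and fix (the fsid and mon id are placeholders):

    ls -ld /var/lib/ceph/<fsid>/mon.<host>/store.db
    chown -R ceph:ceph /var/lib/ceph/<fsid>/mon.<host>/store.db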

[ceph-users] Re: Why you might want packages not containers for Ceph deployments

2021-06-03 Thread Robert W. Eckert
My cephadm deployment on RHEL 8 created a service for each container, complete with restarts, and on the host the processes run under the 'ceph' user account. The biggest issue I had with running as containers is that the generated unit.run script runs podman --rm ...; with the --rm, the logs a…

[ceph-users] ceph-Dokan on windows 10 not working after upgrade to pacific

2021-05-14 Thread Robert W. Eckert
Hi - I recently upgraded to Pacific, and I am now getting an error connecting on my Windows 10 machine. The error is handle_auth_bad_method; I tried a few combinations of cephx,none on the monitors, but I keep getting the same error. The same config (with paths updated) and keyring works on…

[ceph-users] Re: one of 3 monitors keeps going down

2021-04-30 Thread Robert W. Eckert
…To: Robert W. Eckert Cc: ceph-users@ceph.io; Sebastian Wagner Subject: Re: [ceph-users] Re: one of 3 monitors keeps going down Have you checked for disk failure? dmesg, smartctl etc.? Quoting "Robert W. Eckert": > I worked through that workflow - but it seems like the one…

[ceph-users] one of 3 monitors keeps going down

2021-04-28 Thread Robert W. Eckert
Hi, On a daily basis, one of my monitors goes down:

    [root@cube ~]# ceph health detail
    HEALTH_WARN 1 failed cephadm daemon(s); 1/3 mons down, quorum rhel1.robeckert.us,story
    [WRN] CEPHADM_FAILED_DAEMON: 1 failed cephadm daemon(s)
        daemon mon.cube on cube.robeckert.us is in error state
    [WRN] MON_…

[ceph-users] New Ceph cluster- having issue with one monitor

2021-04-21 Thread Robert W. Eckert
Hi, I have pieced together some PCs which I had been using to run a Windows DFS cluster. The 3 servers all have three 4 TB hard drives and one 2 TB SSD, but they have different CPUs. All of them are running RHEL 8 and have 2.5 Gbps NICs. The install was with cephadm, and the ceph processes are…