[ceph-users] Re: Getting started with cephfs-top, how to install

2022-10-19 Thread Zach Heise (SSCC)
ng the information I have learned from you and Xiubo, so other relative novices like myself will perhaps be able to fix these issues on their own in the future! Best, Zach Heise On 2022-10-19 2:16 PM, Neeraj Prat

[ceph-users] Re: Getting started with cephfs-top, how to install

2022-10-19 Thread Zach Heise (SSCC)
"read_io_sizes", "write_io_sizes"], "counters": [], "client_metadata": {}, "global_metrics": {}, "metrics": {"delayed_ranks": []}} I activated ceph fs perf stats yesterday, so by this point I shoul

[ceph-users] Getting started with cephfs-top, how to install

2022-10-17 Thread Zach Heise (SSCC)
phfs-top themselves, and know the missing parts of this document that should be added? -- Zach Heise
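For anyone searching for the same thing, the upstream cephfs-top docs describe roughly this flow; a sketch assuming a RHEL-family host (package names differ by distro) and the default client.fstop user:

    # cephfs-top ships as its own package, separate from ceph-common
    dnf install cephfs-top
    # cephfs-top looks for a client.fstop user by default
    ceph auth get-or-create client.fstop mon 'allow r' mds 'allow r' osd 'allow r' mgr 'allow r' > /etc/ceph/ceph.client.fstop.keyring
    # the stats mgr module must be enabled for metrics to appear
    ceph mgr module enable stats
    cephfs-top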

[ceph-users] Re: Troubleshooting cephadm - not deploying any daemons

2022-06-08 Thread Zach Heise (SSCC)
:42,521 7f7c1ef9fb80 DEBUG sestatus: Max kernel policy version:  31 On 2022-06-08 4:30 PM, Eugen Block wrote: Have you checked /var/log/ceph/cephadm.log on the target nodes? Zitat von "Zach Heise (SSCC)" :  Yes, sorry - I tried both 'ceph orch apply mgr "ceph01,ceph03
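For anyone retracing this troubleshooting, the log Eugen points at lives on the host itself, outside the containers; a quick sketch, not specific to this cluster:

    # on the target node: cephadm's own actions (deploys, image pulls, errors)
    tail -n 200 /var/log/ceph/cephadm.log
    # list the daemons cephadm thinks it has deployed on this host
    cephadm ls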

[ceph-users] Re: Troubleshooting cephadm - not deploying any daemons

2022-06-08 Thread Zach Heise (SSCC)
tor-cli-placement-spec> doc for more information, hope it helps. Regards, Dhairya On Thu, Jun 9, 2022 at 1:59 AM Zach Heise (SSCC) wrote: Our 16.2.7 cluster was deployed using cephadm from the start, but now it seems like deploying daemons with it is broken. Running 'cep
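For reference, the placement spec the linked doc covers accepts both explicit host lists and plain counts; two illustrative forms (hostnames taken from this thread, exact quoting is the usual stumbling block):

    # explicit hosts, space-separated inside one quoted string
    ceph orch apply mgr --placement="ceph01 ceph03"
    # or just a count, letting the orchestrator choose hosts
    ceph orch apply mgr --placement=2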

[ceph-users] Troubleshooting cephadm - not deploying any daemons

2022-06-08 Thread Zach Heise (SSCC)
Our 16.2.7 cluster was deployed using cephadm from the start, but now it seems like deploying daemons with it is broken. Running 'ceph orch apply mgr --placement=2' causes '6/8/22 2:34:18 PM[INF]Saving service mgr spec with placement count:2' to appear in the logs, but a 2nd mgr does not get cr
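If you are chasing the same symptom, these are the usual places to look when a saved service spec never turns into a running daemon; a generic sketch, not specific to this cluster:

    # what the orchestrator thinks it should run vs. what is actually running
    ceph orch ls mgr
    ceph orch ps --daemon-type mgr
    # recent cephadm events often explain a stalled deployment
    ceph log last cephadm
    ceph health detail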

[ceph-users] Re: Cluster healthy, but 16.2.7 osd daemon upgrade says it's unsafe to stop them?

2022-02-10 Thread Zach Heise (SSCC)
upgrade should proceed automatically. But I'm a little confused. I think if you have only 2 up OSDs in a 3-replica pool, it should be in a degraded state, and should give you a HEALTH_WARN. On 2022-02-11 at 03:06, Zach Heise (SSCC) <he..

[ceph-users] Re: Cluster healthy, but 16.2.7 osd daemon upgrade says it's unsafe to stop them?

2022-02-10 Thread Zach Heise (SSCC)
On 2022-02-10 12:41 PM, huw...@outlook.com wrote: Hi Zach, How about your min_size setting? Have you checked that the number of OSDs in the acting set of every PG is at least 1 greater than the min_size of the corresponding pool? Weiwen Hu On 2022-02-10 at 05:02, Zach Heise (SSCC
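A quick way to run the check Weiwen suggests, comparing each pool's min_size against the PG acting sets; standard commands, a sketch only:

    # size and min_size per pool
    ceph osd pool ls detail
    # acting set and state for every PG; look for PGs whose acting set
    # would shrink to min_size if one more OSD were stopped
    ceph pg dump pgs_brief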

[ceph-users] Re: Cluster healthy, but 16.2.7 osd daemon upgrade says it's unsafe to stop them?

2022-02-10 Thread Zach Heise (SSCC)
13.9 29 0 0 0 0 1.22E+08 0 0 180 180 active+clean 2022-02-10T12:44:24.581217+ 8595'180 10088:2478 [33,14,32] 33

[ceph-users] Re: Cluster healthy, but 16.2.7 osd daemon upgrade says it's unsafe to stop them?

2022-02-10 Thread Zach Heise (SSCC)
2022, at 1:41 PM, Zach Heise (SSCC) wrote: Good afternoon, thank you for your reply. Yes, I know you're right; eventually we'll switch to an odd number of mons rather than an even one. We're still in 'testing' mode right now and only my coworkers and I are using the cluster. Of the 7 p

[ceph-users] Re: Cluster healthy, but 16.2.7 osd daemon upgrade says it's unsafe to stop them?

2022-02-09 Thread Zach Heise (SSCC)
last two are EC 2+2. Zach Heise On 2022-02-09 3:38 PM, sascha.art...@gmail.com wrote: Hello, are all your pools running replica > 1? Also, having 4 monitors is pretty bad for split-brain situations. Zach Heise (SSCC) wrote on Wed., 9 Feb. 2022, 22:02: Hello, ceph health detail sa

[ceph-users] Cluster healthy, but 16.2.7 osd daemon upgrade says it's unsafe to stop them?

2022-02-09 Thread Zach Heise (SSCC)
Hello, ceph health detail says my 5-node cluster is healthy, yet when I ran ceph orch upgrade start --ceph-version 16.2.7 everything seemed to go fine until we got to the OSD section. Now, for the past hour, every 15 seconds a new log entry of 'Upgrade: unsafe to stop osd(s) at this time (1 P
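For readers hitting the same message, the safety check the upgrade performs can also be run by hand to see what it is objecting to; a sketch, the OSD id below is only an example:

    # where the rolling upgrade currently stands
    ceph orch upgrade status
    # ask the same "safe to stop?" question the upgrade asks, per OSD
    ceph osd ok-to-stop 1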

[ceph-users] Re: Is it normal for an orch osd rm drain to take so long?

2021-12-02 Thread Zach Heise (SSCC)
f anything looks abnormal? David On Thu, Dec 2, 2021 at 1:20 PM Zach Heise (SSCC) <he...@ssc.wisc.edu> wrote: Good morning David, Assuming you need/want to see the d

[ceph-users] Re: Is it normal for an orch osd rm drain to take so long?

2021-12-02 Thread Zach Heise (SSCC)
Zach On 2021-12-01 5:20 PM, David Orman wrote: What's "ceph osd df" show? On Wed, Dec 1, 2021 at 2:20 PM Zach Heise (SSCC) <he...@ssc.wisc.edu> wrote:
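Alongside 'ceph osd df', the orchestrator reports drain progress directly; a short sketch assuming a cephadm-managed cluster:

    # utilization and PG count per OSD; a draining OSD should trend toward 0 PGs
    ceph osd df
    # progress of any pending 'ceph orch osd rm' operations
    ceph orch osd rm status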

[ceph-users] Is it normal for an orch osd rm drain to take so long?

2021-12-01 Thread Zach Heise (SSCC)
I wanted to swap out an existing OSD, preserve the number, and then remove the HDD that had it (osd.14 in this case) and give the ID of 14 to a new SSD that would be taking its place in the same node. First time ever doing this, so not sure what to expect. I followed
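For the 'preserve the number' part, cephadm's --replace flag marks the OSD destroyed rather than purged, so its id stays reserved for the replacement device; a sketch using osd.14 from this thread, assuming a cephadm deployment:

    # drain osd.14 and mark it destroyed, keeping its id free for reuse
    ceph orch osd rm 14 --replace
    # watch the drain
    ceph orch osd rm status
    # once the new SSD is visible, the OSD service spec (or
    # 'ceph orch daemon add osd <host>:<device>') can recreate osd.14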

[ceph-users] Re: Dashboard's website hangs during loading, no errors

2021-11-26 Thread Zach Heise (SSCC)
Good afternoon Kai, I think I missed this email originally when you sent it. I think that, given how reliably this issue happens, it seems unlikely to be an issue with the mgr daemon going down. Ernesto - at this point, are there any other debug logs I can provide that would giv
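If more dashboard-side logging would help here, these knobs exist in Pacific; a sketch only, not a suggested root cause:

    # verbose dashboard errors in responses and in the mgr log
    ceph dashboard debug enable
    # raise mgr logging while reproducing the hang, then revert to the default
    ceph config set mgr debug_mgr 20
    ceph config rm mgr debug_mgr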

[ceph-users] Re: Dashboard's website hangs during loading, no errors

2021-11-19 Thread Zach Heise (SSCC)
Spot on, Ernesto - my output looks basically identical: curl -kv https://144.92.190.200:8443 * Rebuilt URL to: https://144.92.190.200:8443/ *   Trying 144.92.190.200... * TCP_NODELAY set * Connected to 144.92.190.200 (144.92.190.200) port 8443 (#0

[ceph-users] Re: Dashboard's website hangs during loading, no errors

2021-11-19 Thread Zach Heise (SSCC)
be interfering with this? Kind Regards, Ernesto On Fri, Nov 19, 2021 at 12:04 AM Zach Heise (SSCC) <he...@ssc.wisc.edu>

[ceph-users] Dashboard's website hangs during loading, no errors

2021-11-18 Thread Zach Heise (SSCC)
Hello! Our test cluster is a few months old; it was initially set up from scratch with Pacific and has since had two small point releases applied: 16.2.5, and then, a couple of weeks ago, 16.2.6. The issue I'm describing has been present sin

[ceph-users] Re: Grafana embed in dashboard no longer functional

2021-11-04 Thread Zach Heise (SSCC)
Kind Regards, Ernesto On Thu, Nov 4, 2021 at 8:30 PM Zach Heise (SSCC) <he...@ssc.wisc.edu> wrote: We're using cephadm with all 5 node

[ceph-users] Re: fresh pacific installation does not detect available disks

2021-11-04 Thread Zach Heise
Hi Carsten, When I had problems on my physical hosts (recycled systems that we wanted to just use in a test cluster) I found that I needed to use sgdisk --zap-all /dev/sd{letter} to clean all partition maps off the disks before ceph would recognize them as available. Worth a shot in your case, eve
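The same cleanup can also be done through the orchestrator, which clears LVM and bluestore labels as well; a sketch, the hostname and device path are placeholders:

    # wipe partition tables directly on the host
    sgdisk --zap-all /dev/sdX
    # or let cephadm do it, then re-scan for available devices
    ceph orch device zap ceph01 /dev/sdX --force
    ceph orch device ls --refresh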

[ceph-users] Grafana embed in dashboard no longer functional

2021-11-04 Thread Zach Heise (SSCC)
We're using cephadm with all 5 nodes on 16.2.6. Until today, grafana has been running only on ceph05. Before the 16.2.6 update, the embedded frames would pop up an expected security error for self-signed certificates, but after accepting would work. After the 16.2
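Two dashboard settings are commonly involved when the embedded Grafana frames break over self-signed certificates; a sketch, the URL below is a placeholder for wherever Grafana actually runs:

    # tell the dashboard where Grafana lives (as the browser will reach it)
    ceph dashboard set-grafana-api-url https://ceph05.example.com:3000
    # skip certificate verification for the mgr's own calls to Grafana
    ceph dashboard set-grafana-api-ssl-verify False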

[ceph-users] Re: we're living in 2005.

2021-08-06 Thread Zach Heise (SSCC)
There's reddit - https://old.reddit.com/r/ceph/ - that's what I've been using for months now to get my cluster set up. Zach Heise Social Sciences Computing Cooperative Work and Contact Information <http://ssc.wisc.edu/sscc/staff/Heise.htm> On 2021-08-06 8:29 AM, j...@c

[ceph-users] Re: setting cephfs quota with setfattr, getting permission denied

2021-08-03 Thread Zach Heise (SSCC)
Hi Tim! I probably should have clarified: I am running these setfattr commands as root on one of the ceph nodes itself, the one I mounted the cephFS on. Everything right now is being done as root - I mounted the whole cephFS volume as root, mkdir'd temp30days as root, and now
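A quick way to confirm whether the attribute actually landed, independent of the permission question; a sketch, the mount point below is a placeholder:

    # read back the quota attributes on the directory
    getfattr -n ceph.quota.max_bytes /mnt/cephfs/temp30days
    getfattr -n ceph.quota.max_files /mnt/cephfs/temp30days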

[ceph-users] setting cephfs quota with setfattr, getting permission denied

2021-08-03 Thread Zach Heise (SSCC)
We're preparing to make our first cephfs volume visible to our users for testing, as a scratch dir. We'd like to set a quota on the /temp30days folder, so I modified my MDS caps to be 'rwp' as described here
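For reference, the combination this thread is working toward is a client cap carrying the 'p' flag plus the quota xattr itself; a sketch with a hypothetical client name, filesystem name, and size:

    # the 'p' flag is what permits setting quota/layout xattrs
    ceph fs authorize cephfs client.scratch /temp30days rwp
    # then, on a mount made with that client's key, as root:
    setfattr -n ceph.quota.max_bytes -v 107374182400 /mnt/cephfs/temp30days   # 100 GiB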