Hi,
On 02.09.20 at 23:17, Dimitri Savineau wrote:
> Did you try to restart the dashboard mgr module after your change ?
>
> # ceph mgr module disable dashboard
> # ceph mgr module enable dashboard
Yes, I should have mentioned that. No effect, though.
Regards
--
Robert Sander
Heinlein Support
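(For what it's worth: if toggling the module doesn't pick up a changed setting, failing over the active mgr so a standby re-initialises the dashboard is sometimes worth a try. A rough sketch, with the mgr name as a placeholder:)
# ceph mgr dump | grep active_name
# ceph mgr fail <active-mgr-name>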
Hi all,
We had some stuck MDS ops this morning on a 14.2.11 CephFS cluster. I
tried to ls the path from another client and that blocked too.
The ops were like this:
# egrep 'desc|flag|age' ops.txt
"description": "client_request(client.1212755100:37475
lookup #0x1003e229d38/analytics-logs 20
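(The ops.txt above was presumably captured from the MDS admin socket; one way to do that, with the MDS name as a placeholder:)
# ceph daemon mds.<name> ops > ops.txt
# ceph daemon mds.<name> dump_ops_in_flight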
On Thu, Sep 3, 2020 at 10:25 AM Stefan Kooman wrote:
>
> On 2020-09-03 09:21, Dan van der Ster wrote:
>
> > Any ideas what might have triggered this?
>
> This looks like issue: https://tracker.ceph.com/issues/42338
>
> Do you use snapshots on this fs?
We don't use snapshots, but *maybe* sometime
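(In case it helps anyone checking the same thing: CephFS snapshots show up as entries under the hidden .snap directory, so a quick way to look for them, with an example mount point:)
# ls /mnt/cephfs/.snap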
I think you might need to set some headers. Here is what we use
(connecting to Swift, but it should be generally applicable). We are
running nginx and swift (the swift proxy server) on the same host, but again
there may be some useful ideas for you to try (below).
Note that we explicitly stop nginx writing
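(Along the lines of what is described above, a minimal sketch of an nginx layer-7 proxy in front of a Swift/radosgw endpoint; the hostname, port and exact header set here are assumptions, not the poster's actual config, and the buffering directives are only a guess at what "stop nginx writing" refers to:)
server {
    listen 443 ssl;
    server_name rgw.example.com;

    location / {
        proxy_pass http://127.0.0.1:8080;
        # pass the original Host and client information through to the backend
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        # avoid buffering/writing request bodies to disk on the proxy
        proxy_buffering off;
        proxy_request_buffering off;
    }
}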
Hi,
Yes, I think the header is the cause too. I modified the configuration but it
still gets a 403 error,
so I suspect the header is not being passed through to the backends.
But if I set nginx up as a layer-4 proxy rather than layer-7, it works well.
On Thu, Sep 3, 2020 at 12:53 PM Mark Kirkwood wrote:
> I think you might need to set some headers.
Hi Robert,
The host the browser runs on needs to be able to reach the Grafana instance
by the `ceph01` hostname (maybe add an entry to /etc/hosts?).
There was a fix [1] that allows a custom Grafana URL (so cephadm doesn't
override it).
It has been backported, but maybe the version you are using doesn't include it yet.
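(For reference, with that fix in place the URL can be pinned to something the browser can actually resolve; the address below is only an example:)
# ceph dashboard set-grafana-api-url https://grafana.example.com:3000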
Hi,
On 03.09.20 at 11:57, Ni-Feng Chang wrote:
>
> The host the browser runs on needs to be able to reach the Grafana
> instance by the `ceph01` hostname (maybe add an entry to /etc/hosts?).
Yes, that is my intermediate solution. But I cannot tell my trainees
next week that they have to modify their local /etc/hosts.
On 2020-09-02 23:50, Wido den Hollander wrote:
>
> Indeed, it shouldn't be.
>
> This config option should make it easier in a future release:
> https://github.com/ceph/ceph/commit/93e4c56ecc13560e0dad69aaa67afc3ca053fb4c
>
>
> [osd]
> osd_compact_on_start = true
>
> Then just restart the OSDs
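(Until that option is available, a rough sketch of triggering the compaction by hand; the OSD id and data path are examples:)
# ceph tell osd.0 compact
or, with the OSD stopped:
# ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-0 compact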
In theory it should be possible to do this (i.e. to change the "Block SSD
Write Disk Cache Change = Yes" setting):
1. Run MegaSCU -adpsettings -write -f mfc.ini -a0
2. Edit the mfc.ini file, setting "blockSSDWriteCacheChange" to 0 instead of
1.
3. Run MegaSCU -adpsettings -read -f mfc.ini -a0
W
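(Once that block is lifted, the drive cache itself can be toggled; the lines below use plain MegaCLI syntax as a reference only, the MegaSCU equivalents may differ:)
MegaCli64 -LDGetProp -DskCache -LAll -aAll
MegaCli64 -LDSetProp -DisDskCache -LAll -aAll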
On 2020-09-03 09:21, Dan van der Ster wrote:
> Any ideas what might have triggered this?
This looks like issue: https://tracker.ceph.com/issues/42338
Do you use snapshots on this fs?
Gr. Stefan
Hello people,
I am trying to change the cluster network in a production Ceph cluster. I'm
having problems: after changing the ceph.conf file and restarting an OSD, the
cluster always goes to HEALTH_ERR with blocked requests. Only by returning to
the previous configuration and restarting the same
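(Not sure if it helps, but one way to stage such a change is to push the setting through the mon config store and restart one OSD at a time, checking health in between; the subnet is an example. The usual caveat is that OSDs on the old and new network must still be able to reach each other during the transition.)
# ceph config set global cluster_network 192.168.10.0/24
# systemctl restart ceph-osd@0
# ceph health detail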
On 9/3/20 3:38 PM, pso...@alticelabs.com wrote:
> Hello people,
> I am trying to change the cluster network in a production Ceph cluster. I'm
> having problems: after changing the ceph.conf file and restarting an OSD, the
> cluster always goes to HEALTH_ERR with blocked requests. Only by return
Hi Wido,
Out of curiosity, did you ever work out how to do this?
Cheers, Dan
On Tue, Feb 12, 2019 at 6:17 PM Wido den Hollander wrote:
>
> Hi,
>
> I've got a situation where I need to split a Ceph cluster into two.
>
> This cluster is currently running a mix of RBD and RGW and in this case
> I
Hi,
Last night I spent a couple of hours debugging an issue where OSDs
would be marked as 'up', but then PGs stayed in the 'peering' state.
Looking through the admin socket I saw these OSDs were in the 'booting'
state.
Looking at the OSDMap I saw this:
osd.3 up in weight 1 up_from 26 up_th
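(For completeness, that state can be cross-checked from both sides, e.g. with an example OSD id:)
# ceph osd dump | grep '^osd.3'
# ceph daemon osd.3 status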
On 9/3/20 3:55 PM, Dan van der Ster wrote:
> Hi Wido,
>
> Out of curiosity, did you ever work out how to do this?
Nope, never did this. So there are two clusters running with the same
fsid :-)
Wido
>
> Cheers, Dan
>
> On Tue, Feb 12, 2019 at 6:17 PM Wido den Hollander wrote:
>>
>> Hi,
>>
Well, it sounds like the pdcache setting may not be possible for SSDs, which
is the first I've ever heard of this.
I actually just checked another system that I had forgotten was behind a 3108
controller with SSDs (not Ceph, so I wasn't considering it).
It looks like I ran into the same issue during co
Here is a link to this year's iSCSI/RBD implementation guide from SUSE for
VMware (Hyper-V should be similar):
https://www.suse.com/media/guide/suse-enterprise-storage-implementation-guide-for-vmware-esxi-guide.pdf
We've been running rbd/iscsi for 4 years
Thanks Joe
>>> Salsa 9/2/2020 3:0
Salsa
Again, the doc shows, and we have used, layering as the only feature for
iSCSI.
Further down it gives you specific settings for the LUNs/images.
In our case we let VMware/Veeam snapshot and make copies of our VMs.
There is a new beta of SES that bypasses the iSCSI gateways for Windows
servers.
W
Joe,
sorry, I should have been clearer. The incompatible rbd features are
exclusive-lock, journaling, object-map and such.
The info comes from here:
https://documentation.suse.com/ses/6/html/ses-all/ceph-rbd.html
--
Salsa
Sent with ProtonMail Secure Email.
‐‐‐ Original Message ‐‐‐
On
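(For anyone setting this up: a sketch of creating an image with only the layering feature, or stripping the incompatible ones from an existing image; pool/image names and size are examples:)
# rbd create --size 100G --image-feature layering rbd/iscsi-lun0
# rbd feature disable rbd/existing-image exclusive-lock object-map fast-diff journaling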
Hello,
On 9/1/20 10:02 PM, Ragan, Tj (Dr.) wrote:
Does anyone know how to get the actual block size used by an osd? I’m trying
to evaluate 4k vs 64k min_alloc_size_hdd and want to verify that the newly
created osds are actually using the expected block size.
ceph osd metadata osd. | jq '.bl
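(The truncated command above goes after the OSD metadata; as a fallback, the configured value can also be read from the running daemon, with an example OSD id, though that shows the config setting rather than what mkfs actually baked into the OSD:)
# ceph daemon osd.0 config get bluestore_min_alloc_size_hdd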
Hi there,
we reconfigured our ceph cluster yesterday to remove the cluster
network and things didn't quite go to plan. I am trying to figure out
what went wrong and also what to do next.
We are running nautilus 14.2.10 on Scientific Linux 7.8.
So, we are using a mixture of RBDs and cephfs. For th
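(One quick sanity check after removing the cluster network is to confirm which addresses the OSDs are actually advertising; the OSD id is an example:)
# ceph osd metadata 0 | grep -E 'back_addr|front_addr'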