Hi Frank,
That option is set to false (I didn't enable security for the monitoring
stack).
Kind regards,
Sake
> On 04-09-2024 20:17 CEST, Frank de Bot (lists) wrote:
>
>
> Hi Sake,
>
> Do you have the config mgr/cephadm/secure_monitoring_stack set to true? If
> so, this pull request will fix your problem:
> https://github.com/ceph/ceph/pull/58402
Another update: Giovanna agreed to switch back to mclock_scheduler and
adjust osd_snap_trim_cost to 400K. It looks very promising; after a
few hours the snaptrim queue was processed.
@Sridhar: thanks a lot for your valuable input!
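For anyone following along, this boils down to roughly the following commands (a sketch only; the option names are the ones mentioned in this thread, "400K" is read here as 409600, and the scheduler change only takes effect after restarting the OSDs):

ceph config set osd osd_op_queue mclock_scheduler
ceph config set osd osd_snap_trim_cost 409600
# restart the OSDs afterwards for the osd_op_queue change to apply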
Quoting Eugen Block:
Quick update: we decided to switch t
Upgrade suites approved:
https://tracker.ceph.com/projects/rados/wiki/SQUID#v1920-httpstrackercephcomissues67779
On Wed, Sep 4, 2024 at 12:02 PM Adam King wrote:
> orch approved, failures are known issues and not release blockers
>
> On Fri, Aug 30, 2024 at 10:43 AM Yuri Weinstein
> wrote:
>
>
Hi Sake,
Do you have the config mgr/cephadm/secure_monitoring_stack set to true? If
so, this pull request will fix your problem:
https://github.com/ceph/ceph/pull/58402
Regards,
Frank
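For reference, on a cephadm-managed cluster the current value can be checked (and changed) with a plain config query; just a generic sketch:

ceph config get mgr mgr/cephadm/secure_monitoring_stack
# and, if needed:
ceph config set mgr mgr/cephadm/secure_monitoring_stack true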
Sake Ceph wrote:
After the upgrade from 17.2.7 to 18.2.4 a lot of graphs are empty. For example,
the OSD latency under "OSD device details" and the "OSD Overview" show a lot
of "No data" messages.
After the upgrade from 17.2.7 to 18.2.4 a lot of graphs are empty. For example,
the OSD latency under "OSD device details" and the "OSD Overview" show a lot
of "No data" messages.
I deployed ceph-exporter on all hosts; am I missing something? I even did a
redeploy of Prometheus.
Kind regards,
Sake
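In case it helps, a few generic cephadm commands for sanity-checking the monitoring stack (a sketch, not specific to this cluster):

ceph orch ps --daemon-type ceph-exporter   # are the exporters running on every host?
ceph orch ps --daemon-type prometheus
ceph orch redeploy prometheus              # regenerate the scrape config and restart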
orch approved, failures are known issues and not release blockers
On Fri, Aug 30, 2024 at 10:43 AM Yuri Weinstein wrote:
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/67779#note-1
>
> Release Notes - TBD
> Gibba upgrade - TBD
> LRC upgrade - TBD
>
> It was dec
rgw — approved
Eric
(he/him)
> On Aug 30, 2024, at 10:42 AM, Yuri Weinstein wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/67779#note-1
>
> Release Notes - TBD
> Gibba upgrade - TBD
> LRC upgrade - TBD
>
> It was decided and agreed upon that there
One of my major regrets is that there isn't a "Ceph Lite" for setups
where you want a cluster with "only" a few terabytes and a half-dozen
servers. Ceph excels at really, really big storage and the tuning
parameters reflect that.
I, too, ran into the issue where I couldn't allocate a disk partition
I've been monitoring my Ceph LAN segment for the last several hours and
absolutely no traffic has shown up on any server for port 8765.
Furthermore, I did a quick review of Prometheus itself and it's only
claiming those 9000-series ports I mentioned previously.
So I conclude that this isn't litera
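For reference, generic ways to check this kind of thing (not necessarily the exact commands used here; interface and host are placeholders):

ss -tlnp | grep 8765          # is anything listening on the discovery port on the mgr host?
tcpdump -ni any port 8765     # watch for discovery/scrape traffic on the wire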
Hi,
I tracked it down to 2 issues:
* our ipv6-only deployment (a bug fixed in 18.2.4, though that has buggy
.debs)
* the discovery service is only run on the active mgr
The latter point is surely a bug? Isn't the point of running a service
discovery endpoint that one could point e.g. an externa
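A simple way to confirm the active-mgr-only behaviour is to query port 8765 on each mgr host directly (hostnames are placeholders, and whether it is http or https depends on your monitoring security settings):

curl -skv https://mgr1.example.com:8765/
curl -skv https://mgr2.example.com:8765/   # standby mgr: expect no listener / connection refused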
Hello Eugenio,
All previous "it just hangs" issues that I have seen previously were
down to some network problem. Please check that you can ping all OSDs,
MDSs, and MONs from the client. Please retest using large pings (ping
-M dont -s 8972 192.168.12.34). Please inspect firewalls. If multiple
net
Hi,
apparently, I was wrong about specifying a partition in the path
option of the spec file. In my quick test it doesn't work either.
Creating a PV, VG, LV on that partition makes it work:
ceph orch daemon add osd soc9-ceph:data_devices=ceph-manual-vg/ceph-osd
Created osd(s) 3 on host 'soc9-ceph'
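For completeness, the LVM preparation that step assumes looks roughly like this (the partition /dev/sdX1 is a placeholder; the VG/LV names are the ones used above):

pvcreate /dev/sdX1
vgcreate ceph-manual-vg /dev/sdX1
lvcreate -l 100%FREE -n ceph-osd ceph-manual-vg
ceph orch daemon add osd soc9-ceph:data_devices=ceph-manual-vg/ceph-osd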
> Has it worked before or did it just stop working at some point? What's the
> exact command that fails (and error message if there is)?
It was working using the NFS gateway; I never tried with the Ceph FUSE mount.
The command is ceph-fuse --id migration /mnt/repo. No error message, it just
hangs.
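If it helps, running the client in the foreground with more verbose logging usually shows where it stalls; a generic sketch (the debug levels are just examples):

ceph-fuse --id migration -d --debug-client=20 --debug-ms=1 /mnt/repo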
Added a bug for it. https://tracker.ceph.com/issues/67889
On Wed, Sep 4, 2024 at 11:31, Boris wrote:
> I think I've found what it is.
>
> If you just call the rgw without any bucket name or authentication, you
> will end up with these logs.
> Now the question is: is this a bug, because you cannot read it? And how
> can I disable it?
I think I've found what it is.
If you just call the rgw without any bucket name or authentication, you will
end up with these logs.
Now the question is: is this a bug, because you cannot read it? And how
can I disable it?
Cheers
Boris
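If those lines come from the rgw debug log, lowering the debug level for the rgw daemon should quiet them; just a guess, with the daemon name as a placeholder:

ceph config set client.rgw.<your-rgw-daemon> debug_rgw 0/0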
On Tue, Sep 3, 2024 at 14:12, Boris wrote:
> I am n
Has it worked before or did it just stop working at some point? What's
the exact command that fails (and error message if there is)?
For the "too many PGs per OSD" I suppose I have to add some other
OSDs, right?
Either that or reduce the number of PGs. If you had only a few pools
I'd sugg
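In case it's useful, a generic sketch of what reducing PGs looks like (pool name and target pg_num are placeholders; since Nautilus pg_num can also be decreased, or the autoscaler can manage it for you):

ceph osd pool ls detail                   # see pg_num per pool
ceph osd pool set <pool> pg_num 64
ceph osd pool set <pool> pg_autoscale_mode on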
Hi Eugen,
Sorry, but I had some trouble when I signed up and then I was away, so I missed
your reply.
> ceph auth export client.migration
> [client.migration]
> key = redacted
> caps mds = "allow rw fsname=repo"
> caps mon = "allow r fsname=repo"
> caps osd = "allow
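For comparison, these caps look like the ones "ceph fs authorize" generates; on a fresh client (fs name and client id taken from this thread) it would typically yield mds "allow rw fsname=repo", mon "allow r fsname=repo" and osd "allow rw tag cephfs data=repo":

ceph fs authorize repo client.migration / rw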
Hi, I already responded to your first attempt:
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/message/GS7KJRJP7BAOF66KJM255G27TJ4KG656/
Please provide the requested details.
Quoting Eugenio Tampieri:
Hello,
I'm writing to troubleshoot an otherwise functional Ceph quincy
cluster that has issues with cephfs.
Hello,
I'm writing to troubleshoot an otherwise functional Ceph quincy cluster that
has issues with cephfs.
I cannot mount it with ceph-fuse (it gets stuck), and if I mount it with NFS I
can list the directories but I cannot read or write anything.
Here's the output of ceph -s
cluster:
id: