Hi all,
Just adding weight: we're experiencing the same scenario with Squid.
We just set empty certs manually to get things working.
Best,
Laimis J.
> On 28 Jan 2025, at 09:00, Thorsten Fuchs wrote:
>
> We recently migrated our cluster from 18.2.4 to 19.2.0 and started having
> issues with Grafana.
We recently migrated our cluster from 18.2.4 to 19.2.0 and started having
issues with Grafana.
Ceph gives out the warning "CEPHADM_CERT_ERROR: Invalid grafana certificate on
host cc-1: Invalid certificate key: [('PEM routines', '', 'no start line')]".
Looking at the certificates, they contain a lin
cc @Ankush Behl @Aashish Sharma
^^^
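As an aside on the error above: "no start line" from the PEM routines usually
means the file cephadm picked up is empty or is missing the -----BEGIN ...-----
header, i.e. it is not valid PEM at all. A quick sanity check with openssl
(paths below are placeholders, adjust to wherever your grafana cert and key live):

  # Does the certificate parse as PEM? This fails with "no start line" if not.
  openssl x509 -in /path/to/grafana_crt.pem -noout -subject -dates

  # Same parse check for the private key.
  openssl pkey -in /path/to/grafana_key.pem -noout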
On Tue, Jan 28, 2025 at 12:57 AM Marc wrote:
>
> Is there an existing grafana dashboard/panel that sort of shows the % used
> on disks and hosts?
On 24/1/25 06:45, Stillwell, Bryan wrote:
> ceph report 2>/dev/null | jq '(.osdmap_last_committed - .osdmap_first_committed)'
>
> This number should be between 500-1000 on a healthy cluster. I've seen
> this as high as 4.8 million before (roughly 50% of the data stored on
> the cluster ended up
You should put this as a first
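For context (an assumption on my part, not from Bryan's mail): the ~500 lower
bound appears to match the mon_min_osdmap_epochs default of 500, i.e. the
monitors always keep at least that many maps around. A hedged way to check
both numbers on your own cluster:

  # Span of osdmap epochs the mons are still holding
  ceph report 2>/dev/null | jq '.osdmap_last_committed - .osdmap_first_committed'

  # Minimum number of epochs the mons are configured to keep (default 500)
  ceph config get mon mon_min_osdmap_epochs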
By completing this survey, I agree to be contacted by the Ceph User Council. *
Yes I agree.
> I would suggest checking your browser's language settings. I have gotten
> confirmation from others that the form is working okay.
>
> Thanks,
> Laura
>
I am quite sure it is not my browser.
>
> I would suggest checking your browser's language settings. I have gotten
> confirmation from others that the form is working okay.
>
> >
> > Can you clarify what you mean here? Is there a problem with the
> survey's
> > language setting
> I reckon that balancing is by far the biggest issue you are
> likely to have, because most Ceph releases (I do not know about
> Reef) have difficulty balancing across drives of different
> sizes, even with configuration changes.
There were some bugs around Firefly-Hammer with failure domains hav
Hey Marc,
I would suggest checking your browser's language settings. I have gotten
confirmation from others that the form is working okay.
Thanks,
Laura
On Mon, Jan 27, 2025 at 5:46 PM Marc wrote:
>
> >
> > Can you clarify what you mean here? Is there a problem with the survey's
> > language s
>
> Can you clarify what you mean here? Is there a problem with the survey's
> language setting? Not seeing anything wrong on my end, but if there is,
> I'd appreciate it if someone can confirm.
>
I guess it is some bug in this form, no idea. Not sure if attachments are
stripped here in the ma
Hi Marc,
Can you clarify what you mean here? Is there a problem with the survey's
language setting? Not seeing anything wrong on my end, but if there is, I'd
appreciate it if someone can confirm.
Thanks,
Laura
On Mon, Jan 27, 2025 at 5:02 PM Marc wrote:
> > https://docs.google.com/forms/d/e/1F
> https://docs.google.com/forms/d/e/1FAIpQLSe66NedXh4gHLgk9G45eqP5V2wHlz4IKqRmGUJ074peaTGNKQ/viewform?usp=sf_link
>
FYI I have stuff in Polish, and no language switching...
Hi,
Looking at that tracker, I see that you were seeing some errors. I don't
really see any errors that stick out to me. When I turn the logs up to 20,
I see the following types of logs over and over:
> ...
> 2025-01-27T19:36:25.989+ 7f657bb17640 20 req 3636357367581543624
> 4066.843017578s s3
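For anyone following along, this is roughly how the verbosity gets raised; a
sketch only, the section name used here (client.rgw) depends on how your rgw
daemons are named:

  # Turn rgw debug logging up to 20 (very verbose), then revert afterwards
  ceph config set client.rgw debug_rgw 20
  ceph config rm client.rgw debug_rgw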
Hi all,
Huge thanks to the 46 community members who have already taken the survey!
It's still open, so if you haven't taken it already, follow this link to do
so!
https://docs.google.com/forms/d/e/1FAIpQLSe66NedXh4gHLgk9G45eqP5V2wHlz4IKqRmGUJ074peaTGNKQ/viewform?usp=sf_link
We still plan to keep
Hey Andre,
Clients actually have access to more information than just the
crushmap, which includes temporary PG mappings generated when a
backfill is pending, as well as upmap items which override CRUSH's
placement decision. You can see these in "ceph osd dump", for example.
Josh
On Mon, Jan 27,
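For example (a rough illustration; line formats differ a bit between releases),
the temporary and upmap overrides Josh mentions show up as pg_temp and
pg_upmap_items entries in the OSD map dump:

  # List only the placement overrides that clients also receive
  ceph osd dump | grep -E '^pg_temp|^pg_upmap'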
Hi all,
Here is a summary of the CSC meeting Jan 27, 2025. Full notes are
available at https://pad.ceph.com/p/csc-weekly-minutes
Component Leads Poll re: Workload and Bottlenecks
Dan proposes an informal poll for component leads to identify workload
distribution, bottlenecks, and areas where com
Is there an existing grafana dashboard/panel that sort of shows the % used on
disks and hosts?
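Not a Grafana answer, but for a quick look the CLI already aggregates this,
and the dashboards shipped with cephadm's monitoring stack cover similar
ground (exact dashboard names vary by release). A minimal sketch:

  # %USE per OSD, with per-host subtotals in the tree layout
  ceph osd df tree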
Hi folks!
The Cephalocon 2024 recordings are available on the YouTube channel!
- Channel: https://www.youtube.com/@Cephstorage/videos
- Cephalocon 2024 playlist:
https://www.youtube.com/watch?v=ECkgu2zZzeQ&list=PLrBUGiINAakPfVfFfPQ5wLMQJFsLKTQCv
Thanks,
Matt
Hey Reid,
This sounds similar to what we saw in
https://tracker.ceph.com/issues/62256, in case that helps with your
investigation.
Josh
On Mon, Jan 27, 2025 at 8:07 AM Reid Guyett wrote:
>
> Hello,
>
> We are experiencing slowdowns on one of our radosgw clusters. We restart
> the radosgw daemon
Hello,
We are experiencing slowdowns on one of our radosgw clusters. We restart
the radosgw daemons every 2 hours and things start getting slow after an
hour and a half. The avg get/put latencies go from 20ms/400ms to 1s/5s+
according to the metrics. When I stop traffic to one of the radosgw daemo
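If it helps while narrowing this down, per-daemon counters can be read off the
rgw admin socket; a hedged sketch, the socket path and counter section names
are placeholders and differ per deployment and release:

  # GET/PUT latency counters as seen by this one rgw daemon
  ceph daemon /var/run/ceph/ceph-client.rgw.<id>.asok perf dump rgw

  # RADOS ops this rgw currently has in flight (long-stuck ones point at slow OSDs)
  ceph daemon /var/run/ceph/ceph-client.rgw.<id>.asok objecter_requests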
Hi Frank,
> It would just be great to have confirmation or a "no, it's critical".
Unfortunately, I'm not able to confirm that; I hope someone else can.
> By the way, I have these on more than one rank, so it is probably
> not a fall-out of the recent recovery efforts.
In that case I would defini
Hi list,
I have a problem understanding how CRUSH works when the crush map
changes.
Let's take a pool with some data in it, and a crush map that enables a
client to calculate itself where a particular chunk is stored.
Now we add more OSDs, which means, the crush map changes. Now most
ob
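As a concrete way to see what a client computes (pool and object names below
are placeholders): ceph osd map runs the same CRUSH calculation and prints the
PG id plus the up/acting OSD sets, so you can compare the answer before and
after a map change:

  # Which PG and which OSDs does this object map to right now?
  ceph osd map <pool-name> <object-name>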
I have a Ceph Reef cluster with 10 hosts, each with 16 NVMe slots
but only half of them occupied with 15TB (2400 KIOPS) drives, 80
drives in total. I want to add another 80 to fully populate
the slots. The question: What would be the downside if I
expand the cluster with 80 x 30TB (3300
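For what it's worth, once mixed-capacity OSDs are in, the upmap balancer and
the per-device fill spread are the main things to watch; a rough sketch:

  # Is the balancer on, and in which mode?
  ceph balancer status

  # Watch the %USE / VAR spread between the 15TB and 30TB devices
  ceph osd df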
Hi Dev,
The config-history keys related to node3 are expected to remain even for nodes
that you remove, as these keys are not cleaned up upon node deletion.
These keys are used by the 'ceph config log' command to list all configuration
changes that have occurred over time.
The 'ceph status'
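For reference, the history these keys back can be listed directly; a sketch
(whether the raw keys also appear under config-key depends on the release):

  # All recorded configuration changes
  ceph config log

  # The raw entries, if your release exposes them via the config-key store
  ceph config-key dump | grep config-history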