Hi everyone,
If you're a contributor to Ceph, please join us for the next summit on
April 12th-22nd. Topics range from the different components to
Teuthology and governance.
Information such as the schedule and the etherpads can be found in the
blog post or directly on the main etherpad:
https://pad.ce
Hi Frank,
in fact this parameter impacts OSD behavior both at build time and
during regular operation. It simply replaces the hdd/ssd
auto-detection with a manual specification, and hence the relevant
config parameters are applied. If e.g. min_alloc_size is persistent after OSD
creation - it wou
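(In case it helps to double-check on a live cluster: below is a minimal Python sketch, assuming osd.0 runs locally and its admin socket is reachable via "ceph daemon". It only shows the config values the running OSD currently sees; the min_alloc_size actually persisted at mkfs time may differ, which is exactly the point above.)

    import json, subprocess

    def osd_config_get(osd_id, option):
        # Ask the running OSD over its admin socket ("ceph daemon") for the
        # current value of a config option.
        out = subprocess.check_output(
            ["ceph", "daemon", f"osd.{osd_id}", "config", "get", option])
        return json.loads(out)[option]

    for opt in ("bluestore_min_alloc_size_hdd", "bluestore_min_alloc_size_ssd"):
        print(opt, "=", osd_config_get(0, opt))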
On Tue, Apr 5, 2022 at 7:44 AM Venky Shankar wrote:
>
> Hey Josh,
>
> On Tue, Apr 5, 2022 at 4:34 AM Josh Durgin wrote:
> >
> > Hi Venky and Ernesto, how are the mount fix and grafana container build
> > looking?
>
> Currently running into various teuthology-related issues when testing
> out the
Some more information about our issue (I work with Wissem).
As the OSDs are crashing only on one node, we are focusing on it.
We found that it is the only node where we also see that kind of error in
the OSD logs:
2022-04-08T11:38:26.464+0200 7fadaf877700 0 bad crc in data 3052515915
!= exp 38845088
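(For context: that log line means the checksum recomputed over the received payload did not match the checksum carried with the message, i.e. the data was corrupted somewhere in between. Conceptually it is just the check sketched below; zlib's crc32 is used here purely for illustration, while Ceph itself uses crc32c.)

    import zlib

    def payload_is_intact(data: bytes, expected_crc: int) -> bool:
        # Recompute the checksum over the received bytes and compare it with the
        # value carried in the message; a mismatch is what gets logged as
        # "bad crc in data <computed> != exp <expected>".
        return zlib.crc32(data) == expected_crc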
Hi all,
The Grafana container fix is ready to be merged into Quincy:
https://github.com/ceph/ceph/pull/45799
So that would be all from the Dashboard + monitoring for Quincy.
Kind Regards,
Ernesto
On Tue, Apr 5, 2022 at 9:25 PM Dan Mick wrote:
> On 4/5/2022 2:47 AM, Ernesto Puerta wrote:
> >
Thank you for your quick investigation. The formatting in the mails is not great:
the OMAP is only 2.2 GiB, and the 8.3 TiB is AVAIL.
> On 8. Apr 2022, at 10:18, Janne Johansson wrote:
>
> On Fri, 8 Apr 2022 at 10:06, Hendrik Peyerl wrote:
>>
>> ID   CLASS  WEIGHT  REWEIGHT  SIZE     RAW USE  DATA    O
On Fri, 8 Apr 2022 at 10:06, Hendrik Peyerl wrote:
>
> ID   CLASS  WEIGHT  REWEIGHT  SIZE     RAW USE  DATA    OMAP     META     AVAIL    %USE   VAR   PGS  STATUS  TYPE NAME
> -1          48.0    -         29 TiB   20 TiB   18 TiB  2.2 GiB  324 GiB  8.3 TiB  70.97  1.00  -            r
ID   CLASS  WEIGHT  REWEIGHT  SIZE     RAW USE  DATA    OMAP     META     AVAIL    %USE   VAR   PGS  STATUS  TYPE NAME
-1          48.0    -         29 TiB   20 TiB   18 TiB  2.2 GiB  324 GiB  8.3 TiB  70.97  1.00  -            root default
-15         16.0    -         9.6 TiB  6.8 TiB
How does "ceph osd df tree" look?
On Fri, 8 Apr 2022 at 09:58, Hendrik Peyerl wrote:
>
> My screenshot didn't make it into the mail; this is the output of ceph df:
>
> --- RAW STORAGE ---
> CLASS  SIZE    AVAIL    USED    RAW USED  %RAW USED
> hdd    29 TiB  8.3 TiB  20 TiB  20 TiB    70.95
>
My screenshot didn't make it into the mail; this is the output of ceph df:
--- RAW STORAGE ---
CLASS  SIZE    AVAIL    USED    RAW USED  %RAW USED
hdd    29 TiB  8.3 TiB  20 TiB  20 TiB    70.95
TOTAL  29 TiB  8.3 TiB  20 TiB  20 TiB    70.95
--- POOLS ---
POOL  ID
Hi everyone,
I have a strange issue and I can't figure out what seems to be blocking my disk
space:
I have a total of 29 TB (48 x 600 GB) with a replication of 3, which results in
around 9.6 TB of "real" storage space.
I am currently using it mainly for RGW. I have a total of around 3T
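(A quick sanity check of that arithmetic; the 48 x 600 GB and replica size 3 figures are taken from the mail above, everything else is rounding.)

    # Back-of-the-envelope capacity check: 48 x 600 GB OSDs, replica size 3.
    osd_count = 48
    osd_size_tb = 0.6                 # 600 GB per OSD
    replication = 3

    raw_tb = osd_count * osd_size_tb  # ~28.8 TB raw, reported as ~29 TB
    usable_tb = raw_tb / replication  # ~9.6 TB of "real" space before overhead
    print(f"raw: {raw_tb:.1f} TB, usable with size=3: {usable_tb:.1f} TB")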
Hi,
thanks for your explanation, Josh. I think I understand now how
mon_max_pg_per_osd could have an impact here. The default seems to be
250. Each OSD currently has around 100 PGs; is this a potential
bottleneck? In my test cluster I have around 150 PGs per OSD and
couldn't reproduce it
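(If it helps to check this, here is a small sketch; it assumes the ceph CLI is available where it runs and that the JSON output of "ceph osd df" exposes the per-OSD "pgs" field as in recent releases.)

    import json, subprocess

    def ceph_json(*args):
        # Run a ceph CLI command and parse its JSON output.
        return json.loads(subprocess.check_output(["ceph", *args, "--format", "json"]))

    # Configured limit (defaults to 250 on recent releases).
    limit = subprocess.check_output(
        ["ceph", "config", "get", "mon", "mon_max_pg_per_osd"]).decode().strip()

    # Current PG count per OSD as reported by "ceph osd df".
    for node in ceph_json("osd", "df")["nodes"]:
        print(f"osd.{node['id']}: {node['pgs']} PGs (mon_max_pg_per_osd = {limit})")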