Hi,
On 15/6/24 at 11:49, Marc wrote:
If you don't block gmail, gmail/google will never make an effort to clean up
their shit. I don't think people with a gmail.com address will mind, because it
is free and they can get a free account somewhere else.
tip: google does not really know what part of their
Could we at least stop approving requests from obvious spammers?
Best regards,
=
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
From: Eneko Lacunza
Sent: Monday, June 17, 2024 9:18 AM
To: ceph-users@ceph.io
Subject: [ceph-users] Re: why not block gmail?
I am putting ceph-users@ceph.io on the blacklist for now. Let me know via a
different email address when it is resolved.
>
> Could we at least stop approving requests from obvious spammers?
>
> Best regards,
> =
> Frank Schilder
> AIT Risø Campus
> Bygning 109, rum S14
>
On Mon, Jun 17, 2024 at 12:18 AM Satoru Takeuchi
wrote:
>
> On Fri, Jun 14, 2024 at 23:24, Anthony D'Atri wrote:
>
> > Usually. There is a high bar for changing command structure or output.
> > Newer versions are more likely to *add* commands and options than to change
> > or remove them.
> >
> > That said, prob
Is there any way to have a subscription request validated?
-----Original Message-----
From: Marc
Sent: Monday, June 17, 2024 7:56 AM
To: ceph-users
Subject: [ceph-users] Re: why not block gmail?
I am putting ceph-users@ceph.io on the blacklist for now. Let me know via a
different email address
Yes. I have admin juice on some other Ceph lists; I've asked for it here as
well so that I can manage with alacrity.
> On Jun 17, 2024, at 09:31, Robert W. Eckert wrote:
>
> Is there any way to have a subscription request validated?
>
> -----Original Message-----
> From: Marc
> Sent: Mond
Hi community!
Recently we had a major outage in production, and after running the automated
Ceph recovery some PGs remain in the "incomplete" state and IO operations are
blocked.
Searching the documentation, forums, and this mailing list archive, I haven't
yet found out whether this means the data is recoverable.
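For anyone who wants to look at the same thing, a minimal sketch of inspecting
the affected PGs with the standard Ceph CLI (the PG ID below is just a
placeholder):

  # list PGs that are stuck inactive/incomplete
  ceph pg dump_stuck inactive
  ceph health detail
  # show why a specific PG is incomplete (see "recovery_state" in the output)
  ceph pg 1.2f query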
Hi Pablo,
Could you tell us a little more about how that happened?
Do you have min_size >= 2 (or the EC equivalent)?
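A quick way to check, assuming the standard CLI (pool and profile names are
placeholders):

  ceph osd pool get <pool> size
  ceph osd pool get <pool> min_size
  # for EC pools, k and m are shown in the profile
  ceph osd erasure-code-profile get <profile>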
Regards,
*David CASIER*
On Mon, Jun 17, 2024 at 16:26, c
Hi Pablo,
It depends. If it’s a replicated setup, it might be as easy as marking dead
OSDs as lost to get the PGs to recover. In that case it basically just means
that you are below the pool’s min_size.
If it is an EC setup, it might be quite a bit more painful, depending on what
happened to t
Ah scratch that, my first paragraph about replicated pools is actually
incorrect. If it’s a replicated pool and it shows incomplete, it means the most
recent copy of the PG is missing. So ideal would be to recover the PG from dead
OSDs in any case if possible.
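As a rough sketch of what that recovery could look like, assuming the dead
OSD's data directory is still readable (OSD IDs, PG ID and paths below are
placeholders, not a verified procedure):

  # on the host with the dead OSD, with the OSD daemon stopped
  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-12 \
      --pgid 1.2f --op export --file /tmp/pg1.2f.export
  # import the copy into a surviving OSD (also stopped), then start it again
  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-7 \
      --op import --file /tmp/pg1.2f.export
  # only if a dead OSD is definitely unrecoverable, marking it lost lets peering proceed
  ceph osd lost 12 --yes-i-really-mean-it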
Matthias Grandl
Head Storage Engineer
Hi everyone,
Thanks for your kind responses.
I know the following is not the best scenario, but sadly I didn't have the
opportunity to install this cluster myself.
More information about the problem:
* We use replicated pools
* Replica 2, min replicas 1.
* Ceph version 17.2.0 (43e2e60a7559d3f46c9d53f
Pablo,
Since some PGs are empty and all OSDs are enabled, I'm not optimistic about
the future at all.
Was the command "ceph osd force-create-pg" executed with missing OSDs?
On Mon, Jun 17, 2024 at 17:26, cellosof...@gmail.com
wrote:
> Hi everyone,
>
> Thanks for your kind responses
>
> I k
Hi,
I understand.
We had to re-create the OSDs because of a backing storage hardware failure,
so recovering from the old OSDs is not possible.
From your current understanding, is there a possibility to recover at least
some of the information, at least the fragments that are not missing?
I ask this b
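For example, would something like the following make sense to salvage
whatever objects are still readable (pool name taken from our setup; note
that operations touching an incomplete PG may block, so this would only pull
out objects on healthy PGs)?

  # list objects still visible in the pool
  rados -p cephfs-replicated ls > objects.txt
  # copy a single object out (loop over objects.txt as needed)
  rados -p cephfs-replicated get <object-name> /mnt/recovery/<object-name>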
Hi,
A 6-host, 16-OSD cluster here, all SATA SSDs. All Ceph daemons are version
18.2.2. The host OS is Ubuntu 24.04. Intel X540 10Gb/s interfaces for the
cluster network. All is fine while using a 1Gb/s switch. When moved to a
10Gb/s switch (Netgear XS712T), OSDs, one by one, start failing heartbeat
checks and
Command for trying the export was:
[rook@rook-ceph-tools-recovery-77495958d9-plfch ~]$ rados export -p
cephfs-replicated /mnt/recovery/backup-rados-cephfs-replicated
We made sure we had enough space for this operation, and mounted the
/mnt/recovery path using hostPath in the modified rook "toolbox"
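For completeness, the matching restore step we have in mind later on,
assuming rados import accepts the same pool/file arguments as export (not
tested here yet):

  rados -p cephfs-replicated import /mnt/recovery/backup-rados-cephfs-replicated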
Check the MTU between the nodes first; ping with an MTU-sized payload to verify it.
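For example, assuming Linux hosts and a 9000-byte MTU (adjust the payload
size to your MTU; the IP/ICMP headers take 28 bytes):

  # verify the configured MTU on the cluster interface of each node
  ip link show dev <cluster-iface>
  # ping with the "don't fragment" flag set; 8972 = 9000 - 28
  ping -M do -s 8972 <other-node>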
On Mon, Jun 17, 2024 at 22:59, Sarunas Burdulis <
saru...@math.dartmouth.edu> wrote:
> Hi,
>
> 6 host 16 OSD cluster here, all SATA SSDs. All Ceph daemons version
> 18.2.2. Host OS is Ubuntu 24.04. Intel X540 10Gb/s inter
1 PG out of 16 is missing in the metadata pool; that alone is enough to make
browsing the FS very difficult.
Your challenge is to locate the important objects in the data pool.
Try, perhaps, to target the important objects by retrieving the
layout/parent attributes on the objects in the cephfs-replicated pool.
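A rough sketch of that lookup (the object name and pool are placeholders;
the "parent" backtrace attribute is binary, so decoding it with
ceph-dencoder is assumed here):

  # CephFS data objects are named <inode-in-hex>.<stripe-index>
  rados -p cephfs-replicated ls | head
  # pull the backtrace ("parent") attribute from a data object
  rados -p cephfs-replicated getxattr 10000000000.00000000 parent > parent.bin
  ceph-dencoder type inode_backtrace_t import parent.bin decode dump_json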
>>
>> * We use replicated pools
>> * Replica 2, min replicas 1.
Note to self: Change the docs and default to discourage this. This is rarely
appropriate in production.
You had multiple overlapping drive failures?
Perhaps Ceph itself should also have a warning pop up (in "ceph -s", "ceph
health detail", etc.) when a replicated pool has min_size=1, or an EC pool has
min_size < k+1. Of course it could be muted, but it would give an operator
pause initially when setting that. I think a lot of people assume replica
size=2 is safe
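For anyone reading along, the settings in question can be raised per pool
(pool name is a placeholder; size 3 / min_size 2 is the usual recommendation
for replicated pools in production):

  ceph osd pool set <pool> size 3
  ceph osd pool set <pool> min_size 2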
Need to update the OS Recommendations doc to reflect the latest supported
distros
- https://docs.ceph.com/en/latest/start/os-recommendations/#platforms
- PR from Zac to be reviewed CLT: https://github.com/ceph/ceph/pull/58092
arm64 CI check ready to be made required (request from Rongqi Sun, the arm
In Pablo's unfortunate case it was caused by a SAN incident, so it's
possible that replica 3 wouldn't have saved him.
In this scenario, the architecture is more the origin of the incident than
the number of replicas.
It seems to me that replica 3 has been the default since Firefly => make
replica 2,
Ohhh, so multiple OSD failure domains on a single SAN node? I suspected as
much.
I've experienced a Ceph cluster built on SanDisk InfiniFlash, which was arguably
somewhere between SAN and DAS. Each of the 4 IF chassis drove 4x OSD nodes via
SAS, but it was zoned such that the chassis was the failure domain.
Hi all, I have configured a Ceph cluster to be used as an object store with a
combination of SSDs and HDDs, where the block.db is stored on LVM on the SSDs
and the OSD block is stored on the HDDs.
I have set up one SSD for storing metadata (RocksDB), and five HDDs are
associated with it to store the OSD blocks.
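For reference, a sketch of how such a layout is typically created with
ceph-volume (device names below are placeholders, not my actual command):

  # five HDD-backed OSDs sharing one SSD for their block.db volumes
  ceph-volume lvm batch /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf \
      --db-devices /dev/sdg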
I am deploying Rook 1.10.13 with Ceph 17.2.6 on our Kubernetes clusters. We are
using the Ceph Shared Filesystem a lot and we have never faced an issue.
Lately, we have deployed it on Oracle Linux 9 VMs (previous/existing
deployments use CentOS/RHEL 7) and we are facing the following issue:
We ha