Thank you.
Shain
From: Anthony D'Atri
Date: Friday, October 18, 2024 at 9:01 AM
To: Shain Miley
Cc: ceph-users@ceph.io
Subject: Re: [ceph-users] Influencing the osd.id when creating or replacing an
osd
Hello,
I am still using ceph-deploy to add OSDs to my cluster. From what I have read,
ceph-deploy does not allow you to specify the osd.id when creating new OSDs;
however, I am wondering if there is a way to influence the number that Ceph will
assign to the next OSD that is created.
I know that …
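Since Ceph always hands out the lowest unused id, one way to steer the next id
is to reserve or reuse it before ceph-deploy runs. A rough sketch (osd.123 and
/dev/sdx are made-up placeholders; adjust for your cluster):

    # keep osd.123's id on the books but wipe its auth/state
    ceph osd destroy osd.123 --yes-i-really-mean-it
    # recreate an osd under that same id on a new device
    ceph-volume lvm create --osd-id 123 --data /dev/sdx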
…decide to test cephfs again in the future.
Thanks again for all your help,
Shain
From: Anthony D'Atri
Date: Sunday, October 13, 2024 at 12:59 PM
To: Shain Miley
Cc: ceph-users@ceph.io
Subject: Re: [ceph-users] Reduced data availability: 3 pgs inactive, 3 pgs down
From: Anthony D'Atri
Date: Sunday, October 13, 2024 at 11:29 AM
To: Shain Miley
Cc: ceph-users@ceph.io
Subject: Re: [ceph-users] Reduced data availability: 3 pgs inactive, 3 pgs down
Hello,
I am seeing the following information after reviewing ‘ceph health detail’:
[WRN] PG_AVAILABILITY: Reduced data availability: 3 pgs inactive, 3 pgs down
pg 0.1a is down, acting [234,35]
pg 0.20 is down, acting [226,267]
pg 0.2f is down, acting [227,161]
When I query each o…
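For anyone following along later, the usual next step is to interrogate the pgs
and the osds they map to, roughly:

    ceph pg 0.1a query          # recovery_state / blocked_by show what the pg waits on
    ceph pg dump_stuck inactive # list every stuck pg in one shot
    ceph osd tree down          # any down osds that these pgs still need?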
…and so they are still reporting as version 12.2.13 because they were not up
during either of the upgrades.
Thank you.
Shain
On 7/29/21, 6:43 PM, "Shain Miley" wrote:
Hello,
I recently upgraded our Luminous ceph cluster to Nautilus. Everything
seemed to go well.
Today I st…
…this upgrade (48 of the 222 OSDs have been upgraded) and I would like to
continue with the upgrade, but do not want to proceed if there is a larger issue
of some sort.
Some of the hosts are showing the correct version (15.2.13) in the Dashboard and
I am not sure why the dashboard would be d…
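Dashboard aside, the versions actually running can be confirmed from the
cluster itself, along these lines (osd.48 is just an example id):

    ceph versions                  # counts of running daemon versions, by type
    ceph tell osd.48 version       # ask a single osd what it is running
    systemctl restart ceph-osd@48  # on its host, restart so it picks up the new binary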
-----Original Message-----
From: Shain Miley [mailto:smi...@npr.org]
Sent: Friday, July 23, 2021 10:48 AM
To: ceph-users@ceph.io
Subject: [ceph-users] Luminous won't fully recover
…active+undersized+degraded, acting [215,201]
--
Thanks,
Shain
Shain Miley | Director of Platform and Infrastructure | Digital Media |
smi...@npr.org
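For the archives: to list everything stuck the way that pg is, something like
the following works (the pool name 'rbd' is a placeholder):

    ceph pg dump_stuck undersized  # pgs with fewer acting osds than the pool's size
    ceph pg dump_stuck degraded
    ceph osd pool get rbd size     # compare the pool's size with the acting set above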
…or the osd assignment.
I understand what you mean about not focusing on the osd ids... but my OCD is
making me ask the question.
Thanks,
Shain
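For anyone else with the same itch: Ceph simply takes the lowest unused id, so
the next one can be predicted by finding the first hole in the id space. A
quick-and-dirty check:

    ceph osd ls   # every existing osd id, sorted, one per line
    # the first gap in the sequence (or the count, if there is none) is the next id
    ceph osd ls | awk '$1 != NR-1 {print NR-1; found=1; exit} END {if (!found) print NR}'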
On 9/11/20, 9:45 AM, "George Shuklin" wrote:
On 11/09/2020 16:11, Shain Miley wrote:
> Hello,
> I have been wondering for …
…assignment.
I am currently using ceph-deploy to handle adding nodes to the cluster.
Thanks in advance,
Shain
Shain Miley | Director of Platform and Infrastructure | Digital Media |
smi...@npr.org
…bluestore and that this is really nothing to worry about.
Thanks,
Shain
On 9/9/20, 11:16 AM, "Shain Miley" wrote:
Hi,
I recently added 3 new servers to our Ceph cluster. These servers use the
H740p mini raid card and I had to install the HWE kernel in Ubuntu 16.04 in
order …
Is this normal for deployments going forward…or did something go wrong? These
are 12TB drives but they are showing up as 47G here instead.
We are using ceph version 12.2.13 and I installed this using ceph-deploy version
2.0.1.
Thanks in advance,
Shain
Shain Miley | Director of Platform and Infrastructure | Digital Media |
smi...@npr.org
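That 47G is very likely not the drive at all. With bluestore the 12TB device is
consumed raw, so df only sees the small metadata mount under /var/lib/ceph/osd/
(with ceph-volume that is a tmpfs, sized by default to half the host's RAM,
which would explain a 47G figure on a box with roughly 94G of memory). A way to
sanity-check, assuming osd.0:

    df -h /var/lib/ceph/osd/ceph-0  # only the small metadata mount, not the data device
    ceph osd df                     # per-osd size and use as ceph actually accounts it
    ceph-volume lvm list            # maps each osd back to its underlying block device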
On … 2020 at 6:21 PM Shain Miley wrote:
>
> Hi,
> A few weeks ago several of our rbd images became unresponsive after a few
of our OSDs reached a near full state.
>
> Another member of the team rebooted the server that the rbd images are
mounted on in an attempt to re…
Aug 31 11:47:06 rbd1 kernel: [2159048.204440] R10: c0ed0c00 R11: 0206 R12: 02424230
Aug 31 11:47:06 rbd1 kernel: [2159048.204441] R13: 02424210 R14: R15: 0003
Any suggestions on what I can/should do next?
Thanks in advance,
Shain
Shain Miley | Director of Platform and Infrastructure | Digital Media |
smi...@npr.org
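For anyone who hits this later: once any OSD crosses the full ratio the cluster
blocks writes, and kernel rbd clients hang exactly like this until space is
freed. A rough triage sketch, not specific to this cluster:

    ceph health detail    # names the nearfull/full osds
    ceph osd df           # per-osd utilization; look for outliers
    # last resort: raise the full cutoff slightly so clients can recover,
    # then free space or add capacity and set it back
    ceph osd set-full-ratio 0.97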
Hi,
We are thinking about upgrading our cluster, which is currently running ceph
version 12.2.12. I am wondering if we should be looking at upgrading to the
latest version of Mimic or the latest version of Nautilus.
Can anyone here please provide a suggestion… I continue to be a little bit
confused about the …
…something that is flexible enough for our environment going forward.
Thanks in advance,
Shain
--
NPR | Shain Miley | Manager of Infrastructure, Digital Media | smi...@npr.org |
202.513.3649