Hi Patrick,
On 7/28/22 16:22, Calhoun, Patrick wrote:
> In a new OSD node with 24 hdd (16 TB each) and 2 ssd (1.44 TB each), I'd like
> to have "ceph orch" allocate WAL and DB on the ssd devices.
>
> I use the following service spec:
> spec:
>   data_devices:
>     rotational: 1
>     size: '14T'
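To place the DB (and, implicitly, the WAL) on the SSDs, the usual approach is
to add a db_devices filter next to data_devices; when no wal_devices is given,
the WAL is colocated with the DB. A sketch of such a spec (untested; the
service_id and placement are placeholders, and '14T:' uses the drive group
range syntax for "14T or larger"):

  service_type: osd
  service_id: osd_hdd_ssd_db      # placeholder name
  placement:
    host_pattern: '*'             # placeholder; restrict to the new node as needed
  spec:
    data_devices:
      rotational: 1               # the 16 TB spinners
      size: '14T:'                # match devices of at least 14T
    db_devices:
      rotational: 0               # the two SSDs receive the DB/WAL volumes

Something like "ceph orch apply -i osd-spec.yaml --dry-run" should preview
which disks each filter matches before anything is created.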
Thanks for the response. Do you have any idea when this might make it into an
actual release? It's not that critical, as I can work around it, but I'm
curious more than anything about when I might be able to try out these
bucket-level policies in dev.
--
Mark Selby
Sr Linux Administrator, The Voleon Group
Thanks for taking the time to respond - it is greatly appreciated.
I am starting to grok how this all works. I'm going to have to play around with
the inheritance/subset scenario from zonegroup-level definitions to bucket-level
definitions.
I will repost if I have any more questions.
--
Mark Selby
The branch has been deleted.
On 7/26/22 12:39, David Galloway wrote:
Hi all,
I slowly worked my way through re-targeting any lingering ceph.git PRs
(there were 300+ of them) from the master branch to the main branch. There were a
few dozen repos I wanted to rename the master branch on, and the tool I
u
On Tue, Jul 26, 2022 at 1:41 PM Peter Lieven wrote:
>
> > On 21.07.22 at 17:50, Ilya Dryomov wrote:
> > On Thu, Jul 21, 2022 at 11:42 AM Peter Lieven wrote:
> >> On 19.07.22 at 17:57, Ilya Dryomov wrote:
> >>> On Tue, Jul 19, 2022 at 5:10 PM Peter Lieven wrote:
> On 24.06.22 at 16:13,
This is a hotfix release fixing a regression introduced in 17.2.2. We
recommend that users update to this release. For detailed release notes
with links and a changelog, please refer to the official blog entry at
https://ceph.io/en/news/blog/2022/v17-2-3-quincy-released
Notable Changes
---------------
Thanks, Arthur,
I think you are right about that bug looking very similar to what I've
observed. I'll try to remember to update the list once the fix is merged and
released and I get a chance to test it.
I'm hoping somebody can comment on what Ceph's current best practices are for
sizing WAL/DB.
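For concreteness, with 24 HDDs sharing 2 SSDs per node, each 1.44 TB SSD would
carry 12 DB volumes, i.e. roughly 120 GB apiece if split evenly. If an explicit
size is preferred over letting the SSD be carved up evenly, the drive group
format appears to support pinning it, along these lines (untested sketch; the
120G value is just that division, not a sizing recommendation):

  spec:
    data_devices:
      rotational: 1
    db_devices:
      rotational: 0
    block_db_size: 120G           # hypothetical: 1.44 TB SSD / 12 HDDs sharing it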
Oops, I sent my question about why v17.2.2 was released with the mgr crash bug
only to David by mistake. I'm quoting this conversation because it should be
useful to other Ceph users.
On Sat, Jul 30, 2022 at 7:33, Satoru Takeuchi wrote:
> Hi David,
>
> On Sat, Jul 30, 2022 at 6:59, David Galloway wrote:
>
>> Hi Satoru,
>>
>> You are
Hi - I am trying to add a new hdd to each of my 3 servers, and want to use a
spare ssd partition on the servers for the db+wal. My other OSDs are set up
the same way, but I can't seem to keep Ceph from creating the OSDs on the
drives before I can actually create the OSD
I am trying to use th
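If this is a cephadm cluster, the automatic OSD creation usually comes from an
osd.all-available-devices service that claims any empty drive. Marking that
service unmanaged stops cephadm from grabbing the disks, after which the OSDs
can be created deliberately. A sketch, assuming the service_id matches what
"ceph orch ls" reports:

  service_type: osd
  service_id: all-available-devices   # assumption: name of the auto-created service
  unmanaged: true                     # stop automatic OSD creation
  placement:
    host_pattern: '*'
  spec:
    data_devices:
      all: true

The one-line equivalent is
"ceph orch apply osd --all-available-devices --unmanaged=true".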