Dear all,
I have a Ceph cluster where, so far, all OSDs have been rotational HDDs
(there are actually some SSDs, but they are used only for block.db and the WAL).
I now want to add some SSD disks to be used as OSDs. My use case is:
1) for the existing pools, keep using only HDD disks
2) create some new pools using only the new SSD disks
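A minimal sketch of the usual way to achieve this with CRUSH device classes
(assuming the HDD OSDs report device class "hdd" and the new SSD OSDs report
"ssd"; the pool and rule names below are only examples):

  # CRUSH rules restricted to one device class each
  ceph osd crush rule create-replicated replicated-hdd default host hdd
  ceph osd crush rule create-replicated replicated-ssd default host ssd

  # pin each existing pool to the hdd-only rule so it never lands on SSD OSDs
  ceph osd pool set <existing-pool> crush_rule replicated-hdd

  # create new pools against the ssd-only rule
  ceph osd pool create fast-pool 128 128 replicated replicated-ssd

With that in place, the existing pools stay on the HDDs and only the new
pools use the SSD OSDs.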
Hello,
We would like to run a RAID1 between local storage and an RBD device. This
would allow us to sustain network or Ceph failures, and it should also give
better read performance, since we would mark the RBD member as write-mostly in mdadm.
Basically we would like to implement
https://discord.com
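For illustration, a minimal sketch of such a setup with mdadm (the pool/image
name, /dev/sdb, /dev/rbd0 and the write-behind value are placeholders, not
taken from the thread):

  # map the RBD image; rbd map prints the device node, e.g. /dev/rbd0
  sudo rbd map rbdpool/mirror-img

  # RAID1 with the local disk as the normal member and the RBD device marked
  # write-mostly; an internal bitmap is required for write-behind
  sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 \
       --bitmap=internal --write-behind=256 \
       /dev/sdb --write-mostly /dev/rbd0

With write-mostly set, reads are served from the local disk whenever possible,
while writes still go to both members.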
The overall progress on this release is looking much better and if we
can approve it we can plan to publish it early next week.
Still seeking approvals
rados - Neha, Laura
rook - Sébastien Han
cephadm - Adam
dashboard - Ernesto
rgw - Casey
rbd - Ilya (full rbd run in progress now)
krbd - Ilya
fs
On Fri, Jan 20, 2023 at 11:39 AM Yuri Weinstein wrote:
>
> The overall progress on this release is looking much better and if we
> can approve it we can plan to publish it early next week.
>
> Still seeking approvals
>
> rados - Neha, Laura
> rook - Sébastien Han
> cephadm - Adam
> dashboard - Ernesto
cephadm approved. Known failures.
On Fri, Jan 20, 2023 at 11:39 AM Yuri Weinstein wrote:
> The overall progress on this release is looking much better and if we
> can approve it we can plan to publish it early next week.
>
> Still seeking approvals
>
> rados - Neha, Laura
> rook - Sébastien Han
CCing Nizam as Dashboard lead for review & approval.
Kind Regards,
Ernesto
On Fri, Jan 20, 2023 at 6:42 PM Adam King wrote:
> cephadm approved. Known failures.
>
> On Fri, Jan 20, 2023 at 11:39 AM Yuri Weinstein
> wrote:
>
>> The overall progress on this release is looking much better and if we
>> can approve it we can plan to publish it early next week.
Hello,
Thank you for the valuable info, and especially for the Slack link (it's not
listed on the community page).
The ceph-volume command was issued in the following manner:
log in to my 1st VPS, from which I performed the bootstrap with cephadm
exec
> sudo cephadm shell
which gets me a root shell inside the cephadm container
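For reference, a sketch of that sequence (the host name is a placeholder, and
ceph-volume lvm list is only a stand-in for the actual ceph-volume command,
which isn't quoted here):

  ssh myuser@vps1        # the node the cluster was bootstrapped from
  sudo cephadm shell     # opens a root shell inside the cephadm container
  ceph-volume lvm list   # stand-in for the actual ceph-volume invocation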
From my end, rados looks good. All failures are known. Leaving final
approval to Neha.
On Fri, Jan 20, 2023 at 12:03 PM Ernesto Puerta wrote:
> CCing Nizam as Dashboard lead for review & approval.
>
> Kind Regards,
> Ernesto
>
>
> On Fri, Jan 20, 2023 at 6:42 PM Adam King wrote:
>
>> cephadm approved. Known failures.
On Fri, Jan 20, 2023 at 12:36 PM Laura Flores wrote:
> From my end, rados looks good. All failures are known. Leaving final
> approval to Neha.
>
> On Fri, Jan 20, 2023 at 12:03 PM Ernesto Puerta
> wrote:
>
>> CCing Nizam as Dashboard lead for review & approval.
>>
>> Kind Regards,
>> Ernesto
>>
Hello,
On Mon, Jan 16, 2023 at 11:04 AM thanh son le wrote:
>
> Hi,
>
> I have been studying the documentation from Ceph and RADOS, but I could not
> find any metrics to measure the number of read/write operations for each file.
> I understand that CephFS is the front end and that the file is going to be stored
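As general background (not necessarily the approach suggested in the reply):
the MDS exposes aggregate read/write operation counters per daemon rather than
per file; a minimal way to inspect them, assuming access to the MDS host and
its admin socket:

  # run on the host where the MDS daemon runs
  ceph daemon mds.<name> perf dump | grep -Ei 'read|write'

These counters are per daemon, so per-file numbers would have to come from
client-side or higher-level tooling.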
Dashboard lgtm!
Regards,
Nizam
On Fri, Jan 20, 2023, 22:09 Yuri Weinstein wrote:
> The overall progress on this release is looking much better and if we
> can approve it we can plan to publish it early next week.
>
> Still seeking approvals
>
> rados - Neha, Laura
> rook - Sébastien Han
> cephadm - Adam