You were very clear.
Create one pool containing all drives.
You can deploy more than one OSD on an NVMe drive, each using a fraction of
the drive's capacity. Not all drives have to have the same number of OSDs.
If you deploy 2x OSDs on the 7.6TB drives and 1x OSD on the 3.8TB drives, you
will have 15 OSDs total, each 3.8TB.
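For reference, a minimal sketch of how that split could be done per node with
ceph-volume; the device names below are assumptions, not taken from the
original post:

    # one OSD on each 3.8TB NVMe (one OSD per device is the default)
    ceph-volume lvm batch /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1
    # two OSDs on the 7.6TB NVMe, so every OSD ends up roughly 3.8TB
    ceph-volume lvm batch --osds-per-device 2 /dev/nvme3n1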
Hi and thanks,
Maybe I was not able to express myself correctly.
I have 3 nodes, and I will be using 3 replicas for the data, which will be
VMs disks.
*Each node has 4 disks*:
- 3 NVMe disks of 3.8TB
- and 1 NVMe disk of 7.6TB
All three nodes are equivalent.
As mentioned above, one pool
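For what it's worth, a minimal sketch of the single replicated pool described
above; the pool name and PG count are placeholders, and on Proxmox this would
normally be done through the Proxmox tooling rather than by hand:

    # one replicated pool for the VM disks, 3 replicas across the 3 nodes
    ceph osd pool create vm-disks 128
    ceph osd pool set vm-disks size 3
    ceph osd pool set vm-disks min_size 2
    rbd pool init vm-disks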
Hi Casey,
Thanks a lot for the clarification. I feel that the zonegroup concept made
great sense at the beginning, when the multisite feature was conceived and
(I suspect) zones always synced from all other zones within a zonegroup.
However, once "sync_from" was introduced and, later, the sync policy
On Tue, 4 Jul 2023 at 10:00, Matthew Booth wrote:
>
> On Mon, 3 Jul 2023 at 18:33, Ilya Dryomov wrote:
> >
> > On Mon, Jul 3, 2023 at 6:58 PM Mark Nelson wrote:
> > >
> > >
> > > On 7/3/23 04:53, Matthew Booth wrote:
> > > > On Thu, 29 Jun 2023 at 14:11, Mark Nelson wrote:
> > > > This cont
On Tue, 4 Jul 2023 at 14:24, Matthew Booth wrote:
> On Tue, 4 Jul 2023 at 10:45, Yin, Congmin wrote:
> >
> > Hi Matthew,
> >
> > I see "rbd with pwl cache: 5210112 ns". This latency is beyond my
> > expectations, and I believe it is unlikely to occur. In theory, this value
> > should be around a few hundred microseconds.
Thank you guys for the help here! We discovered the issue. We deployed the
whole system on Ubuntu, and it seems that when TCMU-runner is installed, some
folders are not created; as a consequence, the iSCSI reservations do not work,
since they (the iSCSI reservations) write to a file the reser
Thanks to all of you who tried to help here. We discovered the issue, and it
had nothing to do with Ceph or the iSCSI GW.
The issue was caused by a switch that was acting as the "router" for the
network of the iSCSI GW. All end clients (applications) were separated into
different VLANs, and netwo
There aren’t enough drives to split into multiple pools.
Deploy 1 OSD on each of the 3.8T devices and 2 OSDs on each of the 7.6T devices.
Or, alternatively, 2 and 4.
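If the cluster were managed by cephadm (a plain Proxmox HCI setup typically is
not), the same split could also be expressed declaratively; the service IDs and
size bounds below are assumptions:

    cat > osd-split.yaml <<'EOF'
    service_type: osd
    service_id: big-nvme-split
    placement:
      host_pattern: '*'
    spec:
      data_devices:
        size: '7000G:'      # only the 7.6TB NVMe
      osds_per_device: 2
    ---
    service_type: osd
    service_id: small-nvme
    placement:
      host_pattern: '*'
    spec:
      data_devices:
        size: ':4000G'      # the 3.8TB NVMes
    EOF
    ceph orch apply -i osd-split.yaml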
> On Jul 4, 2023, at 3:44 AM, Eneko Lacunza wrote:
>
> Hi,
>
> On 3/7/23 at 17:27, wodel youchi wrote:
>> I will be deploying a Pr
On Tue, 4 Jul 2023 at 10:45, Yin, Congmin wrote:
>
> Hi Matthew,
>
> I see "rbd with pwl cache: 5210112 ns". This latency is beyond my
> expectations, and I believe it is unlikely to occur. In theory, this value
> should be around a few hundred microseconds. But I'm not sure what went wrong
>
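For anyone following the thread, the persistent write-back (pwl) cache being
measured is enabled per pool or per image roughly like this; the pool name,
cache path and size below are placeholders, not Matthew's actual settings:

    rbd config pool set mypool rbd_plugins pwl_cache
    rbd config pool set mypool rbd_persistent_cache_mode ssd
    rbd config pool set mypool rbd_persistent_cache_path /var/lib/rbd-pwl
    rbd config pool set mypool rbd_persistent_cache_size 1G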
Are there any ideas on how to work around this?
We disabled the logging so we do not run out of disk space, but the rgw
daemon still requires A LOT of CPU because of this.
On Wed, 21 Jun 2023 at 10:45, Boris Behrens wrote:
> I've updated the dc3 site from Octopus to Pacific and the problem is still
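In case it helps anyone else hitting the disk-space side of this, turning the
rgw logging down can be done at runtime roughly like the following (the high
CPU usage itself is a separate question, and the config section name assumes
the rgw daemons run under client.rgw):

    ceph config set client.rgw debug_rgw 0/0
    ceph config set client.rgw debug_ms 0/0
    # optionally stop writing the S3 ops log entirely
    ceph config set client.rgw rgw_enable_ops_log false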
Hi,
Thank you very much! That's exactly what I was looking for. I'm in no
hurry, as long as it will eventually be able to remove the data.
Cheers,
Thomas
On 04.07.23 12:23, Dhairya Parmar wrote:
Hi,
These symptoms look relevant to [0]; its PR is already merged in main and
backported to quincy, but the pacific and reef backports are still pending.
Hi,
These symptoms look relevant to [0]; its PR is already merged in main and
backported to quincy, but the pacific and reef backports are still pending.
[0] https://tracker.ceph.com/issues/59569
- Dhairya
On Tue, Jul 4, 2023 at 1:54 AM Thomas Widhalm wrote:
> Hi,
>
> I had some trouble in the past with my CephFS
On Mon, 3 Jul 2023 at 18:33, Ilya Dryomov wrote:
>
> On Mon, Jul 3, 2023 at 6:58 PM Mark Nelson wrote:
> >
> >
> > On 7/3/23 04:53, Matthew Booth wrote:
> > > On Thu, 29 Jun 2023 at 14:11, Mark Nelson wrote:
> > > This container runs:
> > > fio --rw=write --ioengine=sync --fdatasync
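For readers trying to reproduce this, the quoted fio command is cut off above;
a representative invocation of that kind of fsync-heavy write test would look
roughly like the following, where the directory, size, block size and job name
are assumptions rather than Matthew's actual parameters:

    fio --rw=write --ioengine=sync --fdatasync=1 \
        --directory=/test --size=100m --bs=4k --name=fsync-latency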
Hi,
On 3/7/23 at 17:27, wodel youchi wrote:
I will be deploying a Proxmox HCI cluster with 3 nodes. Each node has 3
NVMe disks of 3.8TB each and a 4th NVMe disk of 7.6TB. Technically I need
one pool.
Is it good practice to use all disks to create the one pool I need, or is
it better to cr