this and keep it all bluestore
2. we only use the cluster for RBDs.
--
Philip Brown| Sr. Linux System Administrator | Medata, Inc.
5 Peters Canyon Rd Suite 250
Irvine CA 92606
Office 714.918.1310| Fax 714.918.1325
pbr...@medata.com| www.medata.com
How should the storage be divided up?
The big questions are:
* 1 pool, or multiple, and why?
* many RBDs, few RBDs, or a single RBD per pool? Why?
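For concreteness, a minimal sketch of the kind of layout being asked about (pool and image names, sizes, and PG count are illustrative assumptions, not a recommendation):

ceph osd pool create vmware-rbd 128                 # one replicated pool for all VMware datastores
ceph osd pool application enable vmware-rbd rbd
rbd create vmware-rbd/datastore1 --size 4T          # one RBD image per VMware datastore
rbd create vmware-rbd/datastore2 --size 4T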
Interesting.
I thought that when you defined a pool, and then defined an RBD within that pool, any auto-replication stayed within that pool?
So what kind of "load balancing" do you mean?
I'm confused.
- Original Message -
From: "Paul Emmerich"
To: "Philip Brown"
I thought there might be some association between pools and a theoretical preferred iSCSI gateway.
- Original Message -
From: "Paul Emmerich"
To: "Philip Brown"
Cc: "ceph-users"
Sent: Thursday, December 5, 2019 8:16:09 AM
Subject: Re: [ceph-users] best pool usage for vmware backing
- Original Message -
From: "Paul Emmerich"
To: "Philip Brown"
Cc: "ceph-users"
Sent: Thursday, December 5, 2019 11:08:23 AM
Subject: Re: [ceph-users] best pool usage for vmware backing
No, you obviously don't need multiple pools for load balancing.
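A hedged illustration of why a single pool already spreads load (the pool/image names follow the earlier sketch; the object name shown is a placeholder, not a real ID):

rbd info vmware-rbd/datastore1
# an RBD image is striped into many (by default 4 MB) objects named rbd_data.<image-id>.<offset>
ceph osd map vmware-rbd rbd_data.<image-id>.0000000000000000
# each object maps to its own placement group and OSD set, so I/O fans out across the whole cluster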
Do I really have to partition the shared SSD myself, and then go hand-manage the slicing?
ceph-volume lvm create --data /dev/sdc --block.wal /dev/sdx1   # OSD on sdc, WAL on first SSD partition
ceph-volume lvm create --data /dev/sdd --block.wal /dev/sdx2   # OSD on sdd, WAL on second SSD partition
ceph-volume lvm create --data /dev/sde --block.wal /dev/sdx3   # OSD on sde, WAL on third SSD partition
?
Or can I get away with some other, simpler usage?
--
Philip
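A hedged sketch of the kind of simpler invocation being asked about: ceph-volume's batch mode can carve up a shared SSD itself (device names are taken from the question; exact flag support varies by release):

# older batch behaviour: pass rotational and solid-state devices together and let ceph-volume split them
ceph-volume lvm batch /dev/sdc /dev/sdd /dev/sde /dev/sdx
# newer releases also accept explicit placement flags for the WAL (or DB) device
ceph-volume lvm batch /dev/sdc /dev/sdd /dev/sde --wal-devices /dev/sdx
# --report previews the proposed layout without creating anything
ceph-volume lvm batch --report /dev/sdc /dev/sdd /dev/sde /dev/sdx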
Interesting. What did the partitioning look like?
- Original Message -
From: "Daniel Sung"
To: "Nathan Fish"
Cc: "Philip Brown" , "ceph-users"
Sent: Tuesday, December 10, 2019 1:21:36 AM
Subject: Re: [ceph-users] sharing single SSD across multip
es: /dev/sdb
I had seen various claims here and there about how ceph would just automatically figure things out, but I hadn't seen any real-world examples.
Thank you for posting.
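As an aside, a hedged way to inspect what ceph-volume actually laid out on a shared device (output fields vary by release; /dev/sdb matches the quoted example):

ceph-volume lvm list /dev/sdb    # shows which OSDs keep their block/WAL/DB on this device
lvs -o +devices                  # the underlying LVM logical volumes and the PVs they sit on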
- Original Message -
From: "Daniel Sung"
To: "Philip Brown"
Cc: "ceph-users&q
So there is no way to designate certain disks as a high performance group and allocate certain RBDs to only use that set of disks.
Pools only control things like the replication count and the number of placement groups.
I'd have to set up a whole new ceph cluster for the type of behavior I want.
Am I correct?
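For the record, a hedged sketch of how the replies address this with device classes and CRUSH rules, no separate cluster needed (the rule and pool names here are illustrative):

# ceph normally auto-assigns a class (hdd/ssd/nvme) to each OSD; it can also be set explicitly
ceph osd crush set-device-class ssd osd.10 osd.11 osd.12
# create a replicated CRUSH rule that only chooses OSDs of class "ssd"
ceph osd crush rule create-replicated fast-replicated default host ssd
# point a pool (and therefore every RBD in it) at that rule
ceph osd pool set vmware-rbd-fast crush_rule fast-replicated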
Sounds very useful.
Any online example documentation for this?
I haven't found any so far.
- Original Message -
From: "Nathan Fish"
To: "Marc Roos"
Cc: "ceph-users" , "Philip Brown"
Sent: Monday, December 16, 2019 2:07:44 PM
Subject: Re: [ce
Yes, I saw that, thanks.
Unfortunately, that doesn't show the use of "custom classes" that someone hinted at.
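For illustration, a hedged sketch of a custom (non-built-in) device class, since the standard docs mostly show hdd/ssd/nvme (the class name "fastssd" is made up here):

# an OSD keeps its existing class until it is removed, so clear it first
ceph osd crush rm-device-class osd.20
ceph osd crush set-device-class fastssd osd.20
# a CRUSH rule can then select on the custom class exactly like a built-in one
ceph osd crush rule create-replicated fastssd-rule default host fastssd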
- Original Message -
From: dhils...@performair.com
To: "ceph-users"
Cc: "Philip Brown"
Sent: Monday, December 16, 2019 3:38:49 PM
Subject: RE: Separate
Disk utilization only goes as high as about 60% on a per-device basis.
CPU is idle.
Doesn't seem like the network interface is capped either.
So... how do I improve RBD throughput?
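One hedged way to check whether client-side parallelism is the limit, rather than the OSDs or the network (the pool/image names are placeholders; the sizes and thread counts are just starting points):

# drive the image with more concurrent I/O and see whether throughput scales
rbd bench --io-type write --io-size 4M --io-threads 16 --io-total 10G --io-pattern seq vmware-rbd/datastore1
# compare against a low-queue-depth run
rbd bench --io-type write --io-size 4M --io-threads 1 --io-total 2G --io-pattern seq vmware-rbd/datastore1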
The odd thing is:
the network interfaces on the gateways don't seem to be at 100% capacity,
and the OSD disks don't seem to be at 100% utilization.
So I'm confused about where this could be getting held up.
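A hedged pair of checks for where the time is going when neither the network nor the disks look saturated (both are stock ceph commands; what counts as "slow" depends on the hardware):

ceph osd perf          # per-OSD commit/apply latency; a few slow OSDs can cap a whole image
ceph osd pool stats    # per-pool client I/O rates, i.e. what the cluster thinks it is delivering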
- Original Message -
From: "Wido den Hollander"
To: "Philip Brown&q
And should I "turn that knob" up?
- Original Message -
From: "Wido den Hollander"
To: "Philip Brown" , "ceph-users"
Sent: Tuesday, January 14, 2020 12:42:48 AM
Subject: Re: [ceph-users] where does 100% RBD utilization come from?
The util is
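For general background (this is the standard iostat meaning of the metric, offered as an aside rather than a reconstruction of the truncated reply above): %util is the share of wall-clock time the device had at least one request in flight, so a device that services requests in parallel, like an RBD, can report 100% while still having headroom.

iostat -x /dev/rbd0 5   # device name assumes a kernel-mapped RBD; watch %util together with await and the queue-size column before treating 100% as saturation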