Thanks, Subhachandra!
On Fri, Aug 10, 2018 at 6:01 PM, Subhachandra Chandra wrote:
> The % should be based on how much of the storage you expect that pool to
> take up out of the total available. 256 PGs with replication 3 will
> distribute themselves as 256 * 3 / 14, which is about 54 per OSD.
The % should be based on how much of the storage you expect that pool to
take up out of the total available. 256 PGs with replication 3 will
distribute themselves as 256 * 3 / 14, which is about 54 per OSD. For the
smaller pool, 16 seems too low. You can go with 32 and 256 if you want a
lower number of PGs per OSD.
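As a quick check of that arithmetic (a sketch assuming the 14 OSDs and replication 3 discussed in this thread):

    # PG copies per OSD = pool PGs * replicas / OSDs (assumes 14 OSDs, replica 3)
    echo $(( (256 * 3) / 14 ))   # ~54 PG copies per OSD from a 256-PG pool
    echo $(( (32 * 3) / 14 ))    # ~6 more from a 32-PG pool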
Folks,
I used your link to calculate PGs and did the following:
Total OSDs: 14
Replicas: 3
Total pools: 2 (images & vms). In %Data I gave 5% to images and 95% to
vms (OpenStack).
https://ceph.com/pgcalc/
It gave me the following result:
vms - 512 PGs
images - 16 PGs
To be on the safe side I set vms to 256 PGs; is that OK?
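As a rough check on those numbers (a sketch that only counts PG copies per OSD for 14 OSDs and replica 3; pgcalc applies its own minimums and rounding, so its suggestions can differ):

    # PG copies per OSD = sum over pools of (pool PGs * replicas) / OSDs
    echo $(( (512 * 3 + 16 * 3) / 14 ))   # ~113 per OSD with vms=512, images=16
    echo $(( (256 * 3 + 16 * 3) / 14 ))   # ~58 per OSD with vms=256, images=16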
Re-sending it, because I found I had lost my membership, so I wanted to make
sure my email went through.
On Fri, Aug 10, 2018 at 7:07 AM, Satish Patel wrote:
> Thanks,
>
> Can you explain the %Data field in that calculation: is this the total
> data usage for a specific pool, or the overall total?
Thanks,
Can you explain the %Data field in that calculation: is this the total data
usage for a specific pool, or the overall total?
For example:
Pool-1 is small, so should I use 20%?
Pool-2 is bigger, so should I use 80%?
I'm confused there, so can you give me an example of how to calculate that
field?
I have used the calculator at https://ceph.com/pgcalc/, which looks at the
relative sizes of the pools and makes a suggestion.
Subhachandra
On Thu, Aug 9, 2018 at 1:11 PM, Satish Patel wrote:
> Thanks Subhachandra,
>
> That is a good point, but how do I calculate the PG count based on size?
Given your formula, you would have 512 PGs in total. Instead of dividing
that evenly, you could also do 128 PGs for pool-1 and 384 PGs for pool-2,
which gives you 1/4 and 3/4 of the total PGs. This might not be 100%
optimal for the pools, but it keeps the calculated total PG count and the
100 PG/OSD target.
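A minimal sketch of that split, assuming the 512-PG total, the 1/4 : 3/4 ratio above, and the 14-OSD / replica-3 cluster from this thread:

    # Split a 512-PG budget between two pools at roughly 1/4 and 3/4.
    TOTAL_PGS=512
    POOL1_PGS=$(( TOTAL_PGS / 4 ))            # 128
    POOL2_PGS=$(( TOTAL_PGS - POOL1_PGS ))    # 384
    echo "$POOL1_PGS $POOL2_PGS"
    echo $(( (TOTAL_PGS * 3) / 14 ))          # ~109 PG copies per OSD overall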
Thanks Subhachandra,
That is a good point, but how do I calculate the PG count based on size?
On Thu, Aug 9, 2018 at 1:42 PM, Subhachandra Chandra wrote:
> If pool1 is going to be much smaller than pool2, you may want more PGs in
> pool2 for better distribution of data.
If pool1 is going to be much smaller than pool2, you may want more PGs in
pool2 for better distribution of data.
On Wed, Aug 8, 2018 at 12:40 AM, Sébastien VIGNERON
<sebastien.vigne...@criann.fr> wrote:
> The formula seems correct for a 100 pg/OSD target.
The formula seems correct for a 100 pg/OSD target.
> On Aug 8, 2018, at 04:21, Satish Patel wrote:
>
> Thanks!
>
> Do you have any comments on Question 1?
>
> On Tue, Aug 7, 2018 at 10:59 AM, Sébastien VIGNERON wrote:
>> Question 2:
>>
>> ceph osd pool set-quota <poolname> max_objects|max_bytes <val>
Thanks!
Do you have any comments on Question 1?
On Tue, Aug 7, 2018 at 10:59 AM, Sébastien VIGNERON wrote:
> Question 2:
>
> ceph osd pool set-quota <poolname> max_objects|max_bytes <val>
> set object or byte limit on pool
Question 2:
ceph osd pool set-quota <poolname> max_objects|max_bytes <val>
set object or byte limit on pool
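For example (the quota values below are made up for illustration; only the pool names come from this thread):

    # Hypothetical limits; pick values that match your capacity planning.
    ceph osd pool set-quota vms max_bytes 1099511627776   # cap vms at 1 TiB
    ceph osd pool set-quota images max_objects 1000000    # cap images at 1M objects
    ceph osd pool get-quota vms                           # check the current quota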
> On Aug 7, 2018, at 16:50, Satish Patel wrote:
>
> Folks,
>
> I am a little confused, so I just need clarification. I have 14 OSDs in
> my cluster and I want to create two pools (pool-1 & pool-2). How do I
> divide PGs between the two pools with replication 3?
Folks,
I am a little confused, so I just need clarification. I have 14 OSDs in my
cluster and I want to create two pools (pool-1 & pool-2). How do I divide
PGs between the two pools with replication 3?
Question 1:
Is this the correct formula?
14 * 100 / 3 / 2 = 233 (rounded up to the next power of 2: 256)
So should I give 256 PGs to each pool?
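A minimal sketch of that formula (assuming the 100 PG/OSD target, 14 OSDs, replica 3, and 2 pools given above):

    # PGs per pool = (OSDs * target PGs per OSD) / (replicas * pools)
    echo $(( (14 * 100) / 3 / 2 ))   # 233 -> round up to the next power of 2: 256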