The formula seems correct for a 100 pg/OSD target.
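For reference, the usual rule of thumb behind it (a sketch, assuming replicated pools and the common 100 PGs/OSD target; the numbers below are only an illustration):

# total_pgs = (num_osds * 100) / replica_size, rounded up to the next power of two
# e.g. with 14 OSDs and size=3: (14 * 100) / 3 ~= 467 -> pg_num 512 across all pools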
> On 8 Aug 2018 at 04:21, Satish Patel wrote:
>
> Thanks!
>
> Do you have any comments on Question 1?
>
> On Tue, Aug 7, 2018 at 10:59 AM, Sébastien VIGNERON
> wrote:
>> Question 2:
>>
>> ceph osd pool set-quota <pool-name> max_objects|max_bytes <val>
Question 2:
ceph osd pool set-quota <pool-name> max_objects|max_bytes <val>
set object or byte limit on pool
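For example, to cap a hypothetical pool named "volumes" at 100 GiB and verify it (pool name and limit are placeholders):

ceph osd pool set-quota volumes max_bytes 107374182400   # 100 GiB
ceph osd pool get-quota volumes                          # check current quotas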
> On 7 Aug 2018 at 16:50, Satish Patel wrote:
>
> Folks,
>
> I am a little confused, so I just need clarification: I have 14 OSDs in my
> cluster
Hi,
> On 21 Jul 2018 at 11:52, Marc Roos wrote:
>
>
>
> 1. Why is ceph df not always showing the units G, M, k?
Ceph's default plain output shows human-readable values.
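If you need exact byte counts instead of the rounded human-readable units, the JSON output keeps the raw values (a quick check):

ceph df --format json-pretty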
>
> [@c01 ~]# ceph df
> GLOBAL:
>     SIZE       AVAIL      RAW USED     %RAW USED
>     81448G     31922G     49526G       60
Correct, sorry, I had just read the first question and answered too quickly.
As far as I know, the available space is "shared" (it is a combination of the
OSD drives and the crushmap) between pools using the same device class, but
you can define a quota for each pool if needed.
ceph osd pool set-quota <pool-name> max_objects|max_bytes <val>
# for a specific pool:
ceph osd pool get your_pool_name size
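Note that "size" here is the replica count, not the stored bytes; the output of the command above looks like:

size: 3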
> On 20 Jul 2018 at 10:32, Sébastien VIGNERON wrote:
>
> # for all pools:
> ceph osd pool ls detail
>
>
>> On 20 Jul 2018 at 09:02, si...@turka.nl wrote:
>>
>> Hi,
>>
# for all pools:
ceph osd pool ls detail
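The decreasing capacity you see is expected: the per-pool capacity shown by ceph df is a projection of the remaining raw space divided by the pool's replication factor, so it shrinks as any pool consumes raw space. For per-pool usage details (a quick check):

ceph df detail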
> On 20 Jul 2018 at 09:02, si...@turka.nl wrote:
>
> Hi,
>
> How can I see the size of a pool? When I create a new empty pool I can see
> the capacity of the pool using 'ceph df', but as I start putting data in
> the pool the capacity is decreasing.
>
Hello,
What is your expected workload? VMs, primary storage, backup, object storage,
...?
How many disks do you plan to put in each OSD node?
How many CPU cores? How much RAM per node?
Ceph access protocol(s): CephFS, RBD or objects?
How do you plan to give your clients access to the storage?
Hi,
Did you look at the OpenAttic project?
Cordialement / Best regards,
Sébastien VIGNERON
CRIANN,
Ingénieur / Engineer
Technopôle du Madrillet
745, avenue de l'Université
76800 Saint-Etienne du Rouvray - France
tél. +33 2 32 91 42 91
fax. +33 2 32 91 42 92
http://www.criann.fr
is considered stable and the
ceph-deploy tool is changed.
I think it may be a kernel version consideration: not all distros have the
needed minimum kernel version (and features) to make full use of Luminous.
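For example, to check the running kernel on a client and the feature set reported for connected clients (a quick sanity check, not exhaustive; ceph features is available since Luminous):

uname -r
ceph features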
Cordialement / Best regards,
Sébastien VIGNERON
Your performance hit can come from here. When OSD daemons try to send a big
frame, an MTU misconfiguration blocks it and they must send it again with a
smaller size.
On some switches, you have to set both the global and the per-interface MTU
sizes.
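On Linux you can check and set the interface MTU like this (eth0 is a placeholder for your cluster-network interface; 9000 assumes you want jumbo frames end to end):

ip link show eth0               # shows the current MTU
ip link set dev eth0 mtu 9000   # must match the switches and the other nodes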
Cordialement / Best regards,
Sébastien VIGNERON
As a jumbo frame test, can you try the following?
ping -M do -s 8972 -c 4 IP_of_other_node_within_cluster_network
(8972 = 9000 bytes minus 20 for the IP header and 8 for the ICMP header.)
If you get "ping: sendto: Message too long", jumbo frames are not activated.
Cordialement / Best regards,
Sébastien VIGNERON
Hi,
MTU size? Did you run an iperf test to see the raw bandwidth?
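For example, with iperf3 (the IP is a placeholder for the other node's cluster-network address):

# on one node (server side):
iperf3 -s
# on another node (client side):
iperf3 -c 192.168.1.10 -t 30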
Cordialement / Best regards,
Sébastien VIGNERON
% or above).
I saw some messages on the list about the fstrim tool, which can help reclaim
unused free space, but I don't know if it applies to your case.
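For reference, fstrim runs against a mountpoint (the path is a placeholder; it only helps on storage that supports discard, e.g. thin-provisioned RBD images):

fstrim -v /mnt/yourfs   # -v reports how many bytes were trimmed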
Cordialement / Best regards,
Sébastien VIGNERON
Cordialement / Best regards,
Sébastien VIGNERON
and a recovery in progress. Do your OSDs show some rebalancing of your data?
Does your OSDs' use percentage change over time? (check for changes in
"ceph osd df")
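For example, to watch utilization evolve during recovery (the interval is arbitrary):

watch -n 30 'ceph osd df'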
Cordialement / Best regards,
Sébastien VIGNERON
"op": "take",
"item": -1,
"item_name": "default"
},
{
"op": "chooseleaf_firstn",
"num": 0,
"type": "host"