Hi,
I have created a test Ceph cluster with Ceph Octopus using cephadm.
The cluster's total raw disk capacity is 262 TB, but it only allows about
132 TB to be used.
I have not set a quota on any of the pools. What could be the issue?
Output from `ceph -s`:
cluster:
id: f8bc7682-0d11-11eb-a332-0cc47
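One common reason for seeing roughly half of the raw capacity as usable is a
replicated pool with size 2: `ceph df` reports MAX AVAIL after replication, and
262 TB / 2 is about 131 TB. A quick way to check (the pool name "cephfs_data"
below is only a placeholder):

  # Raw capacity vs. per-pool MAX AVAIL (already divided by the replication factor)
  ceph df

  # List pools with their size/min_size, or query a single pool
  ceph osd pool ls detail
  ceph osd pool get cephfs_data size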
Can you post your crush map? Perhaps some OSDs are in the wrong place.
On Sat, Oct 24, 2020 at 8:51 AM Amudhan P wrote:
>
> Hi,
>
> I have created a test Ceph cluster with Ceph Octopus using cephadm.
>
> The cluster's total raw disk capacity is 262 TB, but it only allows about
> 132 TB to be used.
> I have not set a quota on any of the pools. What could be the issue?
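For reference, one way to extract the CRUSH map in readable form (the file
names below are just placeholders):

  # Export the compiled CRUSH map and decompile it to plain text
  ceph osd getcrushmap -o crushmap.bin
  crushtool -d crushmap.bin -o crushmap.txt

  # A quicker view of the bucket hierarchy without decompiling
  ceph osd crush tree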
Hi Nathan,
I've attached the crush map output.
Let me know if you find anything odd.
On Sat, Oct 24, 2020 at 6:47 PM Nathan Fish wrote:
> Can you post your crush map? Perhaps some OSDs are in the wrong place.
>
> On Sat, Oct 24, 2020 at 8:51 AM Amudhan P wrote:
> >
> > Hi,
> >
> > I have created a test Ceph cluster with Ceph Octopus using cephadm.
On 2020-10-24 14:53, Amudhan P wrote:
> Hi,
>
> I have created a test Ceph cluster with Ceph Octopus using cephadm.
>
> The cluster's total raw disk capacity is 262 TB, but it only allows about
> 132 TB to be used.
> I have not set a quota on any of the pools. What could be the issue?
An imbalance? What does `ceph osd df` say?
Yes, there is an imbalance in the PGs assigned to the OSDs.
`ceph osd df` output snippet:
ID  CLASS  WEIGHT   REWEIGHT  SIZE     RAW USE  DATA     OMAP     META     AVAIL    %USE   VAR   PGS  STATUS
 0  hdd    5.45799   1.00000  5.5 TiB  3.6 TiB  3.6 TiB  9.7 MiB  4.6 GiB  1.9 TiB  65.94  1.31   13  up
 1
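If the uneven PG distribution is the cause, the built-in balancer in upmap mode
is the usual fix on Octopus; a sketch of the commands (check the current state
first, and note that upmap requires all clients to be Luminous or newer):

  # See whether the balancer is already enabled and in which mode
  ceph balancer status

  # Switch to upmap mode and turn the balancer on
  ceph balancer mode upmap
  ceph balancer on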
Hi, my cluster crashed when one of my DCs went down. 'ceph -s' does not
show me the current working status, and nothing has changed for a long
time. How can I see what Ceph is actually doing?
cluster:
health: HEALTH_ERR
mons fond-beagle,guided-tuna are using a lot of disk space
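A few ways to watch what Ceph is actually doing while it recovers (the mon name
below is taken from the health message above):

  # Follow cluster events and recovery activity live
  ceph -w

  # More detail on the HEALTH_ERR causes and current PG states
  ceph health detail
  ceph pg stat

  # If the mon stores have grown very large, compacting them can reclaim space
  ceph tell mon.fond-beagle compact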
Eneko and all,
Regarding my current BlueFS Spillover issues, I've just noticed in
https://docs.ceph.com/en/latest/rados/configuration/bluestore-config-ref/
that it says:
If there is only a small amount of fast storage available (e.g., less
than a gigabyte), we recommend using it as a WAL device.
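For context, a minimal sketch of how a small fast partition can be handed to an
OSD as a dedicated WAL with ceph-volume (device paths are placeholders; with
cephadm this would normally go into an OSD service spec instead):

  # Data on a slow HDD, WAL on a small fast NVMe partition
  ceph-volume lvm create --bluestore --data /dev/sdb --block.wal /dev/nvme0n1p1

If the fast device is large enough for the whole RocksDB, --block.db is used
instead and the WAL is placed there automatically.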