… (google fio example commands). Also check the south bridge / RAID controller to disk path for ops and latency.
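Something along these lines is a reasonable starting point (paths and sizes here are only placeholders; point it at a scratch file or an idle disk, never at a live OSD device):

# 4k random-write latency test, direct I/O, 60 seconds
fio --name=lat-test --filename=/mnt/scratch/fio.test --size=4G \
    --rw=randwrite --bs=4k --iodepth=32 --ioengine=libaio --direct=1 \
    --runtime=60 --time_based --group_reporting

Compare the completion-latency percentiles fio reports against what iostat shows for the suspect disks.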
-Edward Kalk
Datacenter Virtualization
Performance Engineering
Socket Telecom
Columbia, MO, USA
ek...@socket.net
> On Nov 12, 2020, at 4:45 AM, athreyavc wrote:
>
> Jumbo frames enabled and MT
size of hard disks (OSDs)?
quantity of disks (OSDs) per server?
quantity of servers?
SSDs or spinners (OSDs)?
quantity of pools?
are all pools on all disks?
quantity of PGs? PGPs? (per pool)
paste of ceph.conf variables?
was this a clean install, or upgrade? (previous version(s)?)
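Most of that can be gathered with something like the following (assuming admin access on a mon/admin node; exact output varies a bit by release):

ceph -s                        # overall health, mon/osd counts
ceph osd tree                  # hosts, OSDs per host, device class (hdd/ssd)
ceph osd df                    # OSD sizes and utilization
ceph osd pool ls detail        # pools, pg_num / pgp_num, crush rule per pool
ceph osd crush rule dump       # which rules (and thus which disks) the pools use
ceph versions                  # running daemon versions
cat /etc/ceph/ceph.conf        # config overrides in play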
-Ed
> On Aug
been reading up on the Seagate Constellation ES SED; I don't see anything saying that can be done. I plan to swap one with a spare non-SED I have next week to
see if perf goes back to normal.
-Ed
> On Aug 8, 2020, at 5:56 AM, Marc Roos wrote:
>
>
> Maybe too obvious a suggestion, but what about disabling
I'm getting poor performance with 5 of my OSDs, Seagate Constellation ES SED (1)
10k SAS 2TB 3.5" drives.
Disk write latency keeps drifting high, 100ms-230ms on writes. The other 30 OSDs
are performing well, avg latency 10-20ms.
We observe stats via “iostat -xtc 2” on the Ceph server. w_await showing le
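A quick cross-check from the Ceph side is something like the following (the OSD id is only a placeholder; pick one of the slow ones):

ceph osd perf            # commit/apply latency (ms) per OSD
ceph osd find 12         # map a slow OSD id back to its host
ceph osd metadata 12     # device, host, and backend details for that OSD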
> monitors - for high availability in case a monitor goes down.
>
> This is all explained quite well in the architecture documentation:
> https://docs.ceph.com/docs/master/architecture/#cluster-map.
>
> Regards,
> G.
>
>> On Mon, Aug 3, 2020, at 2:01 PM, Edward Kalk wrote:
The metadata that tells Ceph where all data is located is, to my understanding, the
CRUSH map. Where is it stored? Is it redundantly distributed so as to protect
from node failure? What safeguards the critical cluster metadata?
-Ed
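For anyone who wants to look at those maps directly, the standard CLI can dump them (a rough sketch; output format varies by release):

ceph mon dump                        # monitor map: the mons that hold the cluster map
ceph quorum_status                   # which mons are currently in quorum
ceph osd getcrushmap -o crush.bin    # grab the compiled CRUSH map
crushtool -d crush.bin -o crush.txt  # decompile it to readable text
ceph osd dump | head                 # OSD map summary (pools, flags, epochs)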