[ceph-users] Re: Ceph RBD - High IOWait during the Writes

2020-11-12 Thread Edward kalk
. (google fio example commands) ^south bridge +raid controller to disks ops and latency. -Edward Kalk Datacenter Virtualization Performance Engineering Socket Telecom Columbia, MO, USA ek...@socket.net > On Nov 12, 2020, at 4:45 AM, athreyavc wrote: > > Jumbo frames enabled and MT
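A minimal fio invocation of the kind alluded to above, for measuring raw write latency on a single disk (a sketch; it assumes the libaio engine is installed and that /dev/sdX is a device you can safely overwrite):

   fio --name=writelat --filename=/dev/sdX --ioengine=libaio --direct=1 \
       --rw=randwrite --bs=4k --iodepth=1 --numjobs=1 \
       --runtime=60 --time_based --group_reporting

The clat (completion latency) figures in the output are the per-write latencies to compare against what Ceph reports.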

[ceph-users] Re: OSD Crash, high RAM usage

2020-08-23 Thread Edward kalk
Size of hard disks (OSDs)? Quantity of disks (OSDs) per server? Quantity of servers? SSDs or spinners (OSDs)? Quantity of pools? Are all pools on all disks? Quantity of PGs? PGPs? (per pool) Paste of ceph.conf variables? Was this a clean install, or an upgrade? (previous version(s)?) -Ed > On Aug
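Most of those details can be collected directly from the cluster; a rough checklist, assuming admin access from a monitor or client node:

   ceph osd tree              # servers, OSDs per host, device class (hdd/ssd)
   ceph osd df                # OSD sizes and utilization
   ceph df                    # pools and usage
   ceph osd pool ls detail    # pg_num / pgp_num and flags per pool
   ceph versions              # running daemon versions (clean install vs. upgrade)
   cat /etc/ceph/ceph.conf    # locally set configuration variables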

[ceph-users] Re: SED drives, poor performance

2020-08-08 Thread Edward kalk
been reading up on the Seagate Constellation ES SED, don’t see anything saying that can be done. I plan to swap one with a spare non-SED I have next week to see if perf goes normal. -Ed > On Aug 8, 2020, at 5:56 AM, Marc Roos wrote: > >  > Maybe to obvious suggestion, but what about disablin
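A quick way to compare the suspect SED-backed OSDs against the rest, before and after the swap (assuming the OSD IDs behind the SED drives are known):

   ceph osd perf              # commit/apply latency per OSD, in ms
   ceph osd metadata <osd-id> # host, device node and drive model behind a given OSD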

[ceph-users] SED drives, poor performance

2020-08-08 Thread Edward kalk
I'm getting poor performance with 5 of my OSDs, Seagate Constellation ES SED (1) 10k SAS 2TB 3.5 drives. Disk write latency keeps drifting high, 100ms-230ms on writes. The other 30 OSDs are performing well, avg latency 10-20ms. We observe stats via “iostat -xtc 2” on the Ceph server. w_await showing
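For reference, w_await in that output is the average time (ms) a write request spends queued plus being serviced; limiting iostat to the suspect devices keeps the output readable (device names here are placeholders):

   iostat -xtc sdb sdc 2      # extended stats with timestamps, every 2 seconds, only sdb/sdc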

[ceph-users] Re: Crush Map and CEPH meta data locations

2020-08-04 Thread Edward kalk
le > monitors - for high availability in case a monitor goes down. > > This is all explained quite well in the architecture documentation: > https://docs.ceph.com/docs/master/architecture/#cluster-map. > > Regards, > G. > >> On Mon, Aug 3, 2020, at 2:01 PM, Edward kal
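To see the monitors that hold and replicate those cluster maps (commands assume a working admin keyring):

   ceph mon dump              # monitor membership, addresses and monmap epoch
   ceph quorum_status         # which monitors are currently in quorum
   ceph status                # overall health plus current map epochs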

[ceph-users] Crush Map and CEPH meta data locations

2020-08-03 Thread Edward kalk
The metadata that tells Ceph where all data is located is, to my understanding, the CRUSH map. Where is it stored, and is it redundantly distributed so as to protect against node failure? What safeguards this critical cluster metadata? -Ed
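The CRUSH map itself can be exported from the monitors and decompiled for inspection; a short sketch, with the output filenames chosen arbitrarily:

   ceph osd getcrushmap -o crushmap.bin        # fetch the compiled CRUSH map
   crushtool -d crushmap.bin -o crushmap.txt   # decompile it into readable text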