Re: [ceph-users] SSD Recovery Settings

2019-03-21 Thread Marc Roos
…-work-does-it-work/7936/2 -----Original Message----- From: Brent Kennedy Sent: 21 March 2019 02:21 To: 'Reed Dier' Cc: 'ceph-users' Subject: Re: [ceph-users] SSD Recovery Settings Lots of good info there, thank you! I tend to get options fatigue when trying to pick out a …

Re: [ceph-users] SSD Recovery Settings

2019-03-20 Thread Brent Kennedy
Lots of good info there, thank you! I tend to get options fatigue when trying to pick out a new system. This should help narrow that focus greatly. -Brent From: Reed Dier Sent: Wednesday, March 20, 2019 12:48 PM To: Brent Kennedy Cc: ceph-users Subject: Re: [ceph-users] SSD Recovery Settings …

Re: [ceph-users] SSD Recovery Settings

2019-03-20 Thread Reed Dier
Grafana is the web frontend for creating the graphs. InfluxDB holds the time series data that Grafana pulls from. To collect data, I am using collectd daemons run …
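
For readers piecing that stack together, here is a minimal sketch (not from the thread) of pulling the same time-series data back out of InfluxDB with Python. It assumes the influxdb-python client, collectd writing into a database named "collectd", and hypothetical measurement/field names ("ceph_value", "recovering_bytes_per_sec") that should be swapped for whatever your collectd Ceph plugin actually emits.

from influxdb import InfluxDBClient

# Assumed connection details: collectd writes into a database named "collectd".
client = InfluxDBClient(host="localhost", port=8086, database="collectd")

# Hypothetical measurement/field names -- replace with the series your
# collectd Ceph plugin actually writes (check with SHOW MEASUREMENTS).
query = (
    'SELECT mean("value") FROM "ceph_value" '
    "WHERE time > now() - 1h AND \"type_instance\" = 'recovering_bytes_per_sec' "
    "GROUP BY time(1m)"
)

# Print one averaged data point per minute for the last hour.
for point in client.query(query).get_points():
    print(point["time"], point["mean"])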

Re: [ceph-users] SSD Recovery Settings

2019-03-20 Thread Brent Kennedy
From: Reed Dier Sent: Wednesday, March 20, 2019 11:01 AM To: Brent Kennedy Cc: ceph-users Subject: Re: [ceph-users] SSD Recovery Settings Not sure what your OSD config looks like. When I was moving from Filestore to Bluestore on my SSD OSDs (and NVMe FS journal to NVMe Bluestore block.db), I had an issue …

Re: [ceph-users] SSD Recovery Settings

2019-03-20 Thread Reed Dier
Not sure what your OSD config looks like. When I was moving from Filestore to Bluestore on my SSD OSDs (and NVMe FS journal to NVMe Bluestore block.db), I had an issue where the OSD was incorrectly being reported as rotational in some part of the chain. Once I overcame that, I had a huge boost …
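
A minimal sketch of the kind of check Reed describes, assuming only that the ceph CLI is on PATH and the local keyring can run "ceph osd metadata": it flags any OSD whose metadata still reports a rotational device, which on an all-SSD/NVMe cluster usually means the rotational flag was misdetected somewhere in the chain.

import json
import subprocess

# Dump metadata for every OSD as JSON.
metadata = json.loads(
    subprocess.check_output(["ceph", "osd", "metadata", "--format", "json"])
)

for osd in metadata:
    # Collect every metadata key that mentions "rotational"
    # ("rotational", "journal_rotational", "bluestore_bdev_rotational", ...),
    # since the exact keys differ between Filestore and Bluestore OSDs.
    flags = {k: v for k, v in osd.items() if "rotational" in k}
    if any(str(v) == "1" for v in flags.values()):
        print("osd.%s reports a rotational device: %s" % (osd["id"], flags))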

Re: [ceph-users] SSD Recovery Settings

2019-03-19 Thread Konstantin Shalygin
I set up an SSD Luminous 12.2.11 cluster and realized after data had been added that pg_num was not set properly on the default.rgw.buckets.data pool (where all the data goes). I adjusted the settings up, but recovery is going really slow (like 56-110 MiB/s), ticking down at .002 per log entry …
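
For context on the tuning being asked about, here is a minimal sketch, not a recommendation from the thread, of raising the usual backfill/recovery throttles at runtime with injectargs. The option names exist in Luminous; the values are illustrative placeholders to be tuned (and reverted afterwards) for your own hardware.

import subprocess

# Illustrative values only -- adjust for your cluster and set them back
# once recovery/backfill finishes.
tunables = {
    "osd-max-backfills": "4",        # concurrent backfills per OSD
    "osd-recovery-max-active": "8",  # concurrent recovery ops per OSD
    "osd-recovery-sleep-ssd": "0",   # no artificial sleep between recovery ops on SSDs
}

# Equivalent to: ceph tell osd.* injectargs '--osd-max-backfills 4 ...'
args = " ".join("--%s %s" % (k, v) for k, v in tunables.items())
subprocess.run(["ceph", "tell", "osd.*", "injectargs", args], check=True)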