Re: [ceph-users] Experience with 5k RPM/archive HDDs

2017-02-19 Thread Wido den Hollander
> On 18 February 2017 at 17:03, rick stehno wrote:
>
>
> I work for Seagate and have done over a hundred tests using SMR 8 TB disks
> in a cluster. It all depends on what your access pattern is whether an SMR
> HDD would be the best choice. Remember, SMR HDDs don't perform well doing
> random writes, but are [...]
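For reference, a minimal sketch of the access-pattern point above: time sequential vs. random 4 KiB synchronous writes with plain Python. The mount point is a placeholder, and a real test would use fio against the raw device; this is only meant to illustrate the comparison.

import os
import random
import time

PATH = '/mnt/smr-disk/testfile'   # placeholder path on the disk under test
SIZE = 1 << 30                    # 1 GiB test file
BLOCK = 4096                      # 4 KiB writes
WRITES = 4096

buf = os.urandom(BLOCK)
fd = os.open(PATH, os.O_CREAT | os.O_WRONLY | os.O_DSYNC)
os.ftruncate(fd, SIZE)

for name, offsets in (
        ('sequential', [i * BLOCK for i in range(WRITES)]),
        ('random', [random.randrange(0, SIZE // BLOCK) * BLOCK
                    for _ in range(WRITES)])):
    start = time.time()
    for off in offsets:
        os.pwrite(fd, buf, off)   # O_DSYNC: each write goes to the disk
    print('%s: %.1f IOPS' % (name, WRITES / (time.time() - start)))

os.close(fd)

On a drive-managed SMR disk the random pass typically collapses once the drive's persistent cache fills, which is the behaviour being warned about.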

Re: [ceph-users] kraken-bluestore 11.2.0 memory leak issue

2017-02-19 Thread Shinobu Kinjo
Please open a ticket at http://tracker.ceph.com, if you haven't yet.

On Thu, Feb 16, 2017 at 6:07 PM, Muthusamy Muthiah wrote:
> Hi Wido,
>
> Thanks for the information; let us know if this is a bug.
> As a workaround we will go with a small bluestore_cache_size of 100 MB.
>
> Thanks,
> Muthu
>
> On [...]
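The workaround Muthu describes would look something like this in ceph.conf (the option takes bytes, so 104857600 is 100 MB; BlueStore option names changed between revisions, so check your release's documentation):

[osd]
bluestore_cache_size = 104857600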

Re: [ceph-users] Passing LUA script via python rados execute

2017-02-19 Thread Patrick Donnelly
On Sat, Feb 18, 2017 at 2:55 PM, Noah Watkins wrote:
> The least intrusive solution is to simply change the sandbox to allow
> the standard file system module loading function as expected. Then any
> user would need to make sure that every OSD had consistent versions of
> dependencies installed us [...]
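For context, a minimal sketch of executing an object class from the Python rados bindings, which is the path this thread is about. It assumes a pool named 'rbd', that the cls_lua class is available on the OSDs, and that its JSON entry point is 'eval_json' with a {script, handler, input} envelope, following the cls_lua examples; the exact names may differ in your build.

import json
import rados

# A Lua handler that runs inside the OSD and appends to the output buffer.
SCRIPT = """
function say_hello(input, output)
    output:append("hello from the OSD")
end
cls.register(say_hello)
"""

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('rbd')
    cmd = json.dumps({'script': SCRIPT, 'handler': 'say_hello', 'input': ''})
    # Ioctx.execute() ships the call to the OSD hosting 'test_obj'
    # and runs the named class method there.
    ret, out = ioctx.execute('test_obj', 'lua', 'eval_json',
                             cmd.encode('utf-8'))
    print(out.decode('utf-8'))
    ioctx.close()
finally:
    cluster.shutdown()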

[ceph-users] Rbd export-diff bug? rbd export-diff generates different incremental files

2017-02-19 Thread Zhongyan Gu
Hi Sage and Jason,

My company is building a backup system based on the rbd export-diff and
import-diff commands. However, in recent tests we found some strange
behaviour in export-diff. Long story short: sometimes, repeatedly executing
rbd export-diff --from-snap snap1 image@snap2 - | md5sum, and md5sum [...]
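A small reproduction sketch of the test described above: run the same export-diff several times and compare checksums (pool, image and snapshot names are placeholders):

import hashlib
import subprocess

CMD = ['rbd', 'export-diff', '--from-snap', 'snap1', 'rbd/image@snap2', '-']

def diff_md5():
    # Stream the incremental diff to memory and hash it.
    out = subprocess.run(CMD, check=True, capture_output=True).stdout
    return hashlib.md5(out).hexdigest()

digests = {diff_md5() for _ in range(10)}
print('stable' if len(digests) == 1 else 'unstable: %s' % sorted(digests))

If the export format is deterministic, every run should produce the same digest; differing digests across runs are what this thread reports.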

Re: [ceph-users] Rbd export-diff bug? rbd export-diff generates different incremental files

2017-02-19 Thread Zhongyan Gu
BTW, we used a Hammer version with the following fix; the issue had also been reported by us during earlier backup testing:
https://github.com/ceph/ceph/pull/12218/files
("librbd: diffs to clone's first snapshot should include parent diffs")

Zhongyan

On Mon, Feb 20, 2017 at 11:13 AM, Zhongyan Gu wrote: [...]

Re: [ceph-users] Jewel + kernel 4.4 Massive performance regression (-50%)

2017-02-19 Thread Christian Balzer
Hello,

On Thu, 16 Feb 2017 17:51:18 +0200 Kostis Fardelas wrote:
> Hello,
> we are on Debian Jessie and Hammer 0.94.9, and recently we decided to
> upgrade our kernel from 3.16 to 4.9 (jessie-backports). We experience
> the same regression, but with some bright spots.

Same OS, kernels and Ceph ver [...]

Re: [ceph-users] kraken-bluestore 11.2.0 memory leak issue

2017-02-19 Thread Jay Linux
Hello Shinobu,

We have already raised a ticket for this issue. FYI:
http://tracker.ceph.com/issues/18924

Thanks,
Jayaram

On Mon, Feb 20, 2017 at 12:36 AM, Shinobu Kinjo wrote:
> Please open a ticket at http://tracker.ceph.com, if you haven't yet.
>
> On Thu, Feb 16, 2017 at 6:07 PM, Muthusamy Muthiah [...]

[ceph-users] `ceph health` == HEALTH_GOOD_ENOUGH?

2017-02-19 Thread Tim Serong
Hi All, Pretend I'm about to upgrade from one Ceph release to another. I want to know that the cluster is healthy enough to sanely upgrade (MONs quorate, no OSDs actually on fire), but don't care about HEALTH_WARN issues like "too many PGs per OSD" or "crush map has legacy tunables". In this cas