Hi All,
Pretend I'm about to upgrade from one Ceph release to another. I want
to know that the cluster is healthy enough to sanely upgrade (MONs
quorate, no OSDs actually on fire), but don't care about HEALTH_WARN
issues like "too many PGs per OSD" or "crush map has legacy tunables".
In this case
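The message is cut off above, but the ask is scriptable. Below is a
minimal sketch, assuming the Luminous-style JSON layout where "ceph
status -f json" reports named checks under health.checks (pre-Luminous
releases expose a health.summary list instead); the ignore set here is
a hypothetical example, not an official whitelist:

    #!/usr/bin/env python
    # Sketch: gate an upgrade on cluster health while ignoring selected
    # HEALTH_WARN checks. ASSUMES the Luminous-era JSON layout where
    # "ceph status -f json" lists named checks under health.checks.
    import json
    import subprocess
    import sys

    # Hypothetical ignore set -- adjust to the warnings you accept.
    IGNORE = {"TOO_MANY_PGS", "OLD_CRUSH_TUNABLES"}

    out = subprocess.check_output(["ceph", "status", "-f", "json"])
    checks = json.loads(out.decode("utf-8")).get("health", {}).get("checks", {})

    # health.checks only lists non-OK checks, so anything left after
    # filtering the ignore set should block the upgrade.
    blocking = {name: c for name, c in checks.items() if name not in IGNORE}
    if blocking:
        for name, c in sorted(blocking.items()):
            print("%s: %s" % (name, c.get("summary", {}).get("message", "")))
        sys.exit(1)
    print("Healthy enough to upgrade.")

The exit status can then gate whatever upgrade automation runs next.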
Hello Shinobu,
We already raised a ticket for this issue. FYI -
http://tracker.ceph.com/issues/18924
Thanks
Jayaram
On Mon, Feb 20, 2017 at 12:36 AM, Shinobu Kinjo wrote:
> Please open a ticket at http://tracker.ceph.com if you haven't yet.
>
> On Thu, Feb 16, 2017 at 6:07 PM, Muthusamy Muthiah
Hello,
On Thu, 16 Feb 2017 17:51:18 +0200 Kostis Fardelas wrote:
> Hello,
> we are on Debian Jessie with Hammer 0.94.9, and recently we decided to
> upgrade our kernel from 3.16 to 4.9 (jessie-backports). We see the
> same regression, but with some bright spots
Same OS, kernels, and Ceph version
BTW, we used a hammer version with the following fix; we also reported
this issue during our earlier backup testing.
https://github.com/ceph/ceph/pull/12218/files
librbd: diffs to clone's first snapshot should include parent diffs
Zhongyan
On Mon, Feb 20, 2017 at 11:13 AM, Zhongyan Gu wrote:
Hi Sage and Jason,
My company is building a backup system based on the rbd export-diff and
import-diff commands.
However, in a recent test we found some strange behavior in export-diff.
Long story short: repeatedly executing rbd export-diff
--from-snap snap1 image@snap2 - | md5sum sometimes produces a different
md5sum on each run
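A quick way to check for that inconsistency is to hash the diff stream
several times and compare digests. A minimal sketch along those lines;
"image", "snap1", and "snap2" are placeholder names taken from the
command quoted above:

    #!/usr/bin/env python
    # Sketch: run "rbd export-diff" repeatedly and compare md5 digests
    # of the stream; a deterministic diff should hash identically every
    # time. Image and snapshot names are placeholders.
    import hashlib
    import subprocess

    CMD = ["rbd", "export-diff", "--from-snap", "snap1", "image@snap2", "-"]

    def stream_md5(cmd):
        # Hash the stream in 1 MB chunks so large diffs never sit in memory.
        proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
        h = hashlib.md5()
        for chunk in iter(lambda: proc.stdout.read(1 << 20), b""):
            h.update(chunk)
        proc.wait()
        return h.hexdigest()

    digests = set(stream_md5(CMD) for _ in range(5))
    if len(digests) > 1:
        print("export-diff is NOT deterministic: %s" % sorted(digests))
    else:
        print("stable digest: %s" % digests.pop())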
On Sat, Feb 18, 2017 at 2:55 PM, Noah Watkins wrote:
> The least intrusive solution is simply to change the sandbox to allow
> the standard filesystem module-loading function to work as expected.
> Then any user would need to make sure that every OSD had consistent
> versions of the dependencies installed us
Please open a ticket at http://tracker.ceph.com if you haven't yet.
On Thu, Feb 16, 2017 at 6:07 PM, Muthusamy Muthiah wrote:
> Hi Wido,
>
> Thanks for the information; please let us know if this turns out to be a bug.
> As a workaround, we will set bluestore_cache_size to a small value (100 MB).
>
> Thanks,
> Muthu
>
> On
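For reference, a sketch of what that workaround would look like in
ceph.conf on the OSD hosts, assuming bluestore_cache_size takes a byte
count (104857600 bytes = 100 MB):

    [osd]
    # Workaround sketch: shrink the BlueStore cache to roughly 100 MB.
    # Assumes bluestore_cache_size is expressed in bytes.
    bluestore_cache_size = 104857600

The OSDs would need a restart for the new cache size to take effect.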
> On 18 February 2017 at 17:03, rick stehno wrote:
>
>
> I work for Seagate and have done over a hundred tests using 8 TB SMR disks
> in a cluster. Whether SMR HDDs are the best choice depends entirely on your
> access pattern. Remember that SMR HDDs don't perform well on random writes,
> but are