On 9 October 2017 at 19:21, Jake Grimmett wrote:
> HEALTH_WARN 9 clients failing to advance oldest client/flush tid;
> 1 MDSs report slow requests; 1 MDSs behind on trimming
On a proof-of-concept 12.2.1 cluster (a few random files added, 30 OSDs,
default Ceph settings) I can get the above error by …
Hi John,
Many thanks for getting back to me.
Yes, I did see the "experimental" label on snapshots...
After reading other posts, I got the impression that cephfs snapshots
might be OK, provided you used a single active MDS and the latest ceph
fuse client, both of which we have.
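
For reference, something along these lines should confirm that setup; the
filesystem name "cephfs", the mount point /mnt/cephfs and the mds <id> below
are placeholders for our actual values, and the "ceph daemon" call has to be
run on the node hosting the MDS:

# ceph fs get cephfs | grep max_mds
    (expect max_mds 1 when only a single MDS is meant to be active)
# ceph status
    (the mds line should show a single daemon in up:active)
# ceph daemon mds.<id> session ls
    (client_metadata reports the ceph_version of each mounted client)

Snapshots themselves are just directories created under .snap on the mounted
filesystem (once allow_new_snaps is enabled on the fs), e.g.:

# mkdir /mnt/cephfs/.snap/before_rsync
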
Anyhow as you pre…
On Mon, Oct 9, 2017 at 9:21 AM, Jake Grimmett wrote:
> Dear All,
>
> We have a new cluster based on v12.2.1
>
> After three days of copying 300TB of data into cephfs,
> we have started getting the following Health errors:
>
> # ceph health
> HEALTH_WARN 9 clients failing to advance oldest client/flush tid;
> 1 MDSs report slow requests; 1 MDSs behind on trimming
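
Commands along these lines should show which clients and which MDS are
involved; <id> stands for the active MDS daemon name, and the "ceph daemon"
calls need to be run on the node hosting it:

# ceph health detail
    (lists the individual clients failing to advance oldest client/flush tid)
# ceph daemon mds.<id> session ls
    (maps those client ids to hostnames / mount points via client_metadata)
# ceph daemon mds.<id> dump_ops_in_flight
    (shows the slow requests the MDS is reporting)
# ceph daemon mds.<id> config get mds_log_max_segments
    (the "behind on trimming" warning relates to this journal setting)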