[ceph-users] SSD-Cache Tier + RBD-Cache = Filesystem corruption?

2016-02-06 Thread Udo Waechter
Hello, I am experiencing totally weird filesystem corruption with the following setup: * Ceph infernalis on Debian 8 * 10 OSDs (5 hosts) with spinning disks * 4 OSDs (1 host, with SSDs). The SSDs are new in my setup and I am trying to set up a cache tier. Now, with the spinning disks Ceph is running …
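For reference, a writeback cache tier of the kind described is usually wired up roughly as follows. This is only a sketch, not Udo's actual commands; the pool names (rbd, ssd-cache) and all sizes are placeholders:

    # create the cache pool and attach it in front of the existing data pool
    ceph osd pool create ssd-cache 128 128
    ceph osd tier add rbd ssd-cache
    ceph osd tier cache-mode ssd-cache writeback
    ceph osd tier set-overlay rbd ssd-cache
    # writeback mode needs a hit set and a size target to know what to flush/evict
    ceph osd pool set ssd-cache hit_set_type bloom
    ceph osd pool set ssd-cache hit_set_count 1
    ceph osd pool set ssd-cache hit_set_period 3600
    ceph osd pool set ssd-cache target_max_bytes 100000000000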

Re: [ceph-users] Ceph and hadoop (fstab insted of CephFS)

2016-02-06 Thread Zoltan Arnold Nagy
Hi, Are these bare-metal nodes or VMs? For VMs I suggest you just attach rbd data disks and let hdfs do its magic. Just make sure you're not replicating 9x (3x on Ceph + 3x on Hadoop). If it's bare metal, you can just do the same with krbd; just make sure to run a recent enough kernel :-) Basically …
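The 9x point is that copies multiply across layers (Ceph copies × HDFS copies), so replication should live in exactly one of them. A sketch, assuming a pool named hdfs-data:

    # keep the 3 copies at the Ceph layer only
    ceph osd pool set hdfs-data size 3
    # ...and tell HDFS not to replicate again, in hdfs-site.xml:
    #   <property><name>dfs.replication</name><value>1</value></property>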

Re: [ceph-users] SSD-Cache Tier + RBD-Cache = Filesystem corruption?

2016-02-06 Thread Alexandre DERUMIER
>>Could it be that rbd caching + qemu writeback cache + ceph cache tier >>writeback are not playing well together? rbd cache=true is the same as qemu writeback: setting cache=writeback in qemu configures librbd with rbd cache=true. If you have fs corruption, it seems that flushes from the guest …
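In other words, the qemu cache mode and the librbd cache are one knob, not two stacked caches. A sketch of the usual pairing (the drive spec is a placeholder, not Alexandre's exact config):

    # qemu: cache=writeback on an rbd drive turns on librbd caching
    -drive format=raw,file=rbd:rbd/vm-disk,cache=writeback

    # ceph.conf, [client] section -- the equivalent librbd settings
    [client]
        rbd cache = true
        # stay write-through until the guest issues its first flush,
        # protecting guests whose drivers never send flushes
        rbd cache writethrough until flush = true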

Re: [ceph-users] Ceph and hadoop (fstab insted of CephFS)

2016-02-06 Thread Zoltan Arnold Nagy
Hi, Please keep the list on CC, as I guess others might be interested as well, if you don't mind. For VMs one can use rbd-backed block devices, and for bare-metal nodes, where there is no abstraction, one can use krbd - notice the k there, it stands for “kernel”. krbd is the in-kernel driver, as the …
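For the bare-metal case, using krbd looks roughly like this (pool, image, and mount point are hypothetical):

    # map the image through the in-kernel rbd driver
    rbd map hdfs-data/datanode01
    # the device shows up as /dev/rbdN; format and mount it for the datanode
    mkfs.xfs /dev/rbd0
    mount /dev/rbd0 /hadoop/dfs/data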

[ceph-users] CEPH health issues

2016-02-06 Thread Jeffrey McDonald
Hi, I'm seeing lots of issues with my CEPH installation. The health of the system is degraded and many of the OSDs are down. # ceph -v ceph version 0.94.5 (9764da52395923e0b32908d83a9f7304401fee43) # ceph health HEALTH_ERR 2002 pgs degraded; 14 pgs down; 180 pgs inconsistent; 14 pgs peering …
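A typical first pass at triaging a state like this (standard commands as of hammer 0.94.x; <pg-id> is a placeholder):

    ceph health detail        # lists the degraded/down/inconsistent PGs
    ceph osd tree             # shows which OSDs are down and where they live
    ceph pg dump_stuck inactive
    # once the down OSDs are back up, inconsistent PGs can be repaired per PG:
    ceph pg repair <pg-id>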

Re: [ceph-users] Ceph mirrors wanted!

2016-02-06 Thread Tyler Bishop
Covered except that the dreamhost mirror is constantly down or broken. I can add ceph.mirror.beyondhosting.net for it. Tyler Bishop Chief Technical Officer 513-299-7108 x10 tyler.bis...@beyondhosting.net

[ceph-users] can't get rid of stale+active+clean pgs by no means

2016-02-06 Thread Nikola Ciprich
Hi, I'm still struggling with health problems of my cluster. I still have 2 stale+active+clean pgs and one creating pg. I've just stopped all nodes and started them all again, and those pgs still remain. I think I've read all related discussions and docs, and tried virtually everything I thought …
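For stale PGs specifically, the usual next step is to find out which OSDs last carried them (a sketch; <pg-id> is a placeholder):

    ceph pg dump_stuck stale
    ceph pg map <pg-id>       # which OSDs the PG maps to now
    ceph pg <pg-id> query     # may hang if no OSD carries the PG anymore
    # last resort, only if the data is accepted as lost -- recreates the PG empty:
    ceph pg force_create_pg <pg-id>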

Re: [ceph-users] CEPH health issues

2016-02-06 Thread Tyler Bishop
You need to get your OSDs back online.
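Bringing an OSD back typically means (re)starting its daemon and watching recovery. A sketch for a hammer-era install (osd.12 is a placeholder; which init system applies depends on the distro):

    service ceph start osd.12       # sysvinit
    # or, on systemd hosts:
    systemctl start ceph-osd@12
    ceph -w                         # watch the PGs recover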