[ceph-users] Same SSD-Cache-Pool for multiple Spinning-Disks-Pools?

2016-02-03 Thread Udo Waechter
Hello everyone, I'm using ceph version 0.94.5 (9764da52395923e0b32908d83a9f7304401fee43) on Debian 8. I have now implemented an SSD cache tier (2 OSDs) for one of my pools. I am now wondering whether it is possible to use the same SSD pool as a cache tier for multiple pools, or do I need to create …
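A cache tier is attached per base pool, so a minimal sketch of the relationship, assuming hypothetical pool names sata-pool and ssd-cache (a given pool can act as the tier of only one base pool, which suggests a separate cache pool per backing pool):

  # attach ssd-cache as a writeback tier of sata-pool (hypothetical names)
  ceph osd tier add sata-pool ssd-cache
  ceph osd tier cache-mode ssd-cache writeback
  ceph osd tier set-overlay sata-pool ssd-cache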

[ceph-users] Upgrading with mon & osd on same host

2016-02-03 Thread Udo Waechter
Hi, I would like to upgrade my ceph cluster from hammer to infernalis. I'm reading in the upgrade notes that I need to upgrade & restart the monitors first, then the OSDs. Now, my cluster has OSDs and mons on the same hosts (I know that should not be the case, but it is :( ). I'm just wondering: …
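For what it's worth, co-located mons and OSDs are workable because the package upgrade and the daemon restarts are separate steps: upgrade the packages everywhere, restart all monitors across the cluster first, then restart the OSDs host by host. A sketch, assuming the sysvinit scripts that hammer on Debian 8 ships (service names are an assumption):

  # on every host: pull in the new packages, but restart nothing yet
  apt-get update && apt-get dist-upgrade
  # then, on each host in turn: restart the monitor and wait for quorum
  /etc/init.d/ceph restart mon
  ceph -s
  # only after all mons run the new version: restart the OSDs, host by host
  /etc/init.d/ceph restart osd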

[ceph-users] Adding Cache Tier breaks rbd access

2016-02-03 Thread Udo Waechter
Hello, I am experimenting with adding an SSD cache tier to my existing Ceph 0.94.5 cluster. Currently I have: 10 OSDs on 5 hosts (spinning disks) and 2 OSDs on 1 host (SSDs). I have followed the cache tier docs: http://docs.ceph.com/docs/master/rados/operations/cache-tiering/ First I created a new (sp…
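Presumably the first step was separating the SSDs in CRUSH; a sketch of one way to do that under hammer, assuming a CRUSH root named ssd and a pool named cache-pool (both names are assumptions):

  # a rule that places data only under the ssd root, per host
  ceph osd crush rule create-simple ssd-rule ssd host
  # create the cache pool and point it at that rule (hammer: crush_ruleset)
  ceph osd pool create cache-pool 128 128
  ceph osd pool set cache-pool crush_ruleset 1   # id from 'ceph osd crush rule dump ssd-rule'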

Re: [ceph-users] Adding Cache Tier breaks rbd access

2016-02-03 Thread Udo Waechter
Ah, I might have found the solution: https://www.mail-archive.com/ceph-users@lists.ceph.com/msg26441.html (add access to the cache tier for libvirt). I'll try that later. Talking about it sometimes really helps ;) Thanks, udo. On 02/03/2016 04:25 PM, Udo Waechter wrote: > Hello, …
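The linked fix boils down to the client's cephx caps: with a tier overlay in place, client IO is redirected into the cache pool, so the libvirt key needs rwx there as well. A sketch, assuming a client named libvirt and pools named rbd and rbd-cache (all names are assumptions):

  # extend the client caps to cover the cache pool too
  ceph auth caps client.libvirt mon 'allow r' osd 'allow rwx pool=rbd, allow rwx pool=rbd-cache'
  ceph auth get client.libvirt   # verify the updated caps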

[ceph-users] SSD-Cache Tier + RBD-Cache = Filesystem corruption?

2016-02-06 Thread Udo Waechter
Hello, I am experiencing totally weird filesystem corruptions with the following setup: * Ceph infernalis on Debian 8 * 10 OSDs (5 hosts) with spinning disks * 4 OSDs (1 host, with SSDs). The SSDs are new in my setup and I am trying to set up a cache tier. Now, with the spinning disks Ceph is running …
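One way to separate the two suspects (a diagnostic sketch, not from the original mail): disable the librbd writeback cache on the client and see whether the corruption persists with only the cache tier in play.

  # in ceph.conf on the hypervisor, then restart the guests
  [client]
  rbd cache = false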

Re: [ceph-users] SSD-Cache Tier + RBD-Cache = Filesystem corruption?

2016-02-10 Thread Udo Waechter
Hi, On 02/09/2016 03:46 PM, Jason Dillaman wrote: > What release of Infernalis are you running? When you encounter this error, is the partition table zeroed out or does it appear to be random corruption? It's ceph version 9.2.0 (bb2ecea240f3a1d525bcb35670cb07bd1f0ca299) and dpkg -l ceph: …

Re: [ceph-users] SSD-Cache Tier + RBD-Cache = Filesystem corruption?

2016-02-11 Thread Udo Waechter
On 02/10/2016 06:07 PM, Jason Dillaman wrote: > Can you provide the 'rbd info' dump from one of these corrupt images? Sure: rbd image 'ldap01.root.borked': size 20000 MB in 5000 objects, order 22 (4096 kB objects), block_name_prefix: rbd_data.18394b3d1b58ba, format: 2 …
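For context, a format-2 RBD image stores its data in RADOS objects named block_name_prefix plus a 16-digit hex index, and the partition table lives in object 0. A sketch for pulling that object directly, assuming the image lives in a pool named rbd (the pool name is an assumption):

  # first 4 MB object of the image (object index 0000000000000000)
  rados -p rbd stat rbd_data.18394b3d1b58ba.0000000000000000
  rados -p rbd get rbd_data.18394b3d1b58ba.0000000000000000 /tmp/obj0
  hexdump -C /tmp/obj0 | head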

Re: [ceph-users] SSD-Cache Tier + RBD-Cache = Filesystem corruption?

2016-02-17 Thread Udo Waechter
Hello, sorry for the delay, I was pretty busy otherwise. On 02/11/2016 03:13 PM, Jason Dillaman wrote: > Assuming the partition table is still zeroed on that image, can you run: > # rados -p get rbd_data.18394b3d1b58ba. - | cut -b 512 | hexdump Here's the hexdump: 000…

Re: [ceph-users] SSD-Cache Tier + RBD-Cache = Filesystem corruption?

2016-02-21 Thread Udo Waechter
Hi, On 02/18/2016 07:53 PM, Jason Dillaman wrote: > That's a pretty strange and seemingly non-random corruption of your first block. Is that object in the cache pool right now? If so, is the backing pool object just as corrupt as the cache pool's object? How do I see all that? Sorry, I'm …
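A sketch of how one might check both sides, assuming pools named rbd and rbd-cache and the image's first data object (object index is an assumption). Since reads against the base pool can be redirected through the tier, flushing and evicting the object first is one way to force a clean comparison:

  OBJ=rbd_data.18394b3d1b58ba.0000000000000000
  # is the object currently held in the cache pool?
  rados -p rbd-cache ls | grep $OBJ
  # flush and evict it so the backing pool's copy is authoritative
  rados -p rbd-cache cache-flush $OBJ
  rados -p rbd-cache cache-evict $OBJ
  # now read the backing pool's copy and compare
  rados -p rbd get $OBJ - | hexdump -C | head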