[ceph-users] Ceph 0.94.8 Hammer released

2016-08-26 Thread Sage Weil
This Hammer point release fixes several bugs. We recommend that all hammer v0.94.x users upgrade. For the changelog, please see http://docs.ceph.com/docs/master/release-notes/#v0-94-8-hammer

Getting Ceph
* Git at git://github.com/ceph/ceph.git
* Tarball at http://download.
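A minimal sketch of how such a point release is typically applied on a single node, assuming Debian/Ubuntu with the Ceph apt repository already configured (not part of the announcement itself):

    # Pull the updated Hammer packages from the configured Ceph repository
    sudo apt-get update
    sudo apt-get install -y ceph ceph-common

    # Verify the node now reports the new point release
    ceph --version      # expect: ceph version 0.94.8 (...)

    # After restarting daemons (monitors first, then OSDs), check cluster health
    ceph -s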

Re: [ceph-users] Storcium has been certified by VMWare

2016-08-26 Thread Nick Fisk
Well done Alex, I know the challenges you have worked through to attain this.

Re: [ceph-users] osds udev rules not triggered on reboot (jewel, jessie)

2016-08-26 Thread Antoine Mahul
Hi, we have the same issue on CentOS 7.2.1511 and Ceph 10.2.2: sometimes the ceph-disk@ services are not started and the OSD daemons fail to come up. With udev in debug mode, we observe that the udev triggers are fired but fail because /var (on LVM) is not ready yet. In ceph-disk, the setup_statedir function is ca
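A minimal sketch of one possible workaround for this kind of ordering problem, assuming systemd and that it is acceptable to delay ceph-disk@ until /var (var.mount) is active; this is not taken from the thread itself:

    # Drop-in that makes every ceph-disk@ instance wait for /var to be mounted
    sudo mkdir -p /etc/systemd/system/ceph-disk@.service.d
    cat <<'EOF' | sudo tee /etc/systemd/system/ceph-disk@.service.d/wait-for-var.conf
    [Unit]
    Requires=var.mount
    After=var.mount
    EOF
    sudo systemctl daemon-reload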

[ceph-users] Storcium has been certified by VMWare

2016-08-26 Thread Alex Gorbachev
I wanted to share that we have passed testing and received VMWare HCL certification for the ISS STORCIUM solution using Ceph Hammer as back end and SCST with Pacemaker as iSCSI delivery HA gateway. Thank you for all of your hard and continuous work on these projects. We will make sure that we cont

Re: [ceph-users] Corrupt full osdmap on RBD Kernel Image mount (Jewel 10.2.2)

2016-08-26 Thread Ilya Dryomov
On Wed, Aug 24, 2016 at 5:17 PM, Ivan Grcic wrote: > Hi Ilya, > > there you go, and thank you for your time. > > BTW, should one get a crush map from an osdmap by doing something like this: > > osdmaptool --export-crush /tmp/crushmap /tmp/osdmap > crushtool -c crushmap -o crushmap.3518 Yes. You can also
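For reference, a short sketch of the osdmap/CRUSH round-trip being discussed (file names are just placeholders):

    # Save the cluster's current osdmap, then pull the CRUSH map out of it
    ceph osd getmap -o /tmp/osdmap
    osdmaptool /tmp/osdmap --export-crush /tmp/crushmap

    # Decompile to text, edit if needed, and recompile
    crushtool -d /tmp/crushmap -o /tmp/crushmap.txt
    crushtool -c /tmp/crushmap.txt -o /tmp/crushmap.new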

Re: [ceph-users] mounting a VM rbd image as a /dev/rbd0 device

2016-08-26 Thread Jason Dillaman
If there is a partition table on the device, you need to get Linux to scan the partition table and build the sub-devices. Try running "kpartx -a /dev/rbd0" to create the devices. Since you have LVM on the second partition, ensure that it is configured to not filter out the new partition device and
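A minimal sketch of the sequence Jason is describing, assuming the image has a partition table with an LVM PV on the second partition (pool, image, and volume group names are hypothetical):

    # Map the image, then let kpartx create device nodes for its partitions
    sudo rbd map mypool/myimage          # -> /dev/rbd0
    sudo kpartx -a /dev/rbd0             # -> /dev/mapper/rbd0p1, /dev/mapper/rbd0p2, ...

    # Have LVM pick up the PV on the second partition and activate its VG
    sudo pvscan --cache
    sudo vgchange -ay myvg
    sudo mount /dev/myvg/mylv /mnt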

Re: [ceph-users] ONE pg deep-scrub blocks cluster

2016-08-26 Thread Mehmet
Hello JC, as promised here are my
- ceph.conf (I have done a "diff" on all involved servers - all are using the same ceph.conf) = ceph_conf.txt
- ceph pg 0.223 query = ceph_pg_0223_query_20161236.txt
- ceph -s = ceph_s.txt
- ceph df = ceph_df.txt
- ceph osd df = ceph_osd_df.txt
- ceph osd dump | gre
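The attachments map onto standard status commands; a short sketch of how such a diagnostic bundle is typically collected (output file names here are hypothetical):

    ceph pg 0.223 query > ceph_pg_0223_query.txt
    ceph -s             > ceph_s.txt
    ceph df             > ceph_df.txt
    ceph osd df         > ceph_osd_df.txt
    # the list above also captures a filtered "ceph osd dump" (pipe truncated in the preview)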

Re: [ceph-users] Antw: Re: Best practices for extending a ceph cluster with minimal client impact data movement

2016-08-26 Thread Wido den Hollander
> On 25 August 2016 at 12:14, Steffen Weißgerber wrote: > > Hi, > >>> Wido den Hollander wrote on Tuesday, 9 August 2016 at 10:05: > >> On 8 August 2016 at 16:45, Martin Palma wrote: > >> > >> Hi all, > >> > >> we are in the process of expanding our

Re: [ceph-users] mounting a VM rbd image as a /dev/rbd0 device

2016-08-26 Thread Wido den Hollander
> On 25 August 2016 at 19:31, "Deneau, Tom" wrote: > > If I have an rbd image that is being used by a VM and I want to mount it as a read-only /dev/rbd0 kernel device, is that possible? > > When I try it I get: > > mount: /dev/rbd0 is write-protected, mounting read-only > mount: wrong
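A minimal sketch of mapping and mounting an in-use image strictly read-only (pool/image names are hypothetical; whether the filesystem then mounts cleanly still depends on what is inside the image):

    # Map the image read-only so the kernel client cannot write to it
    sudo rbd map --read-only mypool/vmimage     # -> /dev/rbd0

    # If the image holds a partition table, mount a partition, not the whole device
    sudo kpartx -a /dev/rbd0
    sudo mount -o ro /dev/mapper/rbd0p1 /mnt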

Re: [ceph-users] Vote for OpenStack Talks!

2016-08-26 Thread M Ranga Swami Reddy
Thank you very much for voting... My presentation has been accepted for inclusion in the OpenStack Summit in Barcelona (25th Oct 2016 @ 05:05 - 05:45pm). Thanks Swami On Sun, Jul 31, 2016 at 1:36 PM, M Ranga Swami Reddy wrote: > Please vote for my presentation (search "swami reddy") > https://w