Re: [ceph-users] Replacing a defective OSD

2016-09-07 Thread Ronny Aasen
On 07. sep. 2016 02:51, Vlad Blando wrote: Hi, I replaced a failed OSD and was trying to add it back to the pool; my problem is that the OS is not detecting the physical disk. It looks like I need to initialize it via the hardware RAID before I can see it on the OS. If I'm going to restart the said
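
A minimal sketch of getting the OS to see a freshly created RAID virtual disk without a reboot (the SCSI host paths here are assumptions, and with most hardware RAID controllers the replacement drive must first be turned into a virtual disk in the controller's own tool):

    # Force a rescan of every SCSI host so the new virtual disk shows up
    for host in /sys/class/scsi_host/host*; do
        echo "- - -" > "$host/scan"
    done
    lsblk   # the new disk should now appear, e.g. as /dev/sdX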

Re: [ceph-users] Raw data size used seems incorrect (version Jewel, 10.2.2)

2016-09-07 Thread David
Could be related to this? http://tracker.ceph.com/issues/13844 On Wed, Sep 7, 2016 at 7:40 AM, james wrote: > Hi, > > Not sure if anyone can help clarify or provide any suggestions on how to > troubleshoot this > > We have a Ceph cluster recently built with Ceph Jewel (10.2.2). > Based
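
For a replicated pool, raw usage should be roughly the pool's logical USED multiplied by its replica size; a quick hedged sanity check (pool name is a placeholder):

    ceph df detail                   # per-pool logical USED vs. global RAW USED
    rados df                         # per-pool object and byte counts
    ceph osd pool get <pool> size    # raw used should be about USED x size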

[ceph-users] Jewel 10.2.2 - Error when flushing journal

2016-09-07 Thread Mehmet
Hello Ceph people, yesterday I stopped one of my OSDs via root@:~# systemctl stop ceph-osd@10 and tried to flush the journal for this OSD via root@:~# ceph-osd -i 10 --flush-journal but got this output on the screen: SG_IO: questionable sense data, results may be incorrect SG_IO: questio
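
For reference, a hedged sketch of the usual journal flush/recreate cycle on Jewel (OSD id 10 as in the post; verify where the journal symlink points before recreating anything):

    systemctl stop ceph-osd@10
    ceph-osd -i 10 --flush-journal   # replay journaled ops back into the filestore
    # repoint /var/lib/ceph/osd/ceph-10/journal here if moving to a new device
    ceph-osd -i 10 --mkjournal       # initialize the (new) journal
    systemctl start ceph-osd@10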

[ceph-users] experiences in upgrading Infernalis to Jewel

2016-09-07 Thread felderm
Hi All, We are preparing an upgrade from Ceph Infernalis 9.2.0 to Ceph Jewel 10.2.2. Based on the upgrade procedure documentation http://docs.ceph.com/docs/jewel/install/upgrading-ceph/ it sounds easy. But things often fail when you think they're easy. Therefore I would like to know your opinion on the fol

Re: [ceph-users] experiences in upgrading Infernalis to Jewel

2016-09-07 Thread Alexandre DERUMIER
Hi, I think it's simpler to 1) change the repository 2) apt-get dist-upgrade 3) restart the mon on each node 4) restart the OSDs on each node, done. I have upgraded 4 clusters like this without any problem. I have never used ceph-deploy for upgrades.
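
As a hedged sketch of those four steps on Debian/Ubuntu (the repo file name and paths are assumptions; mons first, one node at a time, then OSDs):

    sed -i 's/infernalis/jewel/' /etc/apt/sources.list.d/ceph.list  # 1) switch the repo
    apt-get update && apt-get dist-upgrade                          # 2) upgrade packages
    systemctl restart ceph-mon@$(hostname -s)                       # 3) restart the mon
    systemctl restart ceph-osd@<id>                                 # 4) then each OSD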

Re: [ceph-users] 2 osd failures

2016-09-07 Thread Shain Miley
Well, not entirely too late I guess :-( I woke up this morning to see that two OTHER OSDs had been marked down and out. I again restarted the OSD daemons and things seem to be OK at this point. I agree that I need to get to the bottom of why this happened. I have uploaded the log files from

Re: [ceph-users] radosgw error in its log rgw_bucket_sync_user_stats()

2016-09-07 Thread Arvydas Opulskis
Hi, just in case someone experiences the same problem: the only thing that helped was a restart of the gateway. Only after I restarted it was I able to create that bucket without "access denied" errors on other operations. It seems RGW had some stale data cached. Arvydas On Tue, Sep 6, 2016 at 6:10 PM
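
For anyone hitting the same thing, the restart is just the systemd unit (the instance name after "rgw." depends on how the gateway was deployed; check with systemctl list-units 'ceph-radosgw*'):

    systemctl restart ceph-radosgw@rgw.$(hostname -s)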

[ceph-users] NFS gateway

2016-09-07 Thread jan hugo prins
Hi, One of the use cases I'm currently testing is the possibility of replacing an NFS storage cluster with a Ceph cluster. The idea I have is to use a server as an intermediate gateway. On the client side it will expose an NFS share, and on the Ceph side it will mount CephFS using mount.ceph. The
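
A minimal sketch of that gateway setup with the kernel client and knfsd (monitor address, client network and secret file are placeholders; fsid= is commonly needed in the export because CephFS has no device UUID for knfsd to identify it by):

    mount -t ceph mon1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
    echo '/mnt/cephfs 10.0.0.0/24(rw,sync,no_subtree_check,fsid=1)' >> /etc/exports
    exportfs -ra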

[ceph-users] PGs lost from cephfs data pool, how to determine which files to restore from backup?

2016-09-07 Thread Michael Sudnick
I've had to force recreate some PGs on my cephfs data pool due to some cascading disk failures in my homelab cluster. Is there a way to easily determine which files I need to restore from backup? My metadata pool is completely intact. Thanks for any help and suggestions. Sincerely, Michael
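
One hedged approach, since the metadata pool (and hence every path and inode) is intact: a file's data lives in objects named <inode-hex>.<block>, so you can map each file's objects to their PGs and flag the files whose PGs were recreated (mount point and pool name are placeholders; large files span many such objects, this only checks the first 4MB one):

    ino_hex=$(printf '%x' "$(stat -c %i /mnt/cephfs/some/file)")  # hypothetical path
    ceph osd map cephfs_data "${ino_hex}.00000000"                # PG of the first object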

Re: [ceph-users] NFS gateway

2016-09-07 Thread Sean Redmond
Have you seen this: https://github.com/nfs-ganesha/nfs-ganesha/wiki/Fsalsupport#CEPH On Wed, Sep 7, 2016 at 3:30 PM, jan hugo prins wrote: > Hi, > > One of the use cases I'm currently testing is the possibility of replacing > an NFS storage cluster with a Ceph cluster. > > The idea I have is to
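
A minimal FSAL_CEPH export block for /etc/ganesha/ganesha.conf might look like this (hedged; exact option names vary between Ganesha versions, so check the wiki above for yours):

    EXPORT {
        Export_ID = 1;
        Path = "/";           # CephFS path to export
        Pseudo = "/cephfs";   # NFSv4 pseudo path that clients mount
        Access_Type = RW;
        FSAL { Name = CEPH; } # talk to CephFS via libcephfs, no local mount needed
    }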

[ceph-users] configuring cluster handle in python rados exits with error NoneType is not callable

2016-09-07 Thread Martin Hoffmann
I want to access Ceph cluster RBD images via the Python interface. In a simple standalone Python script this works without problems. However, I want to create a plugin for Bareos backup, where this does not work and configuring the cluster handle exits with an error: cluster = rados.Rados(rados_id="admin",clustername="
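
One way to narrow this down is to run the same initialization standalone with an explicit conffile; if this works but the Bareos plugin still fails, the problem is the embedded interpreter's environment rather than python-rados itself (rados_id and path as in the post):

    # prints the cluster fsid if the handle was configured and connected correctly
    python -c 'import rados; c = rados.Rados(rados_id="admin", clustername="ceph", conffile="/etc/ceph/ceph.conf"); c.connect(); print(c.get_fsid()); c.shutdown()'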

Re: [ceph-users] Jewel 10.2.2 - Error when flushing journal

2016-09-07 Thread Mehmet
Hey again, now I have stopped my osd.12 via root@:~# systemctl stop ceph-osd@12 and when I flush the journal... root@:~# ceph-osd -i 12 --flush-journal SG_IO: questionable sense data, results may be incorrect SG_IO: questionable sense data, results may be incorrect *** Caught signal (Segmen

Re: [ceph-users] Changing Replication count

2016-09-07 Thread Vlad Blando
Thanks /vlad On Wed, Sep 7, 2016 at 9:47 AM, LOPEZ Jean-Charles wrote: > Hi, > > the stray replicas will be automatically removed in the background. > > JC > > On Sep 6, 2016, at 17:58, Vlad Blando wrote: > > Sorry about that > > It's all set now, I thought that was the replica count as it is also
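
For the archives, the replica count is a pool-level setting and the cluster creates or removes replicas on its own afterwards (pool name is a placeholder):

    ceph osd pool set <pool> size 3       # target replica count
    ceph osd pool set <pool> min_size 2   # writes still allowed with this many
    ceph -s                               # watch recovery adjust the replicas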

Re: [ceph-users] Is rados_write_op_* any more efficient than issuing the commands individually?

2016-09-07 Thread Josh Durgin
On 09/06/2016 10:16 PM, Dan Jakubiec wrote: Hello, I need to issue the following commands on millions of objects: rados_write_full(oid1, ...) rados_setxattr(oid1, "attr1", ...) rados_setxattr(oid1, "attr2", ...) Would it make it any faster if I combined all 3 of these into a single

Re: [ceph-users] NFS gateway

2016-09-07 Thread David
I have clients accessing CephFS over NFS (kernel NFS). I was seeing slow writes with sync exports. I haven't had a chance to investigate, and in the meantime I'm exporting with async (not recommended, but acceptable in my environment). I've been meaning to test out Ganesha for a while now. @Sean, h

Re: [ceph-users] NFS gateway

2016-09-07 Thread John Spray
On Wed, Sep 7, 2016 at 3:30 PM, jan hugo prins wrote: > Hi, > > One of the use-cases I'm currently testing is the possibility to replace > a NFS storage cluster using a Ceph cluster. > > The idea I have is to use a server as an intermediate gateway. On the > client side it will expose a NFS share

Re: [ceph-users] RFQ for Flowjo

2016-09-07 Thread Mike Jacobacci
Hey Henry, Wait a sec… That R101 is for our Nagios node, not a part of the Ceph monitoring nodes. So both the R133 and the one R101 should have redundant power supplies. Make sense? Cheers, Mike > On Sep 7, 2016, at 10:55 AM, Henry Figueroa > wrote: > > Mike > The monitoring node (R101.v6) do

[ceph-users] OpenStack Barcelona discount code

2016-09-07 Thread Patrick McGarry
Hey cephers, For those who are attending OpenStack Summit in Barcelona this October and have not yet purchased your ticket, I wanted to share a 20% discount code that has just been provided to Red Hat and that we can share freely. The code you need to enter is: RED_OPENSTACKSUMMIT This 20% discount c

Re: [ceph-users] OpenStack Barcelona discount code

2016-09-07 Thread Patrick McGarry
And since my url pasting failed on the Eventbrite link, here is the registration link: https://openstacksummit2016barcelona.eventbrite.com/ On Wed, Sep 7, 2016 at 2:30 PM, Patrick McGarry wrote: > Hey cephers, > > For those who are attending OpenStack Summit in Barcelona this October > and hav

[ceph-users] Ceph Developer Monthly

2016-09-07 Thread Patrick McGarry
Hey cephers, Tonight the CDM meeting is an APAC-friendly time slot (9p EDT), so please drop a quick 1-liner and pad link if you have something to discuss. Thanks! http://tracker.ceph.com/projects/ceph/wiki/CDM_07-SEP-2016 -- Best Regards, Patrick McGarry Director Ceph Community || Red Hat h

Re: [ceph-users] PGs lost from cephfs data pool, how to determine which files to restore from backup?

2016-09-07 Thread Gregory Farnum
On Wed, Sep 7, 2016 at 7:44 AM, Michael Sudnick wrote: > I've had to force recreate some PGs on my cephfs data pool due to some > cascading disk failures in my homelab cluster. Is there a way to easily > determine which files I need to restore from backup? My metadata pool is > completely intact.

Re: [ceph-users] 2 osd failures

2016-09-07 Thread Christian Balzer
Hello, On Wed, 7 Sep 2016 08:38:24 -0400 Shain Miley wrote: > Well, not entirely too late I guess :-( > Then re-read my initial reply and see if you can find something in other logs (syslog/kernel) to explain this. Also check whether those OSDs are all on the same node and maybe missed their upgrade
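
A hedged starting point for that log digging (OSD ids and the time window are placeholders):

    dmesg -T | grep -iE 'sd|ata|scsi|error'          # disk/controller trouble
    grep -iE 'error|fail|reset' /var/log/syslog
    journalctl -u ceph-osd@<id> --since "yesterday"  # why the daemon went down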

Re: [ceph-users] PGs lost from cephfs data pool, how to determine which files to restore from backup?

2016-09-07 Thread Goncalo Borges
Hi Greg... I've had to force recreate some PGs on my cephfs data pool due to some cascading disk failures in my homelab cluster. Is there a way to easily determine which files I need to restore from backup? My metadata pool is completely intact. Assuming you're on Jewel, run a recursive "scru
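
For reference, that forward scrub is driven through the MDS admin socket; a hedged sketch, with the daemon name as a placeholder (verify the accepted flags for your version with: ceph daemon mds.<name> help):

    ceph daemon mds.<name> scrub_path / recursive
    # damaged or missing objects are then reported in the MDS log / cluster log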