Re: [ceph-users] v0.61.5 Cuttlefish update released

2013-07-19 Thread Stefan Priebe - Profihost AG
crash is this one: 2013-07-19 08:59:32.137646 7f484a872780 0 ceph version 0.61.5-17-g83f8b88 (83f8b88e5be41371cb77b39c0966e79cad92087b), process ceph-mon, pid 22172 2013-07-19 08:59:32.173975 7f484a872780 -1 mon/OSDMonitor.cc: In function 'virtual void OSDMonitor::update_from_paxos(bool*)' thread

Re: [ceph-users] v0.61.5 Cuttlefish update released

2013-07-19 Thread Dan van der Ster
Was that 0.61.4 -> 0.61.5? Our upgrade of all mons and osds on SL6.4 went without incident. -- dan -- Dan van der Ster CERN IT-DSS On Friday, July 19, 2013 at 9:00 AM, Stefan Priebe - Profihost AG wrote: > crash is this one: > > 2013-07-19 08:59:32.137646 7f484a872780 0 ceph version > 0.61.

Re: [ceph-users] v0.61.5 Cuttlefish update released

2013-07-19 Thread Stefan Priebe - Profihost AG
Complete Output / log with debug mon 20 here: http://pastebin.com/raw.php?i=HzegqkFz Stefan Am 19.07.2013 09:00, schrieb Stefan Priebe - Profihost AG: > crash is this one: > > 2013-07-19 08:59:32.137646 7f484a872780 0 ceph version > 0.61.5-17-g83f8b88 (83f8b88e5be41371cb77b39c0966e79cad92087b)

Re: [ceph-users] v0.61.5 Cuttlefish update released

2013-07-19 Thread Stefan Priebe - Profihost AG
Am 19.07.2013 09:56, schrieb Dan van der Ster: > Was that 0.61.4 -> 0.61.5? Our upgrade of all mons and osds on SL6.4 > went without incident. It was from a git version in between 0.61.4 and 0.61.5, upgrading to 0.61.5. Stefan > > -- > Dan van der Ster > CERN IT-DSS > > On Friday, July 19, 2013 at 9:00 A

Re: [ceph-users] Problem executing ceph-deploy on RHEL6

2013-07-19 Thread jose.valeriooropeza
I changed the protocol to http, but I still could not make the script run. However, I found the line on install.py that sets this command (line 183): args='su -c \'rpm --import "https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/{key}.asc"\''.format(key=key), I changed it to a dummy command:
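
For anyone hitting the same wall: rather than editing install.py, one workaround is to import the key by hand on the node before running ceph-deploy install. A minimal sketch, assuming the node can reach ceph.com over plain http and that the key ceph-deploy wants is the release key:

    # fetch the release key over http and import it manually
    # (workaround sketch; adjust the key name if ceph-deploy asks for a different one)
    wget -O /tmp/release.asc "http://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc"
    su -c 'rpm --import /tmp/release.asc'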

[ceph-users] ceph-deploy mon create doesn't create keyrings

2013-07-19 Thread jose.valeriooropeza
Hello, I've deployed a Ceph cluster consisting of 5 server nodes and a Ceph client that will hold the mounted CephFS. The cephclient serves as admin too, and from that node I want to deploy the 5 servers with the ceph-deploy tool. From the admin I execute: "ceph-deploy mon create cephserver2"

Re: [ceph-users] v0.66 released

2013-07-19 Thread Erik Logtenberg
> * osd: pg log (re)writes are not vastly more efficient (faster peering) >(Sam Just) Do you really mean "are not"? I'd think "are now" would make sense (?) - Erik. ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/list

Re: [ceph-users] v0.61.5 Cuttlefish update released

2013-07-19 Thread Sage Weil
On Fri, 19 Jul 2013, Stefan Priebe - Profihost AG wrote: > crash is this one: Can you post a full log (debug mon = 20, debug paxos = 20, debug ms = 1), and/or hit us up on irc? > > 2013-07-19 08:59:32.137646 7f484a872780 0 ceph version > 0.61.5-17-g83f8b88 (83f8b88e5be41371cb77b39c0966e79cad9
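
For reference, the requested debug levels can be turned on in ceph.conf on the affected monitor and ceph-mon restarted (a minimal sketch; the section goes in that mon host's ceph.conf):

    [mon]
        debug mon = 20
        debug paxos = 20
        debug ms = 1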

Re: [ceph-users] v0.66 released

2013-07-19 Thread Sage Weil
On Fri, 19 Jul 2013, Erik Logtenberg wrote: > > * osd: pg log (re)writes are not vastly more efficient (faster peering) > >(Sam Just) > > Do you really mean "are not"? I'd think "are now" would make sense (?) Yeah, "are now"... this got fixed in the blog post but I didn't send out another

Re: [ceph-users] ceph-deploy mon create doesn't create keyrings

2013-07-19 Thread jose.valeriooropeza
Yes, I did. From: Gregory Farnum [mailto:g...@inktank.com] Sent: Friday, July 19, 2013 4:59 PM To: Valerio Oropeza José, ITS-CPT-DEV-TAD Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] ceph-deploy mon create doesn't create keyrings Did you do "ceph-deploy new" before you started? On Frida

Re: [ceph-users] ceph-deploy mon create doesn't create keyrings

2013-07-19 Thread Gregory Farnum
> On Friday, July 19, 2013, wrote: > > Hello, > > I’ve deployed a Ceph cluster consisting of 5 server nodes and a Ceph client > that will hold the mounted CephFS. > > The cephclient serves as admin too, and from that node I want to deploy the > 5 servers with the ceph-deploy tool. > > From the admi

Re: [ceph-users] ceph & hbase:

2013-07-19 Thread ker can
On Thu, Jul 18, 2013 at 3:13 PM, ker can wrote: > > the hbase+hdfs throughput results were 38x better. > Any thoughts on what might be going on ? > > Looks like this might be a data locality issue. After loading the table, when I look at the data block map of a region's store files it's spread ou
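
For what it's worth, CRUSH placement for a given object can be queried directly, which helps when checking where a store file's objects actually land (sketch; the pool and object names below are placeholders, not ones from this cluster):

    # ask CRUSH which PG and OSDs an object maps to
    ceph osd map data some-object-name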

[ceph-users] Optimize Ceph cluster (kernel, osd, rbd)

2013-07-19 Thread Ta Ba Tuan
Hi everyone, I have 3 nodes (running MON and MDS) and 6 data nodes (84 OSDs). Each data node has this configuration: - CPU: 24 processor cores, Intel(R) Xeon(R) CPU E5-2620 0 @ 2.00GHz - RAM: 32GB - Disk: 14 x 4TB (14 disks x 4TB x 6 data nodes = 84 OSDs) To optimize the Ceph cluster, I adjusted
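
For context, a sketch of the kind of ceph.conf knobs commonly adjusted for a setup like this - illustrative values only, not the poster's actual settings:

    [osd]
        osd op threads = 4
        osd disk threads = 2
        filestore max sync interval = 10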

Re: [ceph-users] feature set mismatch

2013-07-19 Thread Gaylord Holder
On 07/17/2013 05:49 PM, Josh Durgin wrote: [please keep replies on the list] On 07/17/2013 04:04 AM, Gaylord Holder wrote: On 07/16/2013 09:22 PM, Josh Durgin wrote: On 07/16/2013 06:06 PM, Gaylord Holder wrote: Now whenever I try to map an RBD to a machine, mon0 complains: feature set m
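
For readers hitting the same "feature set mismatch" from an older kernel rbd client, one commonly suggested workaround is to drop back to the legacy CRUSH tunables (sketch only; this trades away the newer placement behaviour, and upgrading the client kernel is the other option):

    ceph osd crush tunables legacy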

Re: [ceph-users] v0.61.5 Cuttlefish update released

2013-07-19 Thread Stefan Priebe
Hi, sorry - as all my mons were down with the same error, I was in a hurry, sadly made no copy of the mons, and worked around it with a hack ;-( but I posted a log to pastebin with debug mon 20 (see last email). Stefan Am 19.07.2013 17:14, schrieb Sage Weil: On Fri, 19 Jul 2013, Stefan Priebe - Profihost

Re: [ceph-users] weird: "-23/116426 degraded (-0.020%)"

2013-07-19 Thread Gregory Farnum
Yeah, that's a known bug with the stats collection. I think I heard Sam discussing fixing it earlier today or something. Thanks for mentioning it. :) -Greg Software Engineer #42 @ http://inktank.com | http://ceph.com On Wed, Jul 17, 2013 at 4:53 PM, Mikaël Cluseau wrote: > Hi list, > > not a rea

Re: [ceph-users] ceph-deploy mon create doesn't create keyrings

2013-07-19 Thread Gregory Farnum
Did you do "ceph-deploy new" before you started? On Friday, July 19, 2013, wrote: > Hello, > > I’ve deployed a Ceph cluster consisting of 5 server nodes and a Ceph > client that will hold the mounted CephFS. > > The cephclient serves as admin too, and from that node I want to deploy > the 5 serv

Re: [ceph-users] Unclean PGs in active+degraded or active+remapped

2013-07-19 Thread Mike Lowe
I'm by no means an expert, but from what I understand you do need to stick to numbering from zero if you want things to work out in the long term. Is there a chance that the cluster didn't finish bringing things back up to full replication before OSDs were removed? If I were moving from 0,1
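
A conservative removal sequence consistent with that advice, waiting for the cluster to return to active+clean after the "out" step before deleting anything (sketch; the OSD id below is a placeholder):

    ceph osd out 3
    ceph -s                      # wait here until recovery finishes
    ceph osd crush remove osd.3
    ceph auth del osd.3
    ceph osd rm 3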

[ceph-users] Unclean PGs in active+degraded or active+remapped

2013-07-19 Thread Pawel Veselov
Hi. I'm trying to understand the reason behind some of my unclean PGs, after moving some OSDs around. Any help would be greatly appreciated. I'm sure we are missing something, but can't quite figure out what. [root@ip-10-16-43-12 ec2-user]# ceph health detail HEALTH_WARN 29 pgs degraded; 68 pgs
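
A couple of commands that help when digging into individual stuck PGs like these (sketch; the pg id below is a placeholder, not one from the output above):

    ceph pg dump_stuck unclean
    ceph pg 0.1f query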

Re: [ceph-users] xfs on ceph RBD resizing

2013-07-19 Thread Jeffrey 'jf' Lim
On Fri, Jul 19, 2013 at 12:54 PM, Jeffrey 'jf' Lim wrote: > hey folks, I was hoping to be able to use xfs on top of RBD for a > deployment of mine. And was hoping for the resize of the RBD > (expansion, actually, would be my use case) in the future to be as > simple as a "resize on the fly", follo

[ceph-users] xfs on RBD: resizing how?

2013-07-19 Thread Jeffrey 'jf' Lim
hey folks, I'm hoping to be able to use xfs on top of RBD for a deployment of mine. And was hoping for the resize of the RBD (expansion, actually, would be my use case) in the future to be as simple as a "resize on the fly", followed by an 'xfs_growfs'. I just found a recent post, though (http://l
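
For reference, the commands on the grow path are short; whether a mapped kernel client picks the new size up online is exactly the question raised here (sketch; pool/image name, size and mount point are placeholders, and rbd resize takes the size in MB in this release):

    rbd resize --size 204800 rbd/myimage
    xfs_growfs /mnt/myimage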

Re: [ceph-users] optimizing recovery throughput

2013-07-19 Thread Mikaël Cluseau
Hi, On 07/19/13 07:16, Dan van der Ster wrote: and that gives me something like this: 2013-07-18 21:22:56.546094 mon.0 128.142.142.156:6789/0 27984 : [INF] pgmap v112308: 9464 pgs: 8129 active+clean, 398 active+remapped+wait_backfill, 3 active+recovery_wait, 933 active+remapped+backfilling, 1 a
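
When backfill like this needs speeding up or throttling, the relevant knobs can be changed at runtime across all OSDs (sketch; the values are examples only - higher means faster recovery but more impact on client I/O):

    ceph tell osd.\* injectargs '--osd-max-backfills 4 --osd-recovery-max-active 4'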

Re: [ceph-users] ceph & hbase:

2013-07-19 Thread Noah Watkins
On Fri, Jul 19, 2013 at 8:09 AM, ker can wrote: > > With ceph is there any way to influence the data block placement for a > single file ? AFAIK, no... But, this is an interesting twist. New files written out to HDFS, IIRC, will by default store 1 local and 2 remote copies. This is great for MapR
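
For comparing locality between the two stacks, block locations on the HDFS side can be listed per file (sketch; the path is a placeholder for wherever the region's store files live):

    hadoop fsck /hbase/usertable -files -blocks -locations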