crash is this one:
2013-07-19 08:59:32.137646 7f484a872780 0 ceph version
0.61.5-17-g83f8b88 (83f8b88e5be41371cb77b39c0966e79cad92087b), process
ceph-mon, pid 22172
2013-07-19 08:59:32.173975 7f484a872780 -1 mon/OSDMonitor.cc: In
function 'virtual void OSDMonitor::update_from_paxos(bool*)' thread
Was that 0.61.4 -> 0.61.5? Our upgrade of all mons and osds on SL6.4 went
without incident.
--
dan
--
Dan van der Ster
CERN IT-DSS
On Friday, July 19, 2013 at 9:00 AM, Stefan Priebe - Profihost AG wrote:
> crash is this one:
>
> 2013-07-19 08:59:32.137646 7f484a872780 0 ceph version
> 0.61.
Complete Output / log with debug mon 20 here:
http://pastebin.com/raw.php?i=HzegqkFz
Stefan
Am 19.07.2013 09:00, schrieb Stefan Priebe - Profihost AG:
> crash is this one:
>
> 2013-07-19 08:59:32.137646 7f484a872780 0 ceph version
> 0.61.5-17-g83f8b88 (83f8b88e5be41371cb77b39c0966e79cad92087b)
Am 19.07.2013 09:56, schrieb Dan van der Ster:
> Was that 0.61.4 -> 0.61.5? Our upgrade of all mons and osds on SL6.4
> went without incident.
It was from a git version in between 0.61.4 and 0.61.5, upgraded to 0.61.5.
Stefan
>
> --
> Dan van der Ster
> CERN IT-DSS
>
> On Friday, July 19, 2013 at 9:00 A
I changed the protocol to http, but I still could not make the script run.
However, I found the line in install.py that sets this command (line 183):
args='su -c \'rpm --import
"https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/{key}.asc"\''.format(key=key),
I changed it to a dummy command:
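(The message is cut off before the dummy command itself, so the following is only an illustration and not the change that was actually made; it shows a no-op substitute for the key-import args, and the same import done over plain http with the {key} placeholder left exactly as install.py formats it:)

    # hypothetical no-op replacement for the rpm --import step:
    su -c 'true'
    # or keep the import but fetch the key over plain http:
    su -c 'rpm --import "http://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/{key}.asc"'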
Hello,
I've deployed a Ceph cluster consisting of 5 server nodes and a Ceph client
that will hold the mounted CephFS.
The cephclient serves as admin too, and from that node I want to deploy the 5
servers with the ceph-deploy tool.
From the admin I execute: "ceph-deploy mon create cephserver2"
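For reference, the usual ceph-deploy bootstrap from the admin node looks roughly like the sketch below (host names other than cephserver2 are assumptions; adjust to your own, and note that "mon create" only has keyrings to work with if "new" was run first):

    # run from the admin node; cephserver1..cephserver5 stand in for the 5 servers
    ceph-deploy new cephserver2          # writes ceph.conf and the initial mon keyring
    ceph-deploy install cephserver1 cephserver2 cephserver3 cephserver4 cephserver5
    ceph-deploy mon create cephserver2   # creates the monitor using that keyring
    ceph-deploy gatherkeys cephserver2   # pulls back the bootstrap keyrings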
> * osd: pg log (re)writes are not vastly more efficient (faster peering)
>(Sam Just)
Do you really mean "are not"? I'd think "are now" would make sense (?)
- Erik.
On Fri, 19 Jul 2013, Stefan Priebe - Profihost AG wrote:
> crash is this one:
Can you post a full log (debug mon = 20, debug paxos = 20, debug ms = 1),
and/or hit us up on irc?
>
> 2013-07-19 08:59:32.137646 7f484a872780 0 ceph version
> 0.61.5-17-g83f8b88 (83f8b88e5be41371cb77b39c0966e79cad9
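For reference, the debug settings Sage asks for above can be captured roughly like this; a sketch assuming the cuttlefish sysvinit scripts and default log paths, with the mon id "a" as a placeholder:

    # add to the [mon] section of /etc/ceph/ceph.conf:
    #     debug mon = 20
    #     debug paxos = 20
    #     debug ms = 1
    # then restart the monitor and collect its log:
    service ceph restart mon.a
    less /var/log/ceph/ceph-mon.a.log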
On Fri, 19 Jul 2013, Erik Logtenberg wrote:
> > * osd: pg log (re)writes are not vastly more efficient (faster peering)
> >(Sam Just)
>
> Do you really mean "are not"? I'd think "are now" would make sense (?)
Yeah, "are now"... this got fixed in the blog post but I didn't send out
another
Yes, I did
From: Gregory Farnum [mailto:g...@inktank.com]
Sent: Friday, July 19, 2013 4:59 PM
To: Valerio Oropeza José, ITS-CPT-DEV-TAD
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] ceph-deploy mon create doesn't create keyrings
Did you do "ceph-deploy new" before you started?
On Frida
> On Friday, July 19, 2013, wrote:
>
> Hello,
>
> I’ve deployed a Ceph cluster consisting of 5 server nodes and a Ceph client
> that will hold the mounted CephFS.
>
> The cephclient serves as admin too, and from that node I want to deploy the
> 5 servers with the ceph-deploy tool.
>
> From the admi
On Thu, Jul 18, 2013 at 3:13 PM, ker can wrote:
>
> the hbase+hdfs throughput results were 38x better.
> Any thoughts on what might be going on ?
>
>
Looks like this might be a data locality issue. After loading the table,
when I look at the data block map of a region's store files, it's spread ou
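One way to confirm that suspicion, assuming a stock HDFS setup with HBase under the default /hbase root, is to ask the namenode where the blocks of the store files actually live:

    # lists every block of the HBase store files and the datanodes holding it;
    # /hbase is the default hbase.rootdir and may differ on your cluster
    hdfs fsck /hbase -files -blocks -locations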
Hi everyone,
I have 3 nodes (running MON and MDS)
and 6 data nodes (84 OSDs).
Each data node has this configuration:
- CPU: 24 processor cores, Intel(R) Xeon(R) CPU E5-2620 0 @ 2.00GHz
- RAM: 32GB
- Disk: 14 x 4TB
(14 disks x 4TB x 6 data nodes = 84 OSDs)
To optimize the Ceph cluster, I adjusted
On 07/17/2013 05:49 PM, Josh Durgin wrote:
[please keep replies on the list]
On 07/17/2013 04:04 AM, Gaylord Holder wrote:
On 07/16/2013 09:22 PM, Josh Durgin wrote:
On 07/16/2013 06:06 PM, Gaylord Holder wrote:
Now whenever I try to map an RBD to a machine, mon0 complains:
feature set m
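The message is cut off above, but if this is the common kernel-client "feature set mismatch", one thing worth checking is whether the cluster's CRUSH tunables are newer than what the client kernel supports; a hedged sketch:

    # show the tunables/features the cluster currently requires
    ceph osd crush show-tunables
    # if an old kernel client cannot handle them, reverting to the legacy
    # profile is one possible (cluster-wide) workaround:
    ceph osd crush tunables legacy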
Hi,
sorry, as all my mons were down with the same error and I was in a hurry,
I sadly made no copy of the mons and worked around it with a hack ;-( but I posted
a log to pastebin with debug mon 20 (see last email).
Stefan
Am 19.07.2013 17:14, schrieb Sage Weil:
On Fri, 19 Jul 2013, Stefan Priebe - Profihost
Yeah, that's a known bug with the stats collection. I think I heard
Sam discussing fixing it earlier today or something.
Thanks for mentioning it. :)
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
On Wed, Jul 17, 2013 at 4:53 PM, Mikaël Cluseau wrote:
> Hi list,
>
> not a rea
Did you do "ceph-deploy new" before you started?
On Friday, July 19, 2013, wrote:
> Hello,
>
> I’ve deployed a Ceph cluster consisting of 5 server nodes and a Ceph
> client that will hold the mounted CephFS.
>
> The cephclient serves as admin too, and from that node I want to deploy
> the 5 serv
I'm by no means an expert, but from what I understand you do need to stick to
numbering from zero if you want things to work out in the long term. Is there
a chance that the cluster didn't finish bringing things back up to full
replication before the OSDs were removed?
If I were moving from 0,1
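For what it's worth, the usual sequence for retiring an OSD without dropping below full replication is roughly the following sketch (N stands for the OSD id; wait for the cluster to go clean between the "out" and the removal):

    ceph osd out N               # start draining data off the OSD
    ceph -w                      # watch until all PGs are active+clean again
    service ceph stop osd.N      # only once the cluster is clean
    ceph osd crush remove osd.N
    ceph auth del osd.N
    ceph osd rm N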
Hi.
I'm trying to understand the reason behind some of my unclean PGs after
moving some OSDs around. Any help would be greatly appreciated. I'm sure we
are missing something, but can't quite figure out what.
[root@ip-10-16-43-12 ec2-user]# ceph health detail
HEALTH_WARN 29 pgs degraded; 68 pgs
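A few commands that usually help narrow down which PGs are stuck and why (the PG id in the last line is a placeholder taken from the health detail output):

    ceph health detail           # names each degraded/unclean PG
    ceph pg dump_stuck unclean   # lists PGs that never went back to active+clean
    ceph pg 3.1f query           # replace 3.1f with one of the stuck PG ids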
On Fri, Jul 19, 2013 at 12:54 PM, Jeffrey 'jf' Lim wrote:
> hey folks, I was hoping to be able to use xfs on top of RBD for a
> deployment of mine. And was hoping for the resize of the RBD
> (expansion, actually, would be my use case) in the future to be as
> simple as a "resize on the fly", follo
hey folks, I'm hoping to be able to use xfs on top of RBD for a
deployment of mine. And was hoping for the resize of the RBD
(expansion, actually, would be my use case) in the future to be as
simple as a "resize on the fly", followed by an 'xfs_growfs'.
I just found a recent post, though
(http://l
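A minimal sketch of that grow path, assuming the client actually sees the new device size (older kernel rbd clients may need the image unmapped and remapped first); pool, image, and mountpoint names are placeholders:

    # on an admin node: grow the image to 200 GB (rbd sizes are in MB here)
    rbd resize --size 204800 rbd/myimage
    # on the client, once the block device reports the new size:
    xfs_growfs /mnt/myimage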
Hi,
On 07/19/13 07:16, Dan van der Ster wrote:
and that gives me something like this:
2013-07-18 21:22:56.546094 mon.0 128.142.142.156:6789/0 27984 : [INF]
pgmap v112308: 9464 pgs: 8129 active+clean, 398
active+remapped+wait_backfill, 3 active+recovery_wait, 933
active+remapped+backfilling, 1 a
On Fri, Jul 19, 2013 at 8:09 AM, ker can wrote:
>
> With ceph is there any way to influence the data block placement for a
> single file ?
AFAIK, no... But, this is an interesting twist. New files written out
to HDFS, IIRC, will by default store 1 local and 2 remote copies. This
is great for MapR