... and BTW, I know it's my fault that I haven't done the mds newfs, but
I think it would be better to print an error rather than crashing with a
core dump and a stack trace.
Just my eur 0.02 :)
Cheers,
Giuseppe
___
ceph-users mailing list
ceph-users@lists.ceph.com
___
Hi Greg,
just for your own information, ceph mds newfs has disappeared from the
help screen of the "ceph" command, and it was a nightmare to figure out
the syntax (which has changed)... luckily the sources were there :)
For the "flight log":
ceph mds newfs --yes-i-really-mean-it
Cheers,
Gippa
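[For reference, a sketch of the hidden syntax as read from the cuttlefish-era sources: the command appears to take the metadata and data pool IDs as positional arguments. The pool IDs below are hypothetical and would have to be looked up on your own cluster first:]

```shell
# List pools to find their numeric IDs (the IDs below are illustrative)
ceph osd lspools

# Re-initialize the CephFS metadata, passing
# <metadata pool id> <data pool id>
ceph mds newfs 1 0 --yes-i-really-mean-it
```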
___
On Wed, May 29, 2013 at 11:20 PM, Giuseppe 'Gippa' Paterno' wrote:
Hi Greg,
> Oh, not the OSD stuff, just the CephFS stuff that goes on top. Look at
> http://www.mail-archive.com/ceph-users@lists.ceph.com/msg00029.html
> Although if you were re-creating pools and things, I think that would
> explain the crash you're seeing.
> -Greg
>
I was thinking about that
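[A hedged way to check Greg's explanation on a live cluster: the MDS map records pools by numeric ID, and a re-created pool gets a new ID, so the MDS can end up referencing pools that no longer exist. A sketch with cuttlefish-era commands; output fields are assumptions from that release:]

```shell
# Show which pool IDs the MDS map still references
ceph mds dump | grep -E 'metadata_pool|data_pools'

# Show which pools actually exist now; if the IDs above do not
# appear here, the MDS is pointing at deleted pools
ceph osd lspools
```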
___
On Tue, May 28, 2013 at 2:35 PM, Giuseppe 'Gippa' Paterno' wrote:
Hi Greg,
> Do I correctly assume that you don't have any CephFS data in the cluster yet?
The funny thing is that this was a fresh installation.
Just for your information, ceph-deploy didn't work for me and I had to
do all the operations manually.
I recreated one of the two ceph clusters with bobtail, sam
___
On Thu, May 23, 2013 at 2:43 PM, Giuseppe 'Gippa' Paterno' wrote:
> Hi!
>
> I've got a cluster of two nodes on Ubuntu 12.04 with cuttlefish from the
> ceph.com repo.
> ceph version 0.61.2 (fea782543a844bb277ae94d3391788b76c5bee60)
>
> The MDS process is dying after a while with a stack trace, but