Thank you for your reply.
I had over 190 GB left on the disk where the mon was residing. I deployed
with Fuel initially and didn't check how the mon was configured.
I am re-deploying now and will check what is configured out of the box by
Fuel 4.1.
Maybe it was pointing somewhere other than /.
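A quick way to check where the mon actually keeps its store and how full that
disk is; the paths below are the stock defaults (mon data defaults to
/var/lib/ceph/mon/$cluster-$id, the admin socket to /var/run/ceph/), so a Fuel
deployment may well put them somewhere else:

    # dump the running monitor's config and look for its data path
    ceph --admin-daemon /var/run/ceph/ceph-mon.*.asok config show | grep mon_data

    # then check free space on the filesystem that holds it
    df -h /var/lib/ceph/mon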
Well, that's no mon crash.
On 04/04/2014 06:06 PM, Karol Kozubal wrote:
Anyone know why this happens? What datastore fills up specifically?
The monitor's. Your monitor is sitting on a disk that is filling up.
The monitor will check for available disk space to make sure it has
enough to work
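The relevant knobs here are mon_data_avail_warn and mon_data_avail_crit: the
"reached concerning levels" warning is logged once free space on the mon's
data store drops below the warn threshold, and below the crit threshold the
monitor refuses to keep running. A minimal ceph.conf sketch; the values shown
are just the usual defaults (30% / 5%) and only illustrate where the knobs live:

    [mon]
        # warn in the cluster log when the mon store's filesystem
        # has less than this percentage free
        mon data avail warn = 30
        # below this percentage the monitor gives up entirely
        mon data avail crit = 5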
Anyone know why this happens? What datastore fills up specifically?
2014-04-04 17:01:51.277954 mon.0 [WRN] reached concerning levels of available
space on data store (16% free)
2014-04-04 17:03:51.279801 7ffd0f7fe700 0 monclient: hunting for new mon
2014-04-04 17:03:51.280844 7ffd0d6f9700 0 -- 19
On 04/16/2013 12:01 AM, Dan Mick wrote:
Two is a strange choice for number of monitors; you really want an odd
number. With two, if either one fails (or you have a network fault),
the cluster is dead because there's no majority.
That said, we certainly don't expect monitors to die when the network fault goes away.
I'd bet that's 3495; it looks and sounds really, really similar. A lot
of the devs are at a conference, but if you see Joao on IRC he'd know
for sure.
On 04/15/2013 04:56 PM, Craig Lewis wrote:
>
> I'm doing a test of Ceph in two colo facilities. Since it's just a
> test, I only have 2 VMs running, one in each colo.
Two is a strange choice for number of monitors; you really want an odd
number. With two, if either one fails (or you have a network fault),
the cluster is dead because there's no majority.
That said, we certainly don't expect monitors to die when the network
fault goes away. Searching the bug
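The arithmetic behind that: monitors form quorum only with a strict majority,
floor(N/2) + 1, so two monitors buy you no failure tolerance at all:

    quorum size = floor(N/2) + 1
    N = 2  ->  need 2  (either mon down, or a split, kills the cluster)
    N = 3  ->  need 2  (tolerates 1 down)
    N = 5  ->  need 3  (tolerates 2 down)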
I'm doing a test of Ceph in two colo facilities. Since it's just a
test, I only have 2 VMs running, one in each colo. Both VMs are running
mon, mds, a single osd, and the RADOS gw. Cephx is disabled. I'm
testing if the latency between the two facilities (~20ms) is low enough
that I can run
Hi Joao,
thanks for catching that.
-martin
On 28.03.2013 20:03, Joao Eduardo Luis wrote:
>
> Hi Martin,
>
> As John said in his reply, these should be reported to ceph-devel (CC'ing).
>
> Anyway, this is bug #4519 [1]. It was introduced after 0.58, released
> under 0.59 and is already fixed.
On 03/28/2013 01:03 AM, Martin Mailand wrote:
Hi,
today one of my mons crashed, the log is here.
http://pastebin.com/ugr1fMJR
I think the most important part is:
2013-03-28 01:57:48.564647 7fac6c0ea700 -1
auth/none/AuthNoneServiceHandler.h: In function 'virtual int
AuthNoneServiceHandler::handle_request
Martin,
A crash is usually a bug. You should report this to the ceph-devel list.
On Wed, Mar 27, 2013 at 6:03 PM, Martin Mailand wrote:
> Hi,
>
> today one of my mons crashed, the log is here.
> http://pastebin.com/ugr1fMJR
>
> I think the most important part is:
> 2013-03-28 01:57:48.564647 7fac6c0ea700 -1
Hi,
today one of my mons crashed, the log is here.
http://pastebin.com/ugr1fMJR
I think the most important part is:
2013-03-28 01:57:48.564647 7fac6c0ea700 -1
auth/none/AuthNoneServiceHandler.h: In function 'virtual int
AuthNoneServiceHandler::handle_request(ceph::buffer::list::iterator&,
ceph::b
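For context on where that assertion lives: AuthNoneServiceHandler is only
exercised when cephx is switched off, i.e. with settings along the lines of
the sketch below (the three-option form is the Bobtail-and-later spelling; the
older single switch was "auth supported = none"). This is shown only to
illustrate the configuration that puts the mon on this code path, not as a
workaround for bug #4519:

    [global]
        # disable cephx everywhere, which routes auth through AuthNone
        auth cluster required = none
        auth service required = none
        auth client required = none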