Skip,

I noticed a similar error message about a week ago, but only with one of
my peers. I did not notice an extremely high load, though (maybe I just
didn't watch closely enough?).

I had looked at some of the read length numbers then, trying to find a
potential source of the problem. Here's what I wrote at the time, in the
hope that it might help locate the problem:

> * A third option might be that some proxy (or similar) is
> intercepting the request and its answer is misinterpreted. However,
> the numbers (45083906, 41977124, 57924048) do not look like ASCII
> text misinterpreted as binary. On the other hand, 1764832326 (the 1.7GB
> number)
> is 0x69313446, which could be "i14F" (or, little endian, "F41i");
> 1577058304, the 1.5GB number, is 0x5E000000 (0x5E is "^"), which also
> looks like a field mismatch (and not a ptree corruption).
> 
Given that it is multiple nodes now, the "weird proxy" is probably no
longer a likely cause. However, it is worth noting that some of the
lengths do look non-random. Maybe something is reading from
uninitialized memory (or whatever the equivalent of that is in Go)?
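
In case anyone wants to repeat that check on their own log entries,
here is a small, self-contained Go snippet. The values are the ones
from above; the helper is just for this mail. It prints each reported
length as hex and as printable ASCII in both byte orders:

package main

import (
    "encoding/binary"
    "fmt"
)

// Lengths reported in the "exceeds maximum limit" errors discussed here.
var lengths = []uint32{45083906, 41977124, 57924048, 1764832326, 1577058304}

// printable keeps printable ASCII bytes and replaces everything else with '.'.
func printable(b []byte) []byte {
    out := make([]byte, len(b))
    for i, c := range b {
        if c >= 0x20 && c <= 0x7e {
            out[i] = c
        } else {
            out[i] = '.'
        }
    }
    return out
}

func main() {
    for _, n := range lengths {
        var be, le [4]byte
        binary.BigEndian.PutUint32(be[:], n)
        binary.LittleEndian.PutUint32(le[:], n)
        fmt.Printf("%10d  0x%08X  BE %q  LE %q\n",
            n, n, printable(be[:]), printable(le[:]))
    }
}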

-Marcel

On Monday, 25.10.2021 at 09:34 -0700, Skip Carter wrote:
> A couple of months ago I had an incident where hockeypuck ran away,
> resulting in a CPU load over 50.  I firewalled access to the http port
> temporarily and the load subsided.  This has happened a couple of
> times since then, again today.
> 
> I found this in the logs (which I find frustratingly inadequate):
> 
> error[] recon with xxx.xxx.xxx.xxx:36506 failed         error=read
> length 804813932 exceeds maximum limit
> 
> The remote address is not just a single host; more than one of my
> peers is involved.
> 
> Has anyone else had CPU runaway issues?  What is the cause and the
> cure?
> 
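
P.S. Regarding the "read length ... exceeds maximum limit" message
itself: as far as I understand it, recon messages are length-prefixed,
so the reader pulls a length off the TCP stream and rejects anything
implausibly large before allocating a buffer. The sketch below is only
an illustration of that kind of check (the cap value and the function
are made up for this mail; it is not the actual hockeypuck/conflux
code), but it shows how a misaligned or garbage stream produces exactly
this error:

package main

import (
    "bytes"
    "encoding/binary"
    "fmt"
    "io"
)

// Assumed cap for this sketch only; the real limit lives in the recon code.
const maxReadLen = 1 << 24

// readMsg reads one length-prefixed message: a 4-byte big-endian length,
// then that many bytes of payload.
func readMsg(r io.Reader) ([]byte, error) {
    var hdr [4]byte
    if _, err := io.ReadFull(r, hdr[:]); err != nil {
        return nil, err
    }
    n := binary.BigEndian.Uint32(hdr[:])
    if n > maxReadLen {
        // If the stream is misaligned, "n" is really some other field,
        // which is how numbers like 804813932 end up in the logs.
        return nil, fmt.Errorf("read length %d exceeds maximum limit", n)
    }
    buf := make([]byte, n)
    if _, err := io.ReadFull(r, buf); err != nil {
        return nil, err
    }
    return buf, nil
}

func main() {
    // A deliberately bogus prefix: the ASCII bytes "i14F" decode
    // (big-endian) to 1764832326, one of the lengths from this thread.
    stream := bytes.NewReader([]byte("i14F and some trailing bytes"))
    if _, err := readMsg(stream); err != nil {
        fmt.Println("recon failed:", err)
    }
}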
