Hadoop Metrics2 should emit Float.MAX_VALUE (instead of Double.MAX_VALUE) to 
avoid making Ganglia's gmetad core
---------------------------------------------------------------------------------------------------------------

                 Key: HADOOP-8052
                 URL: https://issues.apache.org/jira/browse/HADOOP-8052
             Project: Hadoop Common
          Issue Type: Bug
          Components: metrics
    Affects Versions: 1.0.0, 0.23.0
            Reporter: Varun Kapoor
            Assignee: Varun Kapoor


Ganglia's gmetad converts the doubles emitted by Hadoop's Metrics2 system to 
strings, and the buffer it uses is 256 bytes wide.

When the SampleStat.MinMax class (in org.apache.hadoop.metrics2.util) emits its 
default min value (currently initialized to Double.MAX_VALUE), it ends up 
causing a buffer overflow in gmetad, which causes it to core, effectively 
rendering Ganglia useless (for some, the core is continuous; for others who are 
more fortunate, it's only a one-time Hadoop-startup-time thing).

The fix needed in Ganglia itself is simple - bump the buffer up to 512 bytes 
and all will be well - but instead of requiring a minimum version of Ganglia 
to work with Hadoop's Metrics2 system, it might be more prudent for Hadoop to 
just emit Float.MAX_VALUE.

An additional problem is caused in librrd (which Ganglia uses beneath the 
covers) by Double.MIN_VALUE (which functions as the default max value): it 
underflows when librrd runs the received string through libc's strtod(). The 
librrd code is good enough to check for this and only emits a warning, but 
moving to Float.MIN_VALUE fixes that as well.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira
