[ https://issues.apache.org/jira/browse/FLINK-1297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14577160#comment-14577160 ]

ASF GitHub Bot commented on FLINK-1297:
---------------------------------------

Github user tammymendt commented on the pull request:

    https://github.com/apache/flink/pull/605#issuecomment-109990676
  
    Hey! 
    So I fixed the NullPointerException and, as far as I can see, it is no longer 
an issue. Given the non-deterministic nature of the algorithms, some tests still 
fail occasionally. I am looking into this. I am considering leaving the tests for 
the accuracy of the algorithms out of the Accumulator test, since the algorithms 
are already covered by other test classes. That should avoid the "algorithm 
accuracy" related failures in the OperatorStatisticsAccumulatorTest class.
    
    Also, I have been refactoring the code to avoid the conditionals that check 
which statistics are being collected in the "process" and "merge" methods. This 
is based on a remark that @fhueske made earlier in the PR. Should I open a PR for 
the refactored code now, or rather leave this one in its current state and wait 
until it is merged before opening a PR for the refactored version?
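    
    For reference, the rough shape of the refactoring I have in mind is along the 
lines below (class and method names are only illustrative, not the ones in the 
PR): each statistic updates and merges itself, and the accumulator just delegates 
to a fixed list of collectors chosen up front, so "process" and "merge" no longer 
branch on which statistics are enabled.
    
        import java.util.ArrayList;
        import java.util.List;

        // Illustrative only: each statistic knows how to update and merge itself.
        interface StatisticCollector<T> {
            void process(T value);
            void merge(StatisticCollector<T> other);
        }

        class MinLongCollector implements StatisticCollector<Long> {
            long min = Long.MAX_VALUE;

            @Override
            public void process(Long value) {
                min = Math.min(min, value);
            }

            @Override
            public void merge(StatisticCollector<Long> other) {
                min = Math.min(min, ((MinLongCollector) other).min);
            }
        }

        // Configured once with the collectors it should run; process() and merge()
        // simply delegate, with no per-statistic conditionals. merge() assumes both
        // instances were configured with the same collectors in the same order.
        class OperatorStatisticsSketch {
            private final List<StatisticCollector<Long>> collectors = new ArrayList<>();

            OperatorStatisticsSketch add(StatisticCollector<Long> collector) {
                collectors.add(collector);
                return this;
            }

            void process(Long value) {
                for (StatisticCollector<Long> collector : collectors) {
                    collector.process(value);
                }
            }

            void merge(OperatorStatisticsSketch other) {
                for (int i = 0; i < collectors.size(); i++) {
                    collectors.get(i).merge(other.collectors.get(i));
                }
            }
        }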
    
    Cheers!


> Add support for tracking statistics of intermediate results
> -----------------------------------------------------------
>
>                 Key: FLINK-1297
>                 URL: https://issues.apache.org/jira/browse/FLINK-1297
>             Project: Flink
>          Issue Type: Improvement
>          Components: Distributed Runtime
>            Reporter: Alexander Alexandrov
>            Assignee: Alexander Alexandrov
>             Fix For: 0.9
>
>   Original Estimate: 1,008h
>  Remaining Estimate: 1,008h
>
> One of the major problems related to the optimizer at the moment is the lack 
> of proper statistics.
> With the introduction of staged execution, it is possible to instrument the 
> runtime code with a statistics facility that collects the required 
> information for optimizing the next execution stage.
> I would therefore like to contribute code that can be used to gather basic 
> statistics for the (intermediate) result of dataflows (e.g. min, max, count, 
> count distinct) and make them available to the job manager.
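> A minimal sketch of the kind of per-partition state I have in mind is below 
> (plain Java with illustrative names; the real version would be exposed through 
> Flink's accumulator mechanism, and count distinct would use a cardinality 
> sketch such as HyperLogLog instead of an exact set):
>
>     import java.util.HashSet;
>     import java.util.Set;
>
>     // Per-partition statistics; instances from different partitions are merged
>     // before the result is reported to the job manager.
>     public class BasicColumnStatistics {
>         private long count = 0;
>         private long min = Long.MAX_VALUE;
>         private long max = Long.MIN_VALUE;
>         // Exact distinct values as a stand-in; a real implementation would use
>         // a cardinality sketch to keep memory bounded.
>         private final Set<Long> distinct = new HashSet<>();
>
>         public void process(long value) {
>             count++;
>             min = Math.min(min, value);
>             max = Math.max(max, value);
>             distinct.add(value);
>         }
>
>         public void merge(BasicColumnStatistics other) {
>             count += other.count;
>             min = Math.min(min, other.min);
>             max = Math.max(max, other.max);
>             distinct.addAll(other.distinct);
>         }
>
>         public long getCount() { return count; }
>         public long getCountDistinct() { return distinct.size(); }
>         public long getMin() { return min; }
>         public long getMax() { return max; }
>     }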
> Before I start, I would like to hear some feedback from the other users.
> In particular, to handle skew (e.g. on grouping) it might be good to have 
> some sort of detailed sketch about the key distribution of an intermediate 
> result. I am not sure whether a simple histogram is the most effective way to 
> go. Maybe somebody would propose another lightweight sketch that provides 
> better accuracy.
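> For example, a Count-Min style sketch estimates per-key frequencies in bounded 
> space and merges cheaply across partitions, which is enough to spot heavily 
> skewed keys. A minimal illustration (not a concrete proposal for the 
> implementation):
>
>     import java.util.Random;
>
>     // Minimal Count-Min sketch over arbitrary keys.
>     public class CountMinSketch {
>         private final int depth;
>         private final int width;
>         private final long[][] counts;
>         private final int[] hashSeeds;
>
>         public CountMinSketch(int depth, int width, long seed) {
>             this.depth = depth;
>             this.width = width;
>             this.counts = new long[depth][width];
>             this.hashSeeds = new int[depth];
>             Random rnd = new Random(seed);
>             for (int i = 0; i < depth; i++) {
>                 hashSeeds[i] = rnd.nextInt();
>             }
>         }
>
>         private int bucket(int row, Object key) {
>             int h = key.hashCode() ^ hashSeeds[row];
>             h ^= (h >>> 16);
>             return (h & Integer.MAX_VALUE) % width;
>         }
>
>         public void add(Object key) {
>             for (int i = 0; i < depth; i++) {
>                 counts[i][bucket(i, key)]++;
>             }
>         }
>
>         // Never under-estimates the true frequency; collisions can only inflate
>         // the estimate, and a wider table makes large inflation less likely.
>         public long estimateFrequency(Object key) {
>             long min = Long.MAX_VALUE;
>             for (int i = 0; i < depth; i++) {
>                 min = Math.min(min, counts[i][bucket(i, key)]);
>             }
>             return min;
>         }
>
>         // Assumes both sketches were created with the same depth, width and seed.
>         public void merge(CountMinSketch other) {
>             for (int i = 0; i < depth; i++) {
>                 for (int j = 0; j < width; j++) {
>                     counts[i][j] += other.counts[i][j];
>                 }
>             }
>         }
>     }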



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
