Maybe the isConverged() method is never called? To make sure, just
throw a RuntimeException inside the method and see what happens.
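A minimal sketch of that trick. Note this uses a simplified stand-in interface for illustration, not Flink's actual org.apache.flink.api.common.aggregators.ConvergenceCriterion; the debugging idea is the same:

```java
public class ConvergenceProbe {
    // Hypothetical stand-in for Flink's ConvergenceCriterion, with a
    // simplified signature; the real interface takes an aggregated value.
    interface Criterion<T> {
        boolean isConverged(int iteration, T value);
    }

    static class ProbeCriterion implements Criterion<Long> {
        @Override
        public boolean isConverged(int iteration, Long value) {
            // If this exception never surfaces in the task manager logs,
            // the method is never invoked at all.
            throw new RuntimeException("isConverged() WAS called at iteration " + iteration);
        }
    }

    public static void main(String[] args) {
        try {
            new ProbeCriterion().isConverged(1, 0L);
        } catch (RuntimeException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

If the job fails with this exception, the method is called and the problem is on the logging side; if the job runs through, the criterion is never invoked.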

On Wed, Jan 6, 2016 at 3:58 PM, Ana M. Martinez <a...@cs.aau.dk> wrote:

> Hi Till,
>
> I am afraid it does not work in any case.
>
> I am following the steps you indicate on your website (for the yarn
> configuration and for logging with slf4j):
>
> 1) Enable log aggregation in yarn-site:
>
> https://ci.apache.org/projects/flink/flink-docs-release-0.7/yarn_setup.html#log-files
>
> 2) Include Loggers as indicated here (see WordCountExample below):
>
> https://ci.apache.org/projects/flink/flink-docs-release-0.7/internal_logging.html
>
> But I cannot get the log messages that run in the map functions. Am I
> missing something?
>
> Thanks,
> Ana
>
> On 04 Jan 2016, at 14:00, Till Rohrmann <trohrm...@apache.org> wrote:
>
> I think the YARN application has to be finished in order for the logs to
> be accessible.
>
> Judging from your commands, you’re starting a long running YARN application
> running Flink with ./bin/yarn-session.sh -n 1 -tm 2048 -s 4. This cluster
> won’t be used though, because you’re executing your job with ./bin/flink
> run -m yarn-cluster which will start another YARN application which is
> only alive as long as the Flink job is executed. If you want to run your
> job on the long running YARN application, then you simply have to omit -m
> yarn-cluster.
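To make the two submission modes concrete, here is a hedged command sketch (flag values taken from the commands quoted in this thread; paths assume a standard Flink distribution, and the exact flags may differ between Flink versions):

```shell
# Mode 1: long-running YARN session, then submit jobs to it.
# yarn-session.sh registers the session so a later `flink run`
# (WITHOUT -m yarn-cluster) picks it up automatically.
./bin/yarn-session.sh -n 1 -tm 2048 -s 4
./bin/flink run -c mypackage.WordCountExample ../flinklink.jar

# Mode 2: per-job cluster; a fresh YARN application is started and
# torn down when the job finishes (logs become retrievable afterwards).
./bin/flink run -m yarn-cluster -yn 1 -yjm 1024 -ytm 1024 -ys 4 \
    -c mypackage.WordCountExample ../flinklink.jar
```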
>
> Cheers,
> Till
>
> On Mon, Jan 4, 2016 at 12:36 PM, Ana M. Martinez <a...@cs.aau.dk> wrote:
>
>> Hi Till,
>>
>> Sorry for the delay (Xmas break). I have activated log aggregation
>> on flink-conf.yaml with yarn.log-aggregation-enable: true (as I can’t find
>> a yarn-site.xml).
>> But the command yarn logs -applicationId application_1451903796996_0008
>> gives me the following output:
>>
>> INFO client.RMProxy: Connecting to ResourceManager at xxx
>> /var/log/hadoop-yarn/apps/hadoop/logs/application_1451903796996_0008 does not exist.
>> Log aggregation has not completed or is not enabled
>>
>>
>> I’ve tried to restart the Flink JobManager and TaskManagers as follows:
>> ./bin/yarn-session.sh -n 1 -tm 2048 -s 4
>> and then with a detached screen, run my application with ./bin/flink run
>> -m yarn-cluster ...
>>
>> I am not sure whether my problem is that I am not setting the
>> log-aggregation-enable property properly, or that I am not restarting the
>> Flink JobManager and TaskManagers as I should… Any idea?
>>
>> Thanks,
>> Ana
>>
>> On 18 Dec 2015, at 16:29, Till Rohrmann <trohrm...@apache.org> wrote:
>>
>> In which log file exactly are you looking for the logging statements? And
>> on which machine? You have to look on the machines on which the YARN
>> containers were started. Alternatively, if you have log aggregation
>> activated, then you can simply retrieve the log files via yarn logs.
>>
>> Cheers,
>> Till
>>
>> On Fri, Dec 18, 2015 at 3:49 PM, Ana M. Martinez <a...@cs.aau.dk> wrote:
>>
>>> Hi Till,
>>>
>>> Many thanks for your quick response.
>>>
>>> I have modified the WordCountExample to reproduce my problem in a
>>> simple example.
>>>
>>> I run the code below with the following command:
>>> ./bin/flink run -m yarn-cluster -yn 1 -ys 4 -yjm 1024 -ytm 1024 -c
>>> mypackage.WordCountExample ../flinklink.jar
>>>
>>> And if I check the log file I see all logger messages except the one in
>>> the flatMap function of the inner LineSplitter class, which is actually the
>>> one I am most interested in.
>>>
>>> Is that an expected behaviour?
>>>
>>> Thanks,
>>> Ana
>>>
>>> import org.apache.flink.api.common.functions.FlatMapFunction;
>>> import org.apache.flink.api.java.DataSet;
>>> import org.apache.flink.api.java.ExecutionEnvironment;
>>> import org.apache.flink.api.java.tuple.Tuple2;
>>> import org.apache.flink.util.Collector;
>>> import org.slf4j.Logger;
>>> import org.slf4j.LoggerFactory;
>>>
>>> import java.io.Serializable;
>>> import java.util.ArrayList;
>>> import java.util.List;
>>>
>>> public class WordCountExample {
>>>     static Logger logger = LoggerFactory.getLogger(WordCountExample.class);
>>>
>>>     public static void main(String[] args) throws Exception {
>>>         final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
>>>
>>>         logger.info("Entering application.");
>>>
>>>         DataSet<String> text = env.fromElements(
>>>                 "Who's there?",
>>>                 "I think I hear them. Stand, ho! Who's there?");
>>>
>>>         List<Integer> elements = new ArrayList<Integer>();
>>>         elements.add(0);
>>>
>>>         DataSet<TestClass> set = env.fromElements(new TestClass(elements));
>>>
>>>         DataSet<Tuple2<String, Integer>> wordCounts = text
>>>                 .flatMap(new LineSplitter())
>>>                 .withBroadcastSet(set, "set")
>>>                 .groupBy(0)
>>>                 .sum(1);
>>>
>>>         wordCounts.print();
>>>     }
>>>
>>>     public static class LineSplitter implements FlatMapFunction<String, Tuple2<String, Integer>> {
>>>
>>>         static Logger loggerLineSplitter = LoggerFactory.getLogger(LineSplitter.class);
>>>
>>>         @Override
>>>         public void flatMap(String line, Collector<Tuple2<String, Integer>> out) {
>>>             loggerLineSplitter.info("Logger in LineSplitter.flatMap");
>>>             for (String word : line.split(" ")) {
>>>                 out.collect(new Tuple2<String, Integer>(word, 1));
>>>             }
>>>         }
>>>     }
>>>
>>>     public static class TestClass implements Serializable {
>>>         private static final long serialVersionUID = -2932037991574118651L;
>>>
>>>         static Logger loggerTestClass = LoggerFactory.getLogger("WordCountExample.TestClass");
>>>
>>>         List<Integer> integerList;
>>>
>>>         public TestClass(List<Integer> integerList){
>>>             this.integerList = integerList;
>>>             loggerTestClass.info("Logger in TestClass");
>>>         }
>>>     }
>>> }
>>>
>>>
>>>
>>>
>>> On 17 Dec 2015, at 16:08, Till Rohrmann <trohrm...@apache.org> wrote:
>>>
>>> Hi Ana,
>>>
>>> you can simply modify the `log4j.properties` file in the `conf`
>>> directory. It should be automatically included in the Yarn application.
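For reference, a minimal log4j.properties sketch; the appender setup mirrors the style of the file shipped in Flink's conf directory, and the `mypackage` logger name is illustrative (use your own package):

```properties
log4j.rootLogger=INFO, file

log4j.appender.file=org.apache.log4j.FileAppender
log4j.appender.file.file=${log.file}
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{HH:mm:ss,SSS} %-5p %-60c %x - %m%n

# make sure user-code loggers are not filtered out by a higher level
log4j.logger.mypackage=INFO
```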
>>>
>>> Concerning your logging problem, it might be that you have set the
>>> logging level too high. Could you share the code with us?
>>>
>>> Cheers,
>>> Till
>>>
>>> On Thu, Dec 17, 2015 at 1:56 PM, Ana M. Martinez <a...@cs.aau.dk> wrote:
>>>
>>>> Hi flink community,
>>>>
>>>> I am trying to show log messages using log4j.
>>>> It works fine overall, except for the messages I want to show in an
>>>> inner class that implements
>>>> org.apache.flink.api.common.aggregators.ConvergenceCriterion.
>>>> I am very new to this, but it seems that I am having problems showing
>>>> the messages included in the isConverged function, perhaps because it
>>>> runs in the task managers?
>>>> E.g. the log messages in the outer class (before the map-reduce
>>>> operations) are shown properly.
>>>>
>>>> I am also interested in providing my own log4j.properties file. I am
>>>> using the ./bin/flink run -m yarn-cluster on Amazon clusters.
>>>>
>>>> Thanks,
>>>> Ana
>>>
>>>
>>>
>>>
>>
>>
>
>
