Sessions online for #flinkforward SF. Register with 25% discount code.

2018-02-16 Thread Fabian Hueske
Hi Flink Community,

On behalf of the data Artisans team, I’d like to announce that the sessions
for Flink Forward San Francisco are now online!

Check out the great lineup of speakers from companies such as American
Express, Comcast, Capital One, eBay, Google, Lyft, Netflix, Uber, Yelp, and
others. https://sf-2018.flink-forward.org/conference/

The conference on Tuesday, April 10 will kick off with keynotes from leaders
of the Flink community, followed by technical sessions with topics ranging
from production Flink use cases, to Apache Flink® internals, to the growth
of the Flink ecosystem. Also, on Monday, April 9, a full day of Standard and
Advanced Flink Training sessions will take place - more information is
coming soon on the website.

Registration is open, so sign up to claim your spot. We're offering a special
25% discount exclusively for members of the Flink mailing lists - please don't
tweet the code or share it outside the Flink mailing lists. :-) Use the
limited promotion code - MailingListFFSF - when registering for the
conference.


Hope to see you in San Francisco!


Fabian


[jira] [Created] (FLINK-8668) Remove "hadoop classpath" from config.sh

2018-02-16 Thread Aljoscha Krettek (JIRA)
Aljoscha Krettek created FLINK-8668:
---

 Summary: Remove "hadoop classpath" from config.sh
 Key: FLINK-8668
 URL: https://issues.apache.org/jira/browse/FLINK-8668
 Project: Flink
  Issue Type: New Feature
Reporter: Aljoscha Krettek


Automatically adding this when available can lead to dependency problems for 
some users, and there is no way of turning off this "feature". It was added to 
make using Flink on AWS/EMR and GCE a bit easier, but I think it's causing more 
harm than good.

If users want to augment the classpath they can always {{export 
HADOOP_CLASSPATH=...}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Why are checkpoint failures so serious?

2018-02-16 Thread Aljoscha Krettek
Hi,

I think there's currently no option for achieving this on Flink 1.4.x.

Best,
Aljoscha

> On 15. Feb 2018, at 18:11, Ron Crocker  wrote:
> 
> Thanks Till and Aljoscha. Are there good options for 1.4? I’d rather not fork 
> to get this, but I’ll do it if I have to.
> 
> Ron
> 
>> On Feb 14, 2018, at 2:43 AM, Aljoscha Krettek  wrote:
>> 
>> Hi Ron,
>> 
>> Keep in mind, though, that this feature will only be available with the 
>> upcoming Flink 1.5. Just making sure you don't go looking for this and are 
>> surprised if you don't find it.
>> 
>> Best,
>> Aljoscha
>> 
>> 
>>> On 14. Feb 2018, at 10:20, Till Rohrmann  wrote:
>>> 
>>> Hi Ron,
>>> 
>>> you should be able to turn off the Task failure in case of a checkpoint
>>> failure by setting `ExecutionConfig.setFailTaskOnCheckpointError(false)`.
>>> This setting should change the behavior such that checkpoint failures will
>>> simply fail the distributed checkpoint.
>>> 
>>> Cheers,
>>> Till
>>> 
>>> On Tue, Feb 13, 2018 at 11:41 PM, Ron Crocker  wrote:
>>> 
 What would it take to be a little more flexible in handling checkpoint
 failures?
 
 Right now I have a team that’s checkpointing into S3, via the
 FsStateBackend and an appropriate URL. Sometimes these checkpoints fail.
 They’re transient, though, and a retry would likely work.
 
 However, when they fail, their job exits and restarts from the last
 checkpoint. That’s fine, but I’d rather it tried again before failing, and
 even after failing just keep running and do another checkpoint. Maybe this
 is something that should be configurable - # of retries, failure strategy, 
 …
 
 Ron
>> 
> 



[jira] [Created] (FLINK-8669) Extend FutureUtils to have method to wait for the completion of all futures

2018-02-16 Thread Till Rohrmann (JIRA)
Till Rohrmann created FLINK-8669:


 Summary: Extend FutureUtils to have method to wait for the 
completion of all futures
 Key: FLINK-8669
 URL: https://issues.apache.org/jira/browse/FLINK-8669
 Project: Flink
  Issue Type: Improvement
  Components: Distributed Coordination
Affects Versions: 1.5.0
Reporter: Till Rohrmann
Assignee: Till Rohrmann
 Fix For: 1.5.0


For a proper non-blocking shut down, we need some additional future methods 
which allow waiting for the completion of a set of futures where each future 
can fail. Moreover, it would be helpful to have a method to schedule 
{{Runnables}} after the (potentially exceptional) completion of a future.
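A minimal sketch of what such a utility could look like, using only plain `java.util.concurrent` (hypothetical names, not Flink's actual {{FutureUtils}}):

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class WaitForAllSketch {
    // Hypothetical helper (not Flink's actual FutureUtils): completes only
    // after ALL given futures have finished, even if some of them failed.
    static CompletableFuture<Void> waitForAll(List<? extends CompletableFuture<?>> futures) {
        return CompletableFuture.allOf(
                futures.stream()
                        // swallow individual failures so allOf waits for everyone
                        .map(f -> f.handle((result, failure) -> null))
                        .toArray(CompletableFuture[]::new));
    }

    public static void main(String[] args) {
        CompletableFuture<String> ok = CompletableFuture.completedFuture("stopped");
        CompletableFuture<String> failed = new CompletableFuture<>();
        failed.completeExceptionally(new RuntimeException("shutdown error"));

        // schedule a Runnable after the (potentially exceptional) completion
        waitForAll(List.of(ok, failed))
                .whenComplete((ignored, failure) -> System.out.println("all components terminated"));
    }
}
```

Here `whenComplete` covers the second half of the ticket: running a follow-up action regardless of whether the future completed normally or exceptionally.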





[jira] [Created] (FLINK-8670) Make MetricRegistryImpl#shutdown non blocking

2018-02-16 Thread Till Rohrmann (JIRA)
Till Rohrmann created FLINK-8670:


 Summary: Make MetricRegistryImpl#shutdown non blocking 
 Key: FLINK-8670
 URL: https://issues.apache.org/jira/browse/FLINK-8670
 Project: Flink
  Issue Type: Improvement
  Components: Metrics
Affects Versions: 1.5.0
Reporter: Till Rohrmann
Assignee: Till Rohrmann
 Fix For: 1.5.0


In order to better shut down multiple components concurrently, we should make 
all shutdown operations non-blocking where possible. This also includes the 
{{MetricRegistryImpl}}.
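As an illustration of the pattern (a toy class, not the actual {{MetricRegistryImpl}}): a non-blocking shutdown triggers asynchronous cleanup and returns a termination future instead of blocking the caller:

```java
import java.util.concurrent.CompletableFuture;

public class RegistrySketch {
    // Toy sketch of a non-blocking shutdown: the method returns immediately
    // with a termination future; callers compose on it instead of blocking.
    private final CompletableFuture<Void> terminationFuture = new CompletableFuture<>();

    CompletableFuture<Void> shutdown() {
        CompletableFuture.runAsync(() -> {
            // ... close reporters, stop executors, release resources ...
            terminationFuture.complete(null);
        });
        return terminationFuture;
    }

    public static void main(String[] args) {
        CompletableFuture<Void> f = new RegistrySketch().shutdown();
        f.join(); // callers can still wait explicitly if they need to
        System.out.println("shut down complete: " + f.isDone());
    }
}
```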





[jira] [Created] (FLINK-8671) Split documented default value if it is too long

2018-02-16 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-8671:
---

 Summary: Split documented default value if it is too long
 Key: FLINK-8671
 URL: https://issues.apache.org/jira/browse/FLINK-8671
 Project: Flink
  Issue Type: Improvement
  Components: Configuration
Affects Versions: 1.5.0
Reporter: Chesnay Schepler
Assignee: Chesnay Schepler
 Fix For: 1.5.0


Default values that are a long continuous string mess up the configuration 
table, as can be seen for the parent-first patterns: 
[https://ci.apache.org/projects/flink/flink-docs-master/ops/config.html#classloader-parent-first-patterns]

The generator should split the stringified default value into chunks for proper 
display.
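A sketch of the splitting step the generator needs (hypothetical helper name, not the actual generator code):

```java
import java.util.ArrayList;
import java.util.List;

public class DefaultValueChunker {
    // Splits a long stringified default value into fixed-size pieces so the
    // HTML table can break the line between them.
    static List<String> chunk(String value, int maxChunkLength) {
        List<String> chunks = new ArrayList<>();
        for (int i = 0; i < value.length(); i += maxChunkLength) {
            chunks.add(value.substring(i, Math.min(i + maxChunkLength, value.length())));
        }
        return chunks;
    }

    public static void main(String[] args) {
        // a value in the style of classloader.parent-first-patterns
        System.out.println(chunk("java.;scala.;org.apache.flink.", 10));
    }
}
```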





[jira] [Created] (FLINK-8672) Support continuous processing in CSV table source

2018-02-16 Thread Aljoscha Krettek (JIRA)
Aljoscha Krettek created FLINK-8672:
---

 Summary: Support continuous processing in CSV table source
 Key: FLINK-8672
 URL: https://issues.apache.org/jira/browse/FLINK-8672
 Project: Flink
  Issue Type: New Feature
  Components: Table API & SQL
Reporter: Aljoscha Krettek
Assignee: Aljoscha Krettek
 Fix For: 1.5.0








[jira] [Created] (FLINK-8673) Don't let JobManagerRunner shut down itself

2018-02-16 Thread Till Rohrmann (JIRA)
Till Rohrmann created FLINK-8673:


 Summary: Don't let JobManagerRunner shut down itself
 Key: FLINK-8673
 URL: https://issues.apache.org/jira/browse/FLINK-8673
 Project: Flink
  Issue Type: Improvement
  Components: Distributed Coordination
Affects Versions: 1.5.0
Reporter: Till Rohrmann
Assignee: Till Rohrmann
 Fix For: 1.5.0


Currently, the {{JobManagerRunner}} is allowed to shut itself down upon job 
completion. This, however, can cause problems when the {{Dispatcher}} receives 
a request for a {{JobMaster}}: if the {{Dispatcher}} is not told about the shut 
down of the {{JobMaster}}, it might still try to send requests to it, which 
leads to timeouts.

It would be better to not let the {{JobManagerRunner}} shut itself down and to 
defer this to its owner (the {{Dispatcher}}). We can do this by listening on 
the {{JobManagerRunner#resultFuture}}, which is completed by the 
{{JobManagerRunner}} upon successful job completion or failure. That way we 
could also get rid of the {{OnCompletionActions}} and the 
{{FatalErrorHandler}}.
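The proposed ownership model could look roughly like this (hypothetical class names and plain {{CompletableFuture}}, not the actual Flink classes):

```java
import java.util.concurrent.CompletableFuture;

public class OwnershipSketch {
    // The runner only completes its result future; it never shuts itself down.
    static class Runner {
        final CompletableFuture<String> resultFuture = new CompletableFuture<>();

        void jobFinished(String result) {
            resultFuture.complete(result); // no self shut-down here
        }
    }

    public static void main(String[] args) {
        Runner runner = new Runner();
        // The owner (the Dispatcher in the ticket) listens on the result
        // future and performs the shut-down, whether the job succeeded or failed.
        runner.resultFuture.whenComplete((result, failure) ->
                System.out.println("owner releasing runner, result=" + result));
        runner.jobFinished("done");
    }
}
```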





[jira] [Created] (FLINK-8674) Efficiently handle alwaysFlush case (0ms flushTimeout)

2018-02-16 Thread Piotr Nowojski (JIRA)
Piotr Nowojski created FLINK-8674:
-

 Summary: Efficiently handle alwaysFlush case (0ms flushTimeout)
 Key: FLINK-8674
 URL: https://issues.apache.org/jira/browse/FLINK-8674
 Project: Flink
  Issue Type: Sub-task
Reporter: Piotr Nowojski
Assignee: Piotr Nowojski


Changes in data notifications introduced alongside BufferConsumer significantly 
degraded the performance of the 0ms flushTimeout.

Previously, flushing after writing a single record effectively triggered only 
one data notification, for the sub-partition/channel to which that record was 
written. With the low-latency improvements this changed, and a flush now 
triggers data notifications for all of the partitions. This can (and should) 
be easily fixed.





[jira] [Created] (FLINK-8675) Make shut down of RestServerEndpoint non blocking

2018-02-16 Thread Till Rohrmann (JIRA)
Till Rohrmann created FLINK-8675:


 Summary: Make shut down of RestServerEndpoint non blocking
 Key: FLINK-8675
 URL: https://issues.apache.org/jira/browse/FLINK-8675
 Project: Flink
  Issue Type: Improvement
  Components: REST
Affects Versions: 1.5.0
Reporter: Till Rohrmann
Assignee: Till Rohrmann
 Fix For: 1.5.0


In order to better shut down different components, it would be helpful if the 
{{RestServerEndpoint}} would not block when being shut down.





[jira] [Created] (FLINK-8676) Memory Leak in AbstractKeyedStateBackend.applyToAllKeys() when backend is base on RocksDB

2018-02-16 Thread Sihua Zhou (JIRA)
Sihua Zhou created FLINK-8676:
-

 Summary: Memory Leak in AbstractKeyedStateBackend.applyToAllKeys() 
when backend is base on RocksDB
 Key: FLINK-8676
 URL: https://issues.apache.org/jira/browse/FLINK-8676
 Project: Flink
  Issue Type: Bug
Reporter: Sihua Zhou
Assignee: Sihua Zhou


`AbstractKeyedStateBackend.applyToAllKeys()` uses the backend's 
`getKeys(stateName, namespace)` to get all keys that belong to `namespace`. But 
`RocksDBKeyedStateBackend.getKeys()` just returns an object that wraps a 
RocksDB iterator. That is dangerous, because RocksDB pins the resources that 
belong to the iterator in memory until `iterator.close()` is invoked, and 
close() is currently never invoked. This will eventually lead to a memory leak.
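The fix pattern, illustrated with a toy resource (not Flink's or RocksDB's actual classes): whoever wraps the iterator must guarantee that close() runs, e.g. via try-with-resources:

```java
public class IteratorLeakSketch {
    // Toy stand-in for a RocksDB iterator that pins native resources until closed.
    static class PinnedIterator implements AutoCloseable {
        static int openCount = 0; // stands in for pinned native memory

        PinnedIterator() { openCount++; }

        @Override
        public void close() { openCount--; } // releases the pinned resources
    }

    static void iterateAllKeys() {
        // The fix pattern: try-with-resources guarantees close() runs,
        // even if iteration throws.
        try (PinnedIterator it = new PinnedIterator()) {
            // ... iterate over keys ...
        }
    }

    public static void main(String[] args) {
        iterateAllKeys();
        System.out.println("pinned resources left: " + PinnedIterator.openCount);
    }
}
```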





[jira] [Created] (FLINK-8677) Make ClusterEntrypoint shut down non-blocking

2018-02-16 Thread Till Rohrmann (JIRA)
Till Rohrmann created FLINK-8677:


 Summary: Make ClusterEntrypoint shut down non-blocking
 Key: FLINK-8677
 URL: https://issues.apache.org/jira/browse/FLINK-8677
 Project: Flink
  Issue Type: Improvement
  Components: Distributed Coordination
Affects Versions: 1.5.0
Reporter: Till Rohrmann
Assignee: Till Rohrmann
 Fix For: 1.5.0


Make the {{ClusterEntrypoint}} shut-down method non-blocking. That way we 
don't have to use the common Fork-Join-Pool to shutDownAndTerminate the 
cluster entrypoint when the Dispatcher terminates.





[jira] [Created] (FLINK-8678) Make AkkaRpcService#stopService non-blocking

2018-02-16 Thread Till Rohrmann (JIRA)
Till Rohrmann created FLINK-8678:


 Summary: Make AkkaRpcService#stopService non-blocking
 Key: FLINK-8678
 URL: https://issues.apache.org/jira/browse/FLINK-8678
 Project: Flink
  Issue Type: Sub-task
  Components: Distributed Coordination
Affects Versions: 1.5.0
Reporter: Till Rohrmann
Assignee: Till Rohrmann
 Fix For: 1.5.0


In order to properly shut down the {{AkkaRpcService}} in a non-blocking 
fashion, we have to change the implementation of 
{{AkkaRpcService#stopService}}. This would enable a non-blocking shut down of 
the components owning the {{AkkaRpcService}}.





[jira] [Created] (FLINK-8679) RocksDBKeyedBackend.getKeys(stateName, namespace) doesn't filter data with namespace

2018-02-16 Thread Sihua Zhou (JIRA)
Sihua Zhou created FLINK-8679:
-

 Summary: RocksDBKeyedBackend.getKeys(stateName, namespace) doesn't 
filter data with namespace
 Key: FLINK-8679
 URL: https://issues.apache.org/jira/browse/FLINK-8679
 Project: Flink
  Issue Type: Bug
  Components: State Backends, Checkpointing
Reporter: Sihua Zhou
Assignee: Sihua Zhou


Currently, `RocksDBKeyedBackend.getKeys(stateName, namespace)` is odd: it 
doesn't use the namespace to filter data. `HeapKeyedBackend.getKeys(stateName, 
namespace)` does, so I think they should at least be consistent.
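What the missing filter amounts to, sketched with a toy key-to-namespace mapping (hypothetical data layout, not the backend's real data model):

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class GetKeysSketch {
    // Returns only the keys stored under the requested namespace, as the
    // heap backend already does.
    static List<String> getKeys(Map<String, String> keyToNamespace, String namespace) {
        return keyToNamespace.entrySet().stream()
                .filter(e -> e.getValue().equals(namespace)) // the missing filter
                .map(Map.Entry::getKey)
                .sorted()
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        Map<String, String> state = Map.of("a", "ns1", "b", "ns2", "c", "ns1");
        System.out.println(getKeys(state, "ns1")); // only the keys from ns1
    }
}
```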





[jira] [Created] (FLINK-8680) Name printing sinks by default.

2018-02-16 Thread Stephan Ewen (JIRA)
Stephan Ewen created FLINK-8680:
---

 Summary: Name printing sinks by default.
 Key: FLINK-8680
 URL: https://issues.apache.org/jira/browse/FLINK-8680
 Project: Flink
  Issue Type: Improvement
Affects Versions: 1.4.0
Reporter: Stephan Ewen
Assignee: Stephan Ewen
 Fix For: 1.5.0


The sinks that print to std. out and std. err show up as "Sink: Unnamed" in 
logs and the UI.

They should be named "Print to Std. Out" and "Print to Std. Err" by default.





[jira] [Created] (FLINK-8681) Remove planVisualizer.html move notice

2018-02-16 Thread Stephan Ewen (JIRA)
Stephan Ewen created FLINK-8681:
---

 Summary: Remove planVisualizer.html move notice
 Key: FLINK-8681
 URL: https://issues.apache.org/jira/browse/FLINK-8681
 Project: Flink
  Issue Type: Improvement
  Components: Build System
Affects Versions: 1.4.0
Reporter: Stephan Ewen
Assignee: Stephan Ewen
 Fix For: 1.5.0


The {{planVisualizer.html}} for optimizer plans is no longer in the Flink 
distribution, but we still ship a notice there that the visualizer has moved 
to the website.

That notice has been there for many versions (since Flink 1.0) and can be 
removed now.





[jira] [Created] (FLINK-8682) Make start/stop cluster scripts work without SSH for local HA setups

2018-02-16 Thread Stephan Ewen (JIRA)
Stephan Ewen created FLINK-8682:
---

 Summary: Make start/stop cluster scripts work without SSH for 
local HA setups
 Key: FLINK-8682
 URL: https://issues.apache.org/jira/browse/FLINK-8682
 Project: Flink
  Issue Type: Improvement
  Components: Startup Shell Scripts
Affects Versions: 1.4.0
Reporter: Stephan Ewen
Assignee: Stephan Ewen
 Fix For: 1.5.0


The startup scripts should work for a purely local (testing) cluster setup 
without SSH.

While the shell scripts handle this correctly for TaskManagers, they don't 
handle it correctly for JobManagers. As a consequence, {{start-cluster.sh}} 
does not work without SSH when high availability is enabled.






[jira] [Created] (FLINK-8683) Add test for configuration docs completeness

2018-02-16 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-8683:
---

 Summary: Add test for configuration docs completeness
 Key: FLINK-8683
 URL: https://issues.apache.org/jira/browse/FLINK-8683
 Project: Flink
  Issue Type: Improvement
  Components: Configuration, Documentation, Tests
Affects Versions: 1.5.0
Reporter: Chesnay Schepler
Assignee: Chesnay Schepler
 Fix For: 1.5.0


We should add a test to make sure the configuration docs stay up-to-date.





[jira] [Created] (FLINK-8684) Rework MesosTaskManagerParameters#MESOS_RM_TASKS_SLOTS

2018-02-16 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-8684:
---

 Summary: Rework MesosTaskManagerParameters#MESOS_RM_TASKS_SLOTS
 Key: FLINK-8684
 URL: https://issues.apache.org/jira/browse/FLINK-8684
 Project: Flink
  Issue Type: Improvement
  Components: Documentation, Mesos
Affects Versions: 1.5.0
Reporter: Chesnay Schepler
 Fix For: 1.5.0


Currently, {{MesosTaskManagerParameters#MESOS_RM_TASKS_SLOTS}} mimics 
{{TaskManagerOptions#NUM_TASK_SLOTS}}:
{code:java}
public static final ConfigOption<Integer> MESOS_RM_TASKS_SLOTS =
key(ConfigConstants.TASK_MANAGER_NUM_TASK_SLOTS)
.defaultValue(1);

public static final ConfigOption<Integer> NUM_TASK_SLOTS =
key("taskmanager.numberOfTaskSlots")
.defaultValue(1)
.withDescription("...");
{code}
This pattern is problematic as this creates 2 documentation entries for 
{{taskmanager.numberOfTaskSlots}} with different descriptions, and opens the 
potential for different defaults. Ultimately this causes the documentation to 
become ambiguous.

I thus propose to either outright remove this option or turn it into an actual 
alias:
{code:java}
public static final ConfigOption<Integer> MESOS_RM_TASKS_SLOTS =
TaskManagerOptions.NUM_TASK_SLOTS;
{code}
As a side-effect of FLINK-8683 we can ensure that no differing config options 
exist for a given key.





Multiple windows on a single stream

2018-02-16 Thread Carsten

Hello all,

for some of our sensor data we would like to aggregate data for 10sec, 
30sec, 1 min etc., thus conceptually having multiple windows on a single 
stream. Currently, I am simply duplicating the data stream (separate 
execution environments etc.) and processing each of the required windows. Is 
there a better way? I heard about cascading windows, but I am not sure if 
this approach exists, needs to be implemented from scratch, or how to use it.



Any link/hint/suggestion would be greatly appreciated.


Have a great day,

Carsten


[jira] [Created] (FLINK-8685) Code of method "processElement(Ljava/lang/Object;Lorg/apache/flink/streaming/api/functions/ProcessFunction$Context;Lorg/apache/flink/util/Collector;)V" of class "DataStre

2018-02-16 Thread Jahandar Musayev (JIRA)
Jahandar Musayev created FLINK-8685:
---

 Summary: Code of method 
"processElement(Ljava/lang/Object;Lorg/apache/flink/streaming/api/functions/ProcessFunction$Context;Lorg/apache/flink/util/Collector;)V"
 of class "DataStreamCalcRule$3069" grows beyond 64 KB
 Key: FLINK-8685
 URL: https://issues.apache.org/jira/browse/FLINK-8685
 Project: Flink
  Issue Type: Bug
  Components: DataStream API, Table API & SQL
 Environment: Fedora 27
Reporter: Jahandar Musayev


I want to use the DataStream API and the Table API & SQL to read data from 
Apache Kafka and transpose it using SQL. It throws the error below.

A version of this code for the DataSet API works fine.

 
{noformat}
org.apache.flink.api.common.InvalidProgramException: Table program cannot be 
compiled. This is a bug. Please file an issue.
    at org.apache.flink.table.codegen.Compiler$class.compile(Compiler.scala:36)
    at 
org.apache.flink.table.runtime.CRowProcessRunner.compile(CRowProcessRunner.scala:35)
    at 
org.apache.flink.table.runtime.CRowProcessRunner.open(CRowProcessRunner.scala:49)
    at 
org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:36)
    at 
org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.open(AbstractUdfStreamOperator.java:102)
    at 
org.apache.flink.streaming.api.operators.ProcessOperator.open(ProcessOperator.java:56)
    at 
org.apache.flink.streaming.runtime.tasks.StreamTask.openAllOperators(StreamTask.java:393)
    at 
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:254)
    at org.apache.flink.runtime.taskmanager.Task.run(Task.java:718)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.RuntimeException: Compiling "DataStreamCalcRule$3069": 
Code of method 
"processElement(Ljava/lang/Object;Lorg/apache/flink/streaming/api/functions/ProcessFunction$Context;Lorg/apache/flink/util/Collector;)V"
 of class "DataStreamCalcRule$3069" grows beyond 64 KB
    at org.codehaus.janino.UnitCompiler.compileUnit(UnitCompiler.java:361)
    at org.codehaus.janino.SimpleCompiler.cook(SimpleCompiler.java:234)
    at 
org.codehaus.janino.SimpleCompiler.compileToClassLoader(SimpleCompiler.java:446)
    at org.codehaus.janino.SimpleCompiler.cook(SimpleCompiler.java:213)
    at org.codehaus.janino.SimpleCompiler.cook(SimpleCompiler.java:204)
    at org.codehaus.commons.compiler.Cookable.cook(Cookable.java:80)
    at org.codehaus.commons.compiler.Cookable.cook(Cookable.java:75)
    at org.apache.flink.table.codegen.Compiler$class.compile(Compiler.scala:33)
    ... 9 more
Caused by: org.codehaus.janino.JaninoRuntimeException: Code of method 
"processElement(Ljava/lang/Object;Lorg/apache/flink/streaming/api/functions/ProcessFunction$Context;Lorg/apache/flink/util/Collector;)V"
 of class "DataStreamCalcRule$3069" grows beyond 64 KB
    at org.codehaus.janino.CodeContext.makeSpace(CodeContext.java:974)
    at org.codehaus.janino.CodeContext.write(CodeContext.java:867)
    at org.codehaus.janino.UnitCompiler.writeOpcode(UnitCompiler.java:11753)
    at org.codehaus.janino.UnitCompiler.writeLdc(UnitCompiler.java:10512)
    at org.codehaus.janino.UnitCompiler.pushConstant(UnitCompiler.java:10280)
    at org.codehaus.janino.UnitCompiler.compileGet2(UnitCompiler.java:5202)
    at org.codehaus.janino.UnitCompiler.access$8400(UnitCompiler.java:212)
    at 
org.codehaus.janino.UnitCompiler$12.visitIntegerLiteral(UnitCompiler.java:4073)
    at 
org.codehaus.janino.UnitCompiler$12.visitIntegerLiteral(UnitCompiler.java:4044)
    at org.codehaus.janino.Java$IntegerLiteral.accept(Java.java:5250)
    at org.codehaus.janino.UnitCompiler.compileGet(UnitCompiler.java:4044)
    at org.codehaus.janino.UnitCompiler.fakeCompile(UnitCompiler.java:3383)
    at org.codehaus.janino.UnitCompiler.compileGetValue(UnitCompiler.java:5218)
    at org.codehaus.janino.UnitCompiler.compileGet2(UnitCompiler.java:4813)
    at org.codehaus.janino.UnitCompiler.access$8200(UnitCompiler.java:212)
    at 
org.codehaus.janino.UnitCompiler$12.visitMethodInvocation(UnitCompiler.java:4071)
    at 
org.codehaus.janino.UnitCompiler$12.visitMethodInvocation(UnitCompiler.java:4044)
    at org.codehaus.janino.Java$MethodInvocation.accept(Java.java:4874)
    at org.codehaus.janino.UnitCompiler.compileGet(UnitCompiler.java:4044)
    at org.codehaus.janino.UnitCompiler.compileGetValue(UnitCompiler.java:5224)
    at org.codehaus.janino.UnitCompiler.compile2(UnitCompiler.java:3445)
    at org.codehaus.janino.UnitCompiler.access$5000(UnitCompiler.java:212)
    at 
org.codehaus.janino.UnitCompiler$9.visitMethodInvocation(UnitCompiler.java:3424)
    at 
org.codehaus.janino.UnitCompiler$9.visitMethodInvocation(UnitCompiler.java:3396)
    at org.codehaus.janino.Java$MethodInvocation.accept(Java.java:4874)
    at org.codehaus.janino.UnitCompiler.compile(UnitCompiler.java:3396)
    at org.co