> What is the error you currently observe? It might help to clear
> org.apache.flink in the ivy cache once in a while.
>
> Cheers,
> Till
>
> On Wed, Feb 24, 2016 at 6:09 PM, Cory Monty
> wrote:
>
>> We're still seeing this issue in the latest SNAPSHOT version. Do
Just wanted to let you know about the error we're
> seeing with the snapshot version.
>
> Thanks!
>
> —Dan
>
> On Fri, Feb 12, 2016 at 8:41 AM, Cory Monty
> wrote:
>
>> Thanks, Stephan.
>>
>> Everything is back to normal for us.
>>
>> Cheers,
>>> Previous snapshot:
>>> https://repository.apache.org/content/repositories/snapshots/org/apache/flink/flink-test-utils_2.11/1.0-SNAPSHOT/flink-test-utils_2.11-1.0-20160211.162913-286.pom
>>>
>>> Latest Snapshot:
>>> https://repository.apache.o
should really solve this version conflict pain.
> If we are fast tomorrow, there may be a nice surprise coming up in the
> next days...
>
> Greetings,
> Stephan
>
>
> On Thu, Feb 11, 2016 at 10:24 PM, Cory Monty
> wrote:
>
>> Hmm. We don't explicitly incl
> "flink-core" and "flink-annotations" should not have Scala suffixes,
> because they do not depend on Scala.
>
> So far, we mark the Scala-independent projects without suffixes. Is that
> very confusing, or does that interfere with build tools?
>
> Greetings,
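In sbt terms, the convention above means using plain "%" for the
Scala-independent artifacts and "%%" (which appends the _2.11 suffix) for the
Scala-dependent ones. A minimal build.sbt sketch; the version string is
illustrative:

    libraryDependencies ++= Seq(
      // Scala-independent modules carry no suffix, so use plain "%"
      "org.apache.flink" % "flink-core" % "1.0-SNAPSHOT",
      "org.apache.flink" % "flink-annotations" % "1.0-SNAPSHOT",
      // Scala-dependent modules are suffixed (_2.11), so use "%%"
      "org.apache.flink" %% "flink-test-utils" % "1.0-SNAPSHOT" % "test"
    )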
As of this afternoon, SBT is running into issues compiling with the
following error:
[error] Modules were resolved with conflicting cross-version suffixes in
[error]    org.scalatest:scalatest _2.10, _2.11
[error]    org.apache.flink:flink-core _2.11,
[error]    org.apache.flink:flink-annotations
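While the artifacts are mixed, one user-side workaround for a clash like the
scalatest one above is to exclude the wrongly-suffixed transitive dependency.
A hedged sbt sketch, not a confirmed fix; version and configuration are
illustrative:

    // drop the transitively pulled scalatest_2.10 and keep our own _2.11 one
    libraryDependencies +=
      ("org.apache.flink" %% "flink-test-utils" % "1.0-SNAPSHOT" % "test")
        .exclude("org.scalatest", "scalatest_2.10")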
solution that I could
>> suggest is to do it the way we have it for the batch api, namely having a
>> scala version for DataStreamUtils too. It might be placed under
>> flink-contrib for the time being.
>>
>> Would that solution fit your needs?
>>
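A hedged sketch of what that could look like: a small Scala facade in
flink-contrib that unwraps the Scala DataStream and delegates to the Java
utility. The object name and package are illustrative, and it assumes code
living under the org.apache.flink package can still reach the stream's
internal Java stream:

    package org.apache.flink.contrib.streaming.scala

    import scala.collection.JavaConverters._
    import org.apache.flink.contrib.streaming.{DataStreamUtils => JUtils}
    import org.apache.flink.streaming.api.scala.DataStream

    object ScalaDataStreamUtils {
      // collect a Scala DataStream into a local iterator by delegating
      // to the Java DataStreamUtils on the unwrapped Java stream
      def collect[T](stream: DataStream[T]): Iterator[T] =
        JUtils.collect(stream.javaStream).asScala
    }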
Hey there,
We were using DataStreamUtils.collect in Scala for automated testing, which
only works because of the `DataStream.getJavaStream` accessor in the Scala
version of `DataStream`. However, a recent commit (
https://github.com/apache/flink/commit/086acf681f01f2da530c04289e0682c56f98a378)
removed
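For context, a minimal sketch of the testing pattern described above, written
against the pre-commit API; the environment setup and element values are
illustrative:

    import scala.collection.JavaConverters._
    import org.apache.flink.streaming.api.scala._
    import org.apache.flink.contrib.streaming.DataStreamUtils

    object CollectSketch {
      def main(args: Array[String]): Unit = {
        val env = StreamExecutionEnvironment.getExecutionEnvironment
        val stream: DataStream[Int] = env.fromElements(1, 2, 3)
        // DataStreamUtils.collect takes the Java stream, so the Scala test
        // unwrapped it via the (now removed) getJavaStream accessor
        val results = DataStreamUtils.collect(stream.getJavaStream).asScala.toList
        assert(results.sorted == List(1, 2, 3))
      }
    }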
> [1] https://github.com/apache/flink/pull/1458
> [2]
> https://ci.apache.org/projects/flink/flink-docs-master/setup/building.html
>
> > On 15 Dec 2015, at 00:28, Cory Monty
> wrote:
> >
> > Ufuk,
> >
> > I'm a colleague of Brian. Unfortunately, we
Ufuk,
I'm a colleague of Brian. Unfortunately, we are not running YARN so I don't
think that PR applies to us. We're trying to run a standalone cluster.
Cheers,
Cory
On Mon, Dec 14, 2015 at 5:23 PM, Ufuk Celebi wrote:
> This has been recently added to the YARN client by Robert [1]:
> https://
Stephan Ewen wrote:
> >>>
> >>> Flink's own asm is 5.0, but the Kryo version used in Flink bundles
> >>> reflectasm with a dedicated asm version 4 (no lambdas supported).
> >>>
> >>> Might be as simple as bumping the kryo
the issue. Scala should run independently of the
> > Java version. We are already using ASM version 5.0.4. However, some
> > code uses the ASM4 op codes, which don't seem to work with Java 8.
> > This needs to be fixed. I'm filing a JIRA.
> >
> > Cheers,
Is it possible to use Scala 2.11 and Java 8?
I'm able to get our project to compile correctly; however, there are runtime
errors from the Reflectasm library (I'm guessing due to Kryo). I looked
into the error and it seems Spark had the same issue (
https://issues.apache.org/jira/browse/SPARK-6152,
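While the proper fix belongs in Flink itself, a hedged user-side workaround
sketch is to force a newer Kryo in sbt, since Kryo 3.x bundles a reflectasm
built on ASM 5, which understands Java 8 class files. The version is an
assumption, and whether Flink runs cleanly against Kryo 3 is untested here:

    // override the transitive Kryo that ships the ASM4-based reflectasm;
    // note that Kryo 3.x changed its groupId to "com.esotericsoftware"
    dependencyOverrides += "com.esotericsoftware" % "kryo" % "3.0.3"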
to the lib directory of Flink, and start your
> program with the RemoteExecutor, without a jar attachment. Then it only
> needs to communicate to the actor system (RPC) port, which is not random in
> standalone mode (6123 by default).
>
> Stephan
>
>
>
>
> On Tue, Nov
I'm also running into an issue with a non-YARN cluster. When submitting a
JAR to Flink, we need an arbitrary port open on all of the
hosts, and we don't know which one until the socket attempts to bind; a bit
of a problem for us.
Are there ways to submit a JAR to Flink that bypasses the n
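Following Stephan's suggestion above, a minimal sketch of submitting without a
jar attachment: the job's classes already sit in Flink's lib/ directory, so the
client only needs to reach the fixed RPC port. The host name and job body are
illustrative:

    import org.apache.flink.api.scala._

    object RemoteJob {
      def main(args: Array[String]): Unit = {
        // connect to the standalone JobManager's RPC port (6123 by default);
        // no jar files attached, since the code is on the cluster's classpath
        val env = ExecutionEnvironment.createRemoteEnvironment("jobmanager-host", 6123)
        env.fromElements(1, 2, 3).map(_ * 2).print()
      }
    }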