Gytis Zilinskas created FLINK-8259:
--
Summary: Release flink 1.4 docker image on dockerhub
Key: FLINK-8259
URL: https://issues.apache.org/jira/browse/FLINK-8259
Project: Flink
Issue Type: Improvement
Hi Jan!
One could implement the RocksDB ListState as you suggested.
We did it the current way because that pattern is actually quite efficient
if your list fits into memory: the list append is constant-time, and the
values are only concatenated the first time the list is accessed.
Especially for typical win
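For readers following the thread, here is a minimal sketch of the append/read
pattern described above (the Event type, its isEndOfBatch() flag, and the fact
that the function runs on a keyed stream are assumptions for illustration, not
part of this discussion):

    import java.util.ArrayList;
    import java.util.List;

    import org.apache.flink.api.common.functions.RichFlatMapFunction;
    import org.apache.flink.api.common.state.ListState;
    import org.apache.flink.api.common.state.ListStateDescriptor;
    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.util.Collector;

    // Sketch: add() is a cheap per-element append; the stored values are only
    // brought together when get() is called.
    public class BufferingFunction extends RichFlatMapFunction<Event, List<Event>> {

        private transient ListState<Event> buffer;

        @Override
        public void open(Configuration parameters) {
            buffer = getRuntimeContext().getListState(
                    new ListStateDescriptor<>("buffer", Event.class));
        }

        @Override
        public void flatMap(Event value, Collector<List<Event>> out) throws Exception {
            buffer.add(value);                  // constant-time append
            if (value.isEndOfBatch()) {         // hypothetical flush condition
                List<Event> all = new ArrayList<>();
                for (Event e : buffer.get()) {  // values are concatenated on access
                    all.add(e);
                }
                out.collect(all);
                buffer.clear();
            }
        }
    }

It would be applied on a keyed stream, e.g.
events.keyBy(...).flatMap(new BufferingFunction()).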
Hi!
This may be due to the changed classloading semantics.
Just to verify this, can you check whether it gets solved by setting
"classloader.resolve-order: parent-first" in the Flink configuration?
By default, Flink 1.4 now uses inverted classloading to allow users to use
their own copies of dependencies that Flink itself also uses.
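For reference, a minimal sketch of where that setting goes (child-first is the
1.4 default; this entry reverts to the pre-1.4 behaviour):

    # flink-conf.yaml
    classloader.resolve-order: parent-first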
Can you give a bit more context?
When does the error occur?
Does it happen in the client (i.e., when the query is optimized and the
plan is generated) or in the JobManager or TaskManager when the plan is
submitted to the cluster?
Are you trying to start from a savepoint?
Thank you,
Fabian
Fabian Hueske created FLINK-8260:
Summary: Document API of Kafka 0.11 Producer
Key: FLINK-8260
URL: https://issues.apache.org/jira/browse/FLINK-8260
Project: Flink
Issue Type: Improvement
Hi,
I tried to reproduce the problem given your description, Ryan. I submitted
the test job to a vanilla Flink 1.4.0 cluster (the Hadoop-free version
downloaded from flink.apache.org, the Hadoop 2.7 version downloaded from
flink.apache.org, and a cluster built from sources). However, I was not able
to reproduce it
Stephan Ewen created FLINK-8261:
---
Summary: Typos in the shading exclusion for jsr305 in the
quickstarts
Key: FLINK-8261
URL: https://issues.apache.org/jira/browse/FLINK-8261
Project: Flink
Iss
Till Rohrmann created FLINK-8262:
Summary:
IndividualRestartsConcurrencyTest.testLocalFailureFailsPendingCheckpoints fails
on Travis
Key: FLINK-8262
URL: https://issues.apache.org/jira/browse/FLINK-8262
@Shivam and @Ryan:
My first feeling would be the following: you have the Scala library in your
user code, and thus, through the reversed classloading, the Scala function
types get duplicated.
The right way to fix that is to make sure you build a proper jar file that
does not bundle any provided dependencies.
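To illustrate that packaging advice, a minimal Maven sketch (artifact names and
version numbers are illustrative, not from this thread): anything the cluster
already provides, including the Scala library, gets provided scope so it stays
out of the fat jar.

    <!-- user pom.xml: keep cluster-provided dependencies out of the fat jar -->
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-streaming-scala_2.11</artifactId>
        <version>1.4.0</version>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.scala-lang</groupId>
        <artifactId>scala-library</artifactId>
        <version>2.11.12</version> <!-- illustrative -->
        <scope>provided</scope>
    </dependency>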
Stephan Ewen created FLINK-8263:
---
Summary: Wrong packaging of flink-core in scala quickstart
Key: FLINK-8263
URL: https://issues.apache.org/jira/browse/FLINK-8263
Project: Flink
Issue Type: Bug
Stephan Ewen created FLINK-8264:
---
Summary: Add Scala to the parent-first loading patterns
Key: FLINK-8264
URL: https://issues.apache.org/jira/browse/FLINK-8264
Project: Flink
Issue Type: Improvement
Hi Stephan,
Thanks for your help. Basically, reverting the classloading to parent-first
*resolved this issue*. Thanks for this, but I have one question:
I am building a fat jar without marking any dependency as provided. In my
case I am using protobuf-java version 3.4.0, but I think Flink uses a pretty
old version
Hi Stephan,
yes, definitely. I have put together a POC implementation that seems to
work for my use case (not yet tested for performance, though). I have
opened a PR, just for discussion of the topic, here:
https://github.com/datadrivencz/flink/pull/1/
I know that the PR doesn't follow
Eron Wright created FLINK-8265:
---
Summary: Missing jackson dependency for flink-mesos
Key: FLINK-8265
URL: https://issues.apache.org/jira/browse/FLINK-8265
Project: Flink
Issue Type: Bug