Hi Alex,
this is a bug in the `0.10` release. Is it possible for you to switch to
version `1.0-SNAPSHOT`? With this version, the error should no longer occur.
Cheers,
Till
On Tue, Jan 5, 2016 at 1:31 AM, Alex Rovner wrote:
> Hello Flinkers!
>
> The below program produces the following error wh
Hi Filip!
There are thoughts and efforts to extend Flink to push the result
statistics of Flink jobs to the YARN timeline server. That way, you can
explore completed jobs.
Since the whole web dashboard in Flink has a pure REST design, this is
quite a straightforward fix.
From the capac
Hi,
these are certainly valid use cases. As far as I know, the people who know most
in this area are on vacation right now. They should be back in a week, I think.
They should be able to give you a proper description of the current situation
and some pointers.
Cheers,
Aljoscha
> On 04 Jan 2016,
Hi Sebastian,
I’m afraid the people working on Flink don’t have much experience with
Cassandra. Maybe you could look into the Elasticsearch sink and adapt it to
write to Cassandra instead. That could be a valuable addition to Flink.
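A minimal sketch of what such a sink could look like, following the structure of the Elasticsearch sink (open a connection, write per record, close). The class name, the tuple element type, and the use of the DataStax Java driver are all assumptions for illustration, not anything that exists in Flink:

```scala
import org.apache.flink.configuration.Configuration
import org.apache.flink.streaming.api.functions.sink.RichSinkFunction
import com.datastax.driver.core.{Cluster, Session}

// Hypothetical Cassandra sink: connect in open(), write each record
// in invoke(), release resources in close().
class CassandraSink(host: String, insertQuery: String)
    extends RichSinkFunction[(String, Int)] {

  @transient private var cluster: Cluster = _
  @transient private var session: Session = _

  override def open(parameters: Configuration): Unit = {
    cluster = Cluster.builder().addContactPoint(host).build()
    session = cluster.connect()
  }

  override def invoke(value: (String, Int)): Unit = {
    // Bind the tuple fields to a parameterized INSERT statement.
    session.execute(insertQuery, value._1, java.lang.Integer.valueOf(value._2))
  }

  override def close(): Unit = {
    if (session != null) session.close()
    if (cluster != null) cluster.close()
  }
}
```

It could then be attached to a stream with `stream.addSink(new CassandraSink(...))`.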
Cheers,
Aljoscha
> On 22 Dec 2015, at 14:36, syepes wrote:
>
Hi,
I’m afraid this is not possible right now. I’m also not sure about the Evictors
as a whole. Using them makes window operations very slow because all elements
in a window have to be kept, i.e. window results cannot be pre-aggregated.
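To illustrate the contrast: without an evictor, a window with a reduce function can pre-aggregate incrementally, keeping only one partial result per key and window instead of every element. A small sketch, assuming the 0.10-style windowing API:

```scala
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.windowing.time.Time

val env = StreamExecutionEnvironment.getExecutionEnvironment

val counts = env
  .fromElements(("a", 1), ("b", 2), ("a", 3))
  .keyBy(_._1)
  .timeWindow(Time.minutes(1))
  // Incremental aggregation: only one running sum is stored per
  // key/window. Adding an evictor would force Flink to buffer all
  // elements of the window instead.
  .reduce((a, b) => (a._1, a._2 + b._2))
```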
Cheers,
Aljoscha
> On 15 Dec 2015, at 12:23, Radu Tudoran
Hi everyone,
Regarding Q1, I believe I have witnessed a comparable phenomenon in a (3-node,
non-EMR) YARN cluster. After shutting down the yarn session via `stop`, one
container seems to linger around. `yarn application -list` is empty, whereas
`bin/yarn-session.sh -q` lists the left-over conta
Hi!
Concerning (1): We have seen that a few times. The JVMs/threads sometimes
do not exit gracefully, and YARN is not always able
to kill the process (a YARN bug). I am currently working on a refactoring of
the YARN resource manager (to allow easy addition of other frameworks)
an
Hi Sebastian,
I've had preliminary success with a streaming job that is Kafka -> Flink ->
HBase (actually, Phoenix) using the Hadoop OutputFormat adapter. A little
glue was required, but it seems to work okay. My guess is it would be the
same for Cassandra. Maybe that can get you started?
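The "glue" might look roughly like this: map each record to an HBase `Put` and hand the pairs to Hadoop's `TableOutputFormat` through Flink's adapter. A rough sketch only; the table name, column family, and exact generics depend on your HBase version:

```scala
import org.apache.flink.api.java.tuple.{Tuple2 => JTuple2}
import org.apache.flink.api.java.hadoop.mapreduce.HadoopOutputFormat
import org.apache.flink.streaming.api.scala._
import org.apache.hadoop.hbase.client.{Mutation, Put}
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapreduce.TableOutputFormat
import org.apache.hadoop.hbase.util.Bytes
import org.apache.hadoop.mapreduce.Job

val env = StreamExecutionEnvironment.getExecutionEnvironment
val stream = env.fromElements(("row1", "value1"), ("row2", "value2"))

// Point the Hadoop TableOutputFormat at the target table.
val job = Job.getInstance()
job.getConfiguration.set(TableOutputFormat.OUTPUT_TABLE, "my_table")

// Glue: convert each record into (key, Put) pairs for HBase.
val puts = stream.map { value: (String, String) =>
  val put = new Put(Bytes.toBytes(value._1))
  put.add(Bytes.toBytes("cf"), Bytes.toBytes("col"), Bytes.toBytes(value._2))
  new JTuple2[ImmutableBytesWritable, Mutation](new ImmutableBytesWritable(), put)
}

// Write through Flink's Hadoop OutputFormat adapter.
puts.writeUsingOutputFormat(
  new HadoopOutputFormat(new TableOutputFormat[ImmutableBytesWritable], job))
</imports>
```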
Good luck
Hi,
Collecting results locally (e.g., for unit testing) is possible in the
DataSet API by using "LocalCollectionOutputFormat", as described in
the programming guide:
https://ci.apache.org/projects/flink/flink-docs-release-0.10/apis/programming_guide.html#collection-data-sources-and-sinks
Can some
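For reference, the `LocalCollectionOutputFormat` approach mentioned above looks roughly like this in the DataSet API (a minimal sketch; the values are made up):

```scala
import java.util.{ArrayList => JArrayList}
import org.apache.flink.api.scala._
import org.apache.flink.api.java.io.LocalCollectionOutputFormat

val env = ExecutionEnvironment.createLocalEnvironment()

// The output format fills a local collection instead of an external sink.
val result = new JArrayList[Int]()
env.fromElements(1, 2, 3)
  .map(_ * 2)
  .output(new LocalCollectionOutputFormat(result))

// The collection is populated once the job has run.
env.execute()
```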
Hi Filipe,
You can take a look at `DataStreamUtils.collect` in
flink-contrib/flink-streaming-contrib.
Best,
Gábor
2016-01-05 16:14 GMT+01:00 Filipe Correia :
> Hi,
>
> Collecting results locally (e.g., for unit testing) is possible in the
> DataSet API by using "LocalCollectionOutputFormat", a
Thanks Till for the info. I tried switching to 1.0-SNAPSHOT and now facing
another error:
Caused by: java.lang.NoSuchMethodError:
org.apache.flink.streaming.api.operators.StreamingRuntimeContext.isCheckpointingEnabled()Z
at
org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer.run(FlinkKa
I think the problem is that you only set the version of the Kafka connector
to 1.0-SNAPSHOT, not the version of the rest of the Flink dependencies.
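One way to keep them in sync in Maven is a shared version property, so every Flink artifact is bumped together (artifact IDs below are examples; use whichever Flink modules your project actually depends on):

```xml
<properties>
  <flink.version>1.0-SNAPSHOT</flink.version>
</properties>

<dependencies>
  <dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-streaming-java</artifactId>
    <version>${flink.version}</version>
  </dependency>
  <dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-kafka</artifactId>
    <version>${flink.version}</version>
  </dependency>
</dependencies>
```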
On Tue, Jan 5, 2016 at 6:18 PM, Alex Rovner wrote:
> Thanks Till for the info. I tried switching to 1.0-SNAPSHOT and now facing
> another error:
>
> Caused by:
Hi Gábor, Thanks!
I'm using Scala though. DataStreamUtils.collect() depends on
org.apache.flink.streaming.api.datastream.DataStream, rather than
org.apache.flink.streaming.api.scala.DataStream. Any suggestion on how
to handle this, other than creating my own scala implementation of
DataStreamUtils
Try the getJavaStream method of the scala DataStream.
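Putting the two suggestions together, something like this should work from Scala (a sketch; the stream contents are made up):

```scala
import scala.collection.JavaConverters._
import org.apache.flink.contrib.streaming.DataStreamUtils
import org.apache.flink.streaming.api.scala._

val env = StreamExecutionEnvironment.getExecutionEnvironment
val stream = env.fromElements(1, 2, 3).map(_ + 1)

// collect() expects the Java DataStream that backs the Scala wrapper,
// which getJavaStream exposes; the Java Iterator converts back via asScala.
val results = DataStreamUtils.collect(stream.getJavaStream).asScala.toList
```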
Best,
Gábor
2016-01-05 19:14 GMT+01:00 Filipe Correia :
> Hi Gábor, Thanks!
>
> I'm using Scala though. DataStreamUtils.collect() depends on
> org.apache.flink.streaming.api.datastream.DataStream, rather than
> org.apache.flink.streaming.ap
Hi Alex,
How recent is your Flink 1.0-SNAPSHOT build? Maybe the code on the (local)
cluster (in
/git/flink/flink-dist/target/flink-1.0-SNAPSHOT-bin/flink-1.0-SNAPSHOT/)
is not up to date?
I just tried it locally, and the job seems to execute:
./bin/flink run
/home/robert/Downloads/flink-poc/tar
Hi
I am seeing the following error when I am trying to run the jar in a Flink
cluster. I am not sure which dependency is missing.
/opt/DataHUB/flink-0.10.1/bin/flink run datahub-heka-1.0-SNAPSHOT.jar
flink.properties
java.lang.NoClassDefFoundError: scala/collection/GenTraversableOnce$class
Hi,
Thanks for answering me!
I'm happy to hear the problem will be addressed. :)
About question 2: flink-runtime uses Apache HttpClient 4.2.6, while the S3 file
system API implemented by Amazon uses 4.3.x. There are some API changes between
the versions, so a NoSuchMethodError occurs.
> On Jan 5, 2016, at 11:5