Hi.
I need to compute a Euclidean distance between an input vector and a full
dataset stored in Cassandra and keep the n lowest values. The Cassandra
dataset is evolving (mutable). I could do this in a batch job, but I would
have to trigger it each time and the inputs are more like a slow stream, but
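For the "keep the n lowest" part, a minimal sketch of the core computation (independent of how the Cassandra data is read and of the streaming setup; class and method names are illustrative) could look like this:

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

// Sketch: keep the n points closest to a query vector.
public class NearestN {

    static double euclidean(double[] a, double[] b) {
        double sum = 0.0;
        for (int i = 0; i < a.length; i++) {
            double d = a[i] - b[i];
            sum += d * d;
        }
        return Math.sqrt(sum);
    }

    // Max-heap of size n ordered by distance: the head is the worst of the current best n.
    static List<double[]> nearest(double[] query, Iterable<double[]> dataset, int n) {
        PriorityQueue<double[]> best = new PriorityQueue<>(
                Comparator.comparingDouble((double[] p) -> euclidean(query, p)).reversed());
        for (double[] point : dataset) {
            best.offer(point);
            if (best.size() > n) {
                best.poll(); // drop the farthest of the current candidates
            }
        }
        return new ArrayList<>(best);
    }
}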
Any thoughts on where to start with this error would be appreciated.
Caused by: java.lang.IllegalStateException: Could not find previous
entry with key: first event, value:
{"DEVICE_ID":f8a395a0-d3e2-11e8-b050-9779854d8172,"TIME_STAMP":11/15/2018 02:29:30.343 am,"TEMPERATURE":0.0,"HUMIDITY":"0.0"
Hi Devin,
Thanks for the pointer, and it works!
But I have no permission to change the YARN conf in the production environment
by myself, and it would need a detailed
investigation by the Hadoop team to apply the new conf, so I'm still interested
in the difference between keeping and
not keeping c
Hello,
It sounds like a Surefire problem with the latest Java:
https://issues.apache.org/jira/browse/SUREFIRE-1588
Alexey
From: Radu Tudoran
Sent: Wednesday, November 14, 2018 6:47 AM
To: user
Subject: flink build error
Hi,
I am trying to build flink 1.6 but cannot b
Hi,
I am trying to build Flink 1.6 but cannot get the build to also run the tests.
Any ideas why the Surefire error occurs when running the JUnit tests?
[ERROR] Failed to execute goal
org.apache.maven.plugins:maven-surefire-plugin:2.18.1:test (default-test) on
project flink-test-utils-junit: ExecutionException:
Ok, thanks.
On Wed, Nov 14, 2018, 01:22 Chesnay Schepler wrote:
> This is intended. Increasing the scala version basically broke the
> scala-shell and we haven't had the time to fix it. It is thus only
> available with scala 2.11. I agree that the error message could be better
> though.
>
>
> On
Hi yelun,
Currently, there is no direct way to dynamically load data and do a join in
Flink SQL; as a workaround you can implement your logic with a UDTF. In
the UDTF, you can load the data into a cache and update it according to
your requirements.
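As a rough illustration of that workaround (not an official pattern; loadFromExternalStore() and the refresh interval are placeholders you would replace with your own logic):

import java.util.HashMap;
import java.util.Map;

import org.apache.flink.table.functions.FunctionContext;
import org.apache.flink.table.functions.TableFunction;

// Sketch of a UDTF that keeps a refreshable in-memory cache of side data.
public class CachedLookupUdtf extends TableFunction<String> {

    private static final long REFRESH_INTERVAL_MS = 60_000; // illustrative

    private transient Map<String, String> cache;
    private transient long lastRefresh;

    @Override
    public void open(FunctionContext context) throws Exception {
        cache = loadFromExternalStore();
        lastRefresh = System.currentTimeMillis();
    }

    // Called per input row; emits zero or one row with the looked-up value.
    public void eval(String key) {
        long now = System.currentTimeMillis();
        if (now - lastRefresh > REFRESH_INTERVAL_MS) {
            cache = loadFromExternalStore(); // update according to your requirements
            lastRefresh = now;
        }
        String value = cache.get(key);
        if (value != null) {
            collect(value);
        }
    }

    private Map<String, String> loadFromExternalStore() {
        // placeholder: query your database / file / service here
        return new HashMap<>();
    }
}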
Best, Hequn
On Wed, Nov 14, 2018 at 10:34 AM ye
I tried to create a sample project, but couldn't reproduce the error! It
was working fine.
Turns out I was using the wrong Tuple2 package in my client code :(
After fixing it, the code worked fine.
Thanks Till and Jiayi for your help!
Jayant Ameta
On Tue, Nov 13, 2018 at 4:01 PM Till Rohrmann wrot
There is a data stream of some records; let's call them "input records".
Now, I want to partition this data stream by using keyBy(). I want the
partitioning to be based on one or more fields of the "input record", but the
number and types of the fields are not fixed.
So, kindly tell me how I should achieve this partit
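One possible approach, sketched below under the assumption that the record type can expose its fields by name (InputRecord and getField() are illustrative, not an existing API), is to build a composite key in a KeySelector from a configurable list of field names:

import java.util.List;
import java.util.Map;

import org.apache.flink.api.java.functions.KeySelector;

// Illustrative record type: fields are addressed by name via a Map.
class InputRecord {
    private final Map<String, Object> fields;
    InputRecord(Map<String, Object> fields) { this.fields = fields; }
    Object getField(String name) { return fields.get(name); }
}

// Sketch: build a composite key from a configurable list of field names,
// so the number and types of key fields can be decided at runtime.
public class DynamicKeySelector implements KeySelector<InputRecord, String> {

    private final List<String> keyFields;

    public DynamicKeySelector(List<String> keyFields) {
        this.keyFields = keyFields;
    }

    @Override
    public String getKey(InputRecord record) {
        StringBuilder key = new StringBuilder();
        for (String field : keyFields) {
            key.append(String.valueOf(record.getField(field))).append('|');
        }
        return key.toString();
    }
}

// usage (illustrative):
// stream.keyBy(new DynamicKeySelector(Arrays.asList("deviceId", "region")))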
Thanks Chesnay, but if users want to use connectors in the scala shell, they
have to download them.
On Wed, Nov 14, 2018 at 5:22 PM Chesnay Schepler wrote:
> Connectors are never contained in binary releases as they are supposed to
> be packaged into the user-jar.
>
> On 14.11.2018 10:12, Jeff Zhang wro
Hi Ufuk,
Thanks for your reply!
I'm afraid that my case is different. Since the Flink on YARN application has
not exited, we do not
have an application exit code yet (but the job status is determined).
Best,
Paul Lam
> On 14 Nov 2018, at 16:49, Ufuk Celebi wrote:
>
> Hey Paul,
>
> It might be relat
Connectors are never contained in binary releases as they are supposed to
be packaged into the user-jar.
On 14.11.2018 10:12, Jeff Zhang wrote:
I don't see the jars of the Flink connectors in the binary release of
Flink 1.6.1, so I just want to confirm whether the Flink binary release
includes these con
This is intended. Increasing the scala version basically broke the
scala-shell and we haven't had the time to fix it. It is thus only
available with scala 2.11. I agree that the error message could be
better though.
On 14.11.2018 03:44, Hao Sun wrote:
I do not see flink-scala-shell jar under f
The documentation you linked only applies if legacy mode is enabled.
The URL you used initially (i.e.
"/jobs/:jobid/savepoints/:triggerid") is correct. My guess is
that either the JobID or triggerID is not correct.
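For what it's worth, a minimal sketch of checking a trigger id against that endpoint (the jobmanager address, job id and trigger id below are placeholders):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

// Sketch: poll the savepoint trigger status via the REST API.
public class SavepointStatusCheck {
    public static void main(String[] args) throws Exception {
        String jobId = "<jobid>";
        String triggerId = "<triggerid>"; // returned when the savepoint was triggered
        URL url = new URL("http://localhost:8081/jobs/" + jobId + "/savepoints/" + triggerId);

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line); // JSON describing the savepoint status
            }
        }
    }
}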
On 13.11.2018 17:24, PedroMrChaves wrote:
Hello,
I am trying to ge
Did you have the WebUI open from a previous execution? If so, then the UI
might still be requesting jobs from the previous execution.
On 13.11.2018 08:01, 徐涛 wrote:
Hi Experts,
When I start the Flink program locally, I found that the following
exception is thrown. I do not know why it happens because i
I don't see the jars of the Flink connectors in the binary release of Flink
1.6.1, so I just want to confirm whether the Flink binary release includes
these connectors. Thanks
--
Best Regards
Jeff Zhang
Hi Henry,
Flushing of buffered data in sinks should occur on two occasions: 1) when some
buffer size limit is reached or a fixed flush interval fires, and 2) on
checkpoints.
Flushing any pending data before completing a checkpoint ensures the sink has
at-least-once guarantees, so that shou
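As a rough sketch of that pattern (sendToExternalSystem() is a placeholder for the actual client call, and the buffer limit is illustrative), a sink can couple its flush to checkpoints via CheckpointedFunction:

import java.util.ArrayList;
import java.util.List;

import org.apache.flink.runtime.state.FunctionInitializationContext;
import org.apache.flink.runtime.state.FunctionSnapshotContext;
import org.apache.flink.streaming.api.checkpoint.CheckpointedFunction;
import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;

// Sketch of the flush-on-checkpoint pattern: buffered records are written out
// before the checkpoint completes, which gives at-least-once behaviour.
public class BufferingSink extends RichSinkFunction<String> implements CheckpointedFunction {

    private static final int MAX_BUFFER_SIZE = 1000; // illustrative size limit

    private transient List<String> buffer;

    @Override
    public void initializeState(FunctionInitializationContext context) {
        buffer = new ArrayList<>();
        // a production sink would also restore any state snapshotted earlier
    }

    @Override
    public void invoke(String value) {
        buffer.add(value);
        if (buffer.size() >= MAX_BUFFER_SIZE) {
            flush(); // occasion 1: buffer size limit reached
        }
    }

    @Override
    public void snapshotState(FunctionSnapshotContext context) {
        flush(); // occasion 2: flush pending data before the checkpoint completes
    }

    private void flush() {
        for (String record : buffer) {
            sendToExternalSystem(record);
        }
        buffer.clear();
    }

    private void sendToExternalSystem(String record) {
        // placeholder for the actual write to the external system
    }
}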
Hi,
Have you taken a look yet at this [1]? That is an example of writing a stream
to HBase.
Cheers,
Gordon
[1]
https://github.com/apache/flink/blob/master/flink-connectors/flink-hbase/src/test/java/org/apache/flink/addons/hbase/example/HBaseWriteStreamExample.java
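Along the lines of that example, a minimal sketch of a sink that writes each element as an HBase row (table, column family and qualifier names are illustrative):

import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

// Sketch: write each stream element as a row into an HBase table.
public class HBaseSink extends RichSinkFunction<String> {

    private transient Connection connection;
    private transient Table table;

    @Override
    public void open(Configuration parameters) throws Exception {
        connection = ConnectionFactory.createConnection(HBaseConfiguration.create());
        table = connection.getTable(TableName.valueOf("events"));
    }

    @Override
    public void invoke(String value) throws Exception {
        // row key derived from the current time, purely for illustration
        Put put = new Put(Bytes.toBytes(String.valueOf(System.currentTimeMillis())));
        put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("payload"), Bytes.toBytes(value));
        table.put(put);
    }

    @Override
    public void close() throws Exception {
        if (table != null) table.close();
        if (connection != null) connection.close();
    }
}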
On 11 November 2018 at 5:45
Hey Paul,
It might be related to this: https://github.com/apache/flink/pull/7004 (see
linked issue for details).
Best,
Ufuk
> On Nov 14, 2018, at 09:46, Paul Lam wrote:
>
> Hi Gary,
>
> Thanks for your reply and sorry for the delay. The attachment is the
> jobmanager logs after invoking th
Hi,
AFAIK, I don’t think there has been other discussions on this other than the
original document on secured data access for Flink [1].
Unfortunately, I’m also not knowledgeable enough to comment on how feasible it
would be to support MD5-Digest for authentication.
Maybe Eron (cc’ed) can chime
Hi Vijay,
I’m pretty sure that this should work with the properties that you provided,
unless the AWS Kinesis SDK isn’t working as expected.
What I’ve tested is that with those properties, the ClientConfiguration used to
build the Kinesis client has the proxy domain / host / ports etc. properly
Hi Steve,
I’m not sure what you mean by “replacing addSource with CSV String data”. Are
your Kinesis records CSV and you want to parse them into Events?
If so, you should be able to do that in the provided DeserializationSchema.
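As a rough sketch, assuming your records are simple comma-separated lines (the Event type and field order below are illustrative):

import java.io.IOException;
import java.nio.charset.StandardCharsets;

import org.apache.flink.api.common.serialization.DeserializationSchema;
import org.apache.flink.api.common.typeinfo.TypeInformation;

// Illustrative event type; adjust the fields to your record layout.
class Event {
    public String deviceId;
    public double temperature;
    public Event() {}
    public Event(String deviceId, double temperature) {
        this.deviceId = deviceId;
        this.temperature = temperature;
    }
}

// Sketch: parse each Kinesis record (a CSV line) into an Event.
public class CsvEventDeserializationSchema implements DeserializationSchema<Event> {

    @Override
    public Event deserialize(byte[] message) throws IOException {
        String line = new String(message, StandardCharsets.UTF_8);
        String[] parts = line.split(",");
        return new Event(parts[0], Double.parseDouble(parts[1]));
    }

    @Override
    public boolean isEndOfStream(Event nextElement) {
        return false;
    }

    @Override
    public TypeInformation<Event> getProducedType() {
        return TypeInformation.of(Event.class);
    }
}

// usage (illustrative):
// new FlinkKinesisConsumer<>("myStream", new CsvEventDeserializationSchema(), props)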
Cheers,
Gordon
On 9 November 2018 at 10:54:22 PM, Steve Bistline