Hi-
Has anyone looked into embedding Apache Siddhi into Flink?
Thanks,
Aparup
Hi Jack,
As Robert Metzger mentioned in a previous thread, there's an ongoing
discussion about the issue in this JIRA:
https://issues.apache.org/jira/browse/FLINK-3679.
A possible workaround is to use a SimpleStringSchema in the Kafka source,
and chain it with a flatMap operator where you can use
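The flatMap workaround described above can be sketched in plain Java, with no Flink dependency: the key idea is a deserializer that returns an empty result on malformed input instead of throwing, so bad records are silently dropped. The `Event` class and the comma-separated wire format here are toy stand-ins for the real case class and JSON deserializer, used purely for illustration.

```java
import java.util.List;
import java.util.Optional;
import java.util.stream.Collectors;

public class SafeParseSketch {
    // Toy stand-in for the real Event case class (hypothetical fields).
    record Event(String id, long timestamp) {}

    // Stand-in for a real JSON deserializer (e.g. Jackson): returns empty on
    // malformed input instead of throwing, so bad records can be skipped.
    static Optional<Event> safeParse(String raw) {
        try {
            // Hypothetical "id,timestamp" wire format, for illustration only.
            String[] parts = raw.split(",");
            if (parts.length != 2) throw new IllegalArgumentException("malformed: " + raw);
            return Optional.of(new Event(parts[0], Long.parseLong(parts[1])));
        } catch (RuntimeException e) {
            return Optional.empty(); // swallow and skip, as the flatMap workaround would
        }
    }

    public static void main(String[] args) {
        List<String> raw = List.of("a,1", "garbage", "b,2");
        // Mirrors DataStream.flatMap: emit zero events for bad records, one for good ones.
        List<Event> events = raw.stream()
                .flatMap(s -> safeParse(s).stream())
                .collect(Collectors.toList());
        System.out.println(events.size()); // prints 2
    }
}
```

In the real pipeline, the same `safeParse` logic would live inside a `FlatMapFunction<String, Event>` chained after a Kafka source using `SimpleStringSchema`, so a malformed message simply emits nothing.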
Hi all,
I have a custom deserializer which I pass to a Kafka source to transform a
JSON string into a Scala case class.
val events = env.addSource(new FlinkKafkaConsumer09[Event]("events",
new JsonSerde(classOf[Event], new Event), kafkaProp))
There are times when the JSON message is malformed, in wh
Ahh, sorry I misunderstood.
Your comment provided insight for me though.
To anyone else who is having issues, maybe the following will help them. I
was trying to deploy Flink on an IBM BigInsights Cloud cluster (disclaimer,
I work for IBM, not trying to promote a company, but they do give me fre
Hi Trevor,
I am seeing a similar issue for a JIRA that I am working on now. I have yet to trace
the YARN web UI code to find out how the "tracking URL" is being handled. To
unblock, you could use the tracking URL (the Flink UI URL) directly to access the Flink
web UI and bypass the YARN UI redirection. You can find
I decided it made the most sense to open up a new thread.
I am running Flink on a cluster behind a firewall. Things seem to be
working fine, but when I access the YARN web UI and click on the Flink
application UI, I get the JobManager UI, but it is broken.
It is a broken link to a Flink image an
Hi Josh,
Thanks for the description. From your description and a check into the code,
I’m suspecting what could be happening is that before the consumer caught up to
the head of the stream, Kinesis was somehow returning the same shard iterator
on consecutive fetch calls, and the consumer kept o
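The expected consumer behavior described here (request a fresh shard iterator when one expires, rather than reusing a stale one on consecutive fetches) can be sketched as follows. The `Shard` interface and both its methods are hypothetical stand-ins for the real AWS SDK calls (`getShardIterator` / `getRecords`), used only to illustrate the refresh-on-expiry pattern.

```java
public class IteratorRefreshSketch {
    // Stand-in for com.amazonaws...ExpiredIteratorException.
    static class ExpiredIteratorException extends RuntimeException {}

    // Hypothetical stand-in for the Kinesis client.
    interface Shard {
        String freshIterator();          // like AmazonKinesis#getShardIterator
        String fetch(String iterator);   // like AmazonKinesis#getRecords; throws when expired
    }

    // The pattern the consumer should follow: on expiry, request a fresh
    // iterator once and retry the fetch, instead of reusing the old iterator.
    static String fetchWithRefresh(Shard shard, String iterator) {
        try {
            return shard.fetch(iterator);
        } catch (ExpiredIteratorException e) {
            return shard.fetch(shard.freshIterator());
        }
    }

    public static void main(String[] args) {
        Shard shard = new Shard() {
            public String freshIterator() { return "fresh"; }
            public String fetch(String it) {
                if (!"fresh".equals(it)) throw new ExpiredIteratorException();
                return "records";
            }
        };
        // A stale iterator triggers one refresh, then the fetch succeeds.
        System.out.println(fetchWithRefresh(shard, "stale")); // prints records
    }
}
```

The suspected bug in the thread is the opposite of this sketch: the same (eventually expired) iterator being handed back on consecutive fetch calls.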
Stephan,
Will the jobmanager-UI exist? E.g. if I am running Flink on YARN will I be
able to submit apps/see logs and DAGs through the web interface?
thanks,
tg
Trevor Grant
Data Scientist
https://github.com/rawkintrevo
http://stackexchange.com/users/3002022/rawkintrevo
http://trevorgrant.org
You too!
On Fri, Aug 26, 2016 at 4:15 PM, Niels Basjes wrote:
> Thanks!
> I'm going to work with this next week.
>
> Have a nice weekend.
>
> Niels
>
> On Fri, Aug 26, 2016 at 2:49 PM, Maximilian Michels wrote:
>>
>> It is a bit more involved than I thought. We could simplify the API further:
>>
>>
Hi Gordon,
My job only went down for around 2-3 hours, and I'm using the default
Kinesis retention of 24 hours. When I restored the job, it got this
exception after around 15 minutes (and then restarted again, and got the
same exception 15 minutes later etc) - but actually I found that after this
Thanks!
I'm going to work with this next week.
Have a nice weekend.
Niels
On Fri, Aug 26, 2016 at 2:49 PM, Maximilian Michels wrote:
> It is a bit more involved than I thought. We could simplify the API further:
>
> import org.apache.flink.client.program.PackagedProgram;
> import org.apache.flink.
That is a regression from upgrading Zeppelin to Spark 2.0/Scala 2.11. As it
broke existing functionality, hopefully whoever did the upgrade will fix it...
Please report it to Zeppelin. Thanks, and good find!
On Aug 26, 2016 8:39 AM, "Frank Dekervel" wrote:
> Hello,
>
> I added this to my Dockerfile to en
Hello,
I added this to my Dockerfile to end up with a working setup:
RUN cp /opt/zeppelin/interpreter/ignite/scala*jar /opt/zeppelin/interpreter/flink/
which would copy:
scala-compiler-2.11.7.jar
scala-library-2.11.7.jar
scala-parser-combinators_2.11-1.0.4.jar
scala-reflect-2.11.7.jar
scala-xml
Hi Josh,
Thank you for reporting this, I’m looking into it. There were some major changes
to the Kinesis connector after mid June, but the changes don’t seem to be
related to the iterator timeout, so it may be a bug that had always been there.
I’m not sure yet if it may be related, but may I ask
It is a bit more involved than I thought. We could simplify the API further:
import org.apache.flink.client.program.PackagedProgram;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.configuration.GlobalConfiguration;
import org.apache.hadoop.fs.Path;
import java.io.File;
i
Hi all,
I guess this is probably a question for Gordon - I've been using the
Flink-Kinesis connector for a while now and seen this exception a couple of
times:
com.amazonaws.services.kinesis.model.ExpiredIteratorException:
Iterator expired. The iterator was created at time Fri Aug 26 10:47:47
UTC