Hi,
It seems like you’ve misunderstood how to use the FlinkKafkaProducer, or is
there any specific reason why you want to emit elements to Kafka in a map
function?
The correct way to use it is to add it as a sink function to your pipeline, i.e.
DataStream<String> someStream = …
someStream.addSink(new FlinkKafkaProducer010<>("topic", new SimpleStringSchema(), properties));
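For reference, here is a minimal self-contained sketch of the sink-based approach
(the topic name, broker address, and use of SimpleStringSchema are illustrative
placeholders, not taken from the original thread):

    import java.util.Properties;

    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010;
    import org.apache.flink.streaming.util.serialization.SimpleStringSchema;

    public class KafkaSinkExample {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

            Properties properties = new Properties();
            properties.setProperty("bootstrap.servers", "localhost:9092"); // placeholder broker

            DataStream<String> someStream = env.fromElements("a", "b", "c");

            // Attached as a sink, the producer's lifecycle (open/close/configuration)
            // is managed by Flink; constructing it inside a map function skips that.
            someStream.addSink(
                new FlinkKafkaProducer010<>("my-topic", new SimpleStringSchema(), properties));

            env.execute("Kafka sink example");
        }
    }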
Hi
Thanks for your suggestion. I'll try this one. :)
-Udhay.
I'm getting a NullPointerException when calling
KafkaProducer010.processElement(StreamRecord). Specifically, it comes from
the helper function invokeInternally(): the function's internalProducer is
not configured properly, resulting in a null value being passed to one of
its helper functions.
We'
Is there a way to accomplish this for the batch operations?
On Thu, Jul 13, 2017 at 4:59 AM, Timo Walther wrote:
> Hi Mohit,
>
> do you plan to implement a batch or streaming job? If it is a streaming
> job: You can use a connected stream (see [1], Slide 34). The static data is
> one side of the
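As a minimal sketch of the connected-stream approach Timo describes (the stream
contents and the lookup logic are made up for illustration):

    import java.util.HashSet;
    import java.util.Set;

    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.api.functions.co.CoFlatMapFunction;
    import org.apache.flink.util.Collector;

    public class ConnectedStreamSketch {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

            DataStream<String> events = env.fromElements("a", "b", "c");   // main stream
            DataStream<String> staticData = env.fromElements("a", "c");    // "static" side

            // Broadcast the static side so every subtask sees the full data set.
            events.connect(staticData.broadcast())
                .flatMap(new CoFlatMapFunction<String, String, String>() {
                    // Per-subtask view of the static data.
                    private final Set<String> lookup = new HashSet<>();

                    @Override
                    public void flatMap1(String event, Collector<String> out) {
                        if (lookup.contains(event)) {
                            out.collect(event); // emit events that match the static data
                        }
                    }

                    @Override
                    public void flatMap2(String staticElement, Collector<String> out) {
                        lookup.add(staticElement); // build up the static side
                    }
                })
                .print();

            env.execute("Connected stream sketch");
        }
    }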
Hi all, I am currently running a Flink session on YARN and am trying to access
the counters to get information about the CPU and memory used by my job.
However, a Flink job does not look like other MapReduce jobs shown on YARN, so
I cannot find the counter info for it. Is there any way I can find those
It will work if you assign a new uid to the Kafka source.
Gyula
On Fri, Jul 14, 2017, 18:42 Tzu-Li (Gordon) Tai wrote:
> One thing: do note that `FlinkKafkaConsumer#setStartFromLatest()` does not
> have any effect when starting from savepoints.
> i.e., the consumer will still start from whatever offset is written in the savepoint.
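A sketch of what Gyula suggests (the uid string and topic are placeholders):
giving the source a new uid means the savepoint state no longer matches it, so
the consumer starts fresh and its startup options take effect:

    Properties props = new Properties();
    props.setProperty("bootstrap.servers", "localhost:9092"); // placeholder

    env.addSource(new FlinkKafkaConsumer010<>("my-topic", new SimpleStringSchema(), props))
       .uid("kafka-source-v2"); // new uid: state saved under the old uid is not restored

Note that restoring a savepoint that still contains state for the old uid may
require allowing non-restored state when submitting the job.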
One thing: do note that `FlinkKafkaConsumer#setStartFromLatest()` does not have
any effect when starting from savepoints.
i.e., the consumer will still start from whatever offset is written in the
savepoint.
On 15 July 2017 at 12:38:10 AM, Tzu-Li (Gordon) Tai (tzuli...@apache.org) wrote:
Can you try starting from the savepoint, but telling Kafka to start from the
latest offset?
Can you try starting from the savepoint, but telling Kafka to start from the
latest offset?
(@gordon: Is that possible in Flink 1.3.1 or only in 1.4-SNAPSHOT ?)
This is already possible in Flink 1.3.x.
`FlinkKafkaConsumer#setStartFromLatest()` would be it.
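For completeness, a minimal sketch of that setting (topic, schema, and
properties are placeholders):

    FlinkKafkaConsumer010<String> consumer =
        new FlinkKafkaConsumer010<>("my-topic", new SimpleStringSchema(), props);
    // Start from the latest offset when there is no restored state.
    consumer.setStartFromLatest();

    DataStream<String> stream = env.addSource(consumer);

As Gordon notes above, this setting is ignored when the job is restored from a
savepoint.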
On 15 July 2017 at 12:33:53 AM, Stephan Ewen wrote:
Can you try starting from the savepoint, but telling Kafka to start from
the latest offset?
(@gordon: Is that possible in Flink 1.3.1 or only in 1.4-SNAPSHOT ?)
On Fri, Jul 14, 2017 at 11:18 AM, Kien Truong
wrote:
> Hi,
>
> Sorry for the version typo, I'm running 1.3.1. I did not test with 1.2.
Hi!
I am looping in Stefan and Xiaogang, who worked a lot on incremental
checkpointing.
Some background on incremental checkpoints: incremental checkpoints store
"pieces" of the state (RocksDB SSTables) that are shared between
checkpoints. Hence they naturally use more files than non-incremental
checkpointing.
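For context, incremental checkpointing is opted into on the RocksDB state
backend; a minimal sketch (the checkpoint URI is a placeholder):

    import org.apache.flink.contrib.streaming.state.RocksDBStateBackend;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class IncrementalCheckpointSketch {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            // The second constructor argument enables incremental checkpoints (Flink 1.3+).
            env.setStateBackend(new RocksDBStateBackend("hdfs:///flink/checkpoints", true));
            env.enableCheckpointing(60_000); // checkpoint every 60 seconds
        }
    }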
Hi Aljoscha
Thanks for the comment.
Is wrapping with PurgingTrigger.of() the same as returning
TriggerResult.FIRE_AND_PURGE inside a custom trigger?
I gave it a test and the result seems to be the opposite of what I meant:
instead of throwing away previous windows' contents, I want to keep them
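If the intent is to keep the window contents across firings, the trigger
should return FIRE rather than FIRE_AND_PURGE (which is effectively what
wrapping in PurgingTrigger adds). A minimal sketch of such a custom trigger;
firing on every element is just a placeholder condition:

    import org.apache.flink.streaming.api.windowing.triggers.Trigger;
    import org.apache.flink.streaming.api.windowing.triggers.TriggerResult;
    import org.apache.flink.streaming.api.windowing.windows.TimeWindow;

    public class KeepContentsTrigger extends Trigger<Object, TimeWindow> {

        @Override
        public TriggerResult onElement(Object element, long timestamp,
                                       TimeWindow window, TriggerContext ctx) {
            // FIRE emits the window contents but keeps them for later firings;
            // FIRE_AND_PURGE would clear them after emitting.
            return TriggerResult.FIRE;
        }

        @Override
        public TriggerResult onProcessingTime(long time, TimeWindow window, TriggerContext ctx) {
            return TriggerResult.CONTINUE;
        }

        @Override
        public TriggerResult onEventTime(long time, TimeWindow window, TriggerContext ctx) {
            return TriggerResult.CONTINUE;
        }

        @Override
        public void clear(TimeWindow window, TriggerContext ctx) {
            // no timers or custom state to clean up in this sketch
        }
    }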
Hi
I have several implementations of my Model trait,
trait Model {
  def score(input: AnyVal): AnyVal
  def cleanup(): Unit
  def toBytes(): Array[Byte]
  def getType: Long
}
none of them is serializable, but they are used in the state definition.
So I implemented a custom serializer
imp
Hi,
I am trying to upgrade my project from Flink 1.2 to 1.3 and am running into
problems when trying to run the Flink server from my IntelliJ project. The code:
// Execute on the local Flink server - to test queryable state
def executeServer(): Unit = {
  // We use a mini cluster here for the sake of simplicity
There’s a bit of a misconception here: in Flink there is no “driver” as there
is in Spark, and the entry point of your program (“main()”) is not executed on
the cluster but in the “client”. The main method is only responsible for
constructing a program graph, which is then shipped to the cluster and
executed there.
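To illustrate (a trivial sketch, not from the original thread): everything in
main() below runs in the client and only declares the dataflow; nothing
executes on the cluster until execute() submits the graph:

    import org.apache.flink.api.common.functions.MapFunction;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class ClientSideMain {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

            // These calls only build the program graph on the client side.
            env.fromElements(1, 2, 3)
               .map(new MapFunction<Integer, Integer>() {
                   @Override
                   public Integer map(Integer x) {
                       return x * 2;
                   }
               })
               .print();

            // Only now is the graph shipped to the cluster and executed.
            env.execute("Client-side main example");
        }
    }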
Hi,
Sorry for the version typo, I'm running 1.3.1. I did not test with 1.2.x.
The job runs fine with almost no back-pressure if it's started from scratch or
if I reuse the Kafka consumer group id without specifying the savepoint.
Best regards,
Kien
On Jul 14, 2017, at 15:59, Stephan Ewen wrote:
This kind of error almost always hints at a dependency clash, i.e. there is
some version of this code on the class path that clashes with the version the
Flink program uses. That’s why it works in local mode, where there are
probably not many other dependencies, but not in cluster mode.
How
Hi!
Flink 1.3.2 does not yet exist. Do you mean 1.3.1 or latest master?
Can you tell us whether this occurs only in 1.3.x and worked well in 1.2.x?
If you just keep the job running without savepoint/restore, you do not get
into backpressure situations?
Thanks,
Stephan
On Fri, Jul 14, 2017 at 1
Hi,
I saw this again yesterday; with some logging added, it looks like acquiring
the lock took all of the time. In this case it was pretty clear that the job
started falling behind a few minutes before the checkpoint started, so
backpressure seems to be the culprit.
Thanks,
Gyula
Stephan Ewen e