Thank you, Till. I was in a time crunch and rebuilt my cluster from the
ground up with Hadoop installed. All works fine now; `netstat -pn | grep
6123` shows Flink's PID. Hadoop may be irrelevant, and I can't rule out
PEBKAC yet :-). Sorry, when I have time I'll attempt to reproduce the
scenario, on the
Hi all,
I’ve got a very specialized DB (runs in the JVM) that I need to use to both
keep track of state and generate new records to be processed by my Flink
streaming workflow. Some of the workflow results are updates to be applied to
the DB.
And the DB needs to be partitioned.
My initial app
Hi All,
I have a use case where I need to create multiple source streams from
multiple files and monitor the files for any changes using
"FileProcessingMode.PROCESS_CONTINUOUSLY".
The intention is to achieve something like this (have a monitored stream
for each of the multiple files), something l
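A rough sketch of what this could look like (the file paths and the 10s
poll interval are placeholders, not from the original mail; note that in
PROCESS_CONTINUOUSLY mode a modified file is re-read in full):

import org.apache.flink.api.java.io.TextInputFormat;
import org.apache.flink.core.fs.Path;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.source.FileProcessingMode;

public class MultiFileMonitor {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // one monitored stream per file, each polled every 10 seconds
        DataStream<String> a = env.readFile(
                new TextInputFormat(new Path("file:///data/a.txt")),
                "file:///data/a.txt", FileProcessingMode.PROCESS_CONTINUOUSLY, 10000L);
        DataStream<String> b = env.readFile(
                new TextInputFormat(new Path("file:///data/b.txt")),
                "file:///data/b.txt", FileProcessingMode.PROCESS_CONTINUOUSLY, 10000L);

        a.print();
        b.print();
        env.execute("multi-file monitor");
    }
}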
Hi Shannon!
The stack trace you pasted is independent of checkpointing - it seems to
come from regular processing. Does this only happen when checkpoints are
activated?
Can you also share which checkpoint method you use?
- FullyAsynchronous
- SemiAsynchronous
I think there are two possibil
It appears that when one of my jobs tries to checkpoint, the following
exception is triggered. I am using Flink 1.1.1 in Scala 2.11. RocksDB
checkpoints are being saved to S3.
java.lang.RuntimeException: Error while adding data to RocksDB
    at org.apache.flink.contrib.streaming.state.Rock
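For context, wiring up the RocksDB backend with an S3 checkpoint path
typically looks something like the sketch below (Flink 1.1-era API; the
bucket path and interval are placeholders, not Shannon's actual setup):

import org.apache.flink.contrib.streaming.state.RocksDBStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class RocksDbS3Setup {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.enableCheckpointing(60000); // checkpoint every 60s
        RocksDBStateBackend backend = new RocksDBStateBackend("s3://my-bucket/flink-checkpoints");
        // backend.enableFullyAsyncSnapshots(); // the "fully asynchronous" mode asked about above
        env.setStateBackend(backend);
        // ... job definition, then env.execute() ...
    }
}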
I think I'll probably end up submitting the job through YARN in order to
have a more standard approach :)
Thanks,
Flavio
On Wed, Sep 28, 2016 at 5:19 PM, Maximilian Michels wrote:
> I meant that you simply keep the sampling jar on the machine where you
> want to sample. However, you mentioned
Great to hear!
On Wed, Sep 28, 2016 at 5:18 PM, Simone Robutti
wrote:
> Solved. Probably there was an error in the way I was testing. Also I
> simplified the job and it works now.
>
> 2016-09-27 16:01 GMT+02:00 Simone Robutti :
>>
>> Hello,
>>
>> I'm dealing with an analytical job in streaming an
I meant that you simply keep the sampling jar on the machine where you
want to sample. However, you mentioned that it is a requirement for it
to be on the cluster.
Cheers,
Max
On Tue, Sep 27, 2016 at 3:18 PM, Flavio Pompermaier
wrote:
> Hi max,
> that's exactly what I was looking for. What do yo
Solved. Probably there was an error in the way I was testing. Also I
simplified the job and it works now.
2016-09-27 16:01 GMT+02:00 Simone Robutti :
> Hello,
>
> I'm dealing with an analytical job in streaming and I don't know how to
> write the last part.
>
> Actually I want to count all the el
Hi!
This was a temporary regression in the snapshot that was fixed a few
days ago. It should be in the snapshot repositories by now.
Can you check whether the problem persists if you force an update of your
snapshot dependencies?
Greetings,
Stephan
On Tue, Sep 27, 2016 at 5:04 PM, Timo Walth
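(For reference, a forced snapshot refresh is usually just:

mvn clean install -U

where -U / --update-snapshots tells Maven to re-check the remote
repositories for updated snapshot artifacts.)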
Hi Stephan,
That would be great. Let me know once the fix is done and which
snapshot version to use; I'll check and revert then.
Can you also share the JIRA that tracks the issue?
With regard to the offset commit issue, I'm not sure how to proceed
here. Probably I'll use your fix
Hey! Any update on this?
On Mon, Sep 5, 2016 at 11:29 AM, Aljoscha Krettek wrote:
> Hi,
> which version of Flink are you using? Are the checkpoints being reported as
> successful in the Web Frontend, i.e. in the "checkpoints" tab of the running
> job?
>
> Cheers,
> Aljoscha
>
> On Fri, 2 Sep 2016
Hey Dayong,
can you check the logs of the Flink cluster on the virtual machine?
The client side (what you posted) looks ok.
– Ufuk
On Wed, Sep 14, 2016 at 3:52 PM, Dayong wrote:
> Hi folks,
> I need to run a java app to submit a job to remote flink cluster. I am
> testing with the code at
> htt
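As a point of reference, programmatic submission to a remote cluster can
be sketched like this (host, port, and jar path are placeholders; 6123 is
the default JobManager RPC port):

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class RemoteSubmitTest {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.createRemoteEnvironment(
                "flink-master", 6123, "/path/to/job.jar");
        env.fromElements(1, 2, 3).print();
        env.execute("remote smoke test");
    }
}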
Hey Anchit,
the usual recommendation for this is to use a CoMap/CoFlatMap
operator, where the second input carries the lookup location changes. You
can then use this input to update the location.
Search for CoMap/CoFlatMap here:
https://ci.apache.org/projects/flink/flink-docs-master/dev/datastream_ap
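To make that concrete, a hedged sketch (the types, sources, and update
logic are invented for illustration; the plain field is per-parallel-instance
and not checkpointed):

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.co.CoFlatMapFunction;
import org.apache.flink.util.Collector;

public class LookupUpdateSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        DataStream<String> records = env.socketTextStream("localhost", 9000);
        DataStream<String> locationUpdates = env.socketTextStream("localhost", 9001);

        records.connect(locationUpdates)
                .flatMap(new CoFlatMapFunction<String, String, String>() {
                    private String currentLocation = "initial"; // updated by input 2

                    @Override
                    public void flatMap1(String record, Collector<String> out) {
                        out.collect(record + " @ " + currentLocation); // normal processing
                    }

                    @Override
                    public void flatMap2(String newLocation, Collector<String> out) {
                        currentLocation = newLocation; // apply the lookup location change
                    }
                })
                .print();

        env.execute("lookup update sketch");
    }
}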
Hey Paulo! I think it's not possible out of the box at the moment, but
you can try the following as a workaround:
1) Create a custom OutputFormat that extends TextOutputFormat and
overrides the clean-up method. A minimal sketch (assuming the hook meant
here is tryCleanupOnError(), which by default deletes the partially
written file):

public class NoCleanupTextOutputFormat<T> extends TextOutputFormat<T> {
    public NoCleanupTextOutputFormat(Path outputPath) { super(outputPath); }

    @Override
    public void tryCleanupOnError() {
        // do nothing: keep the partially written output instead of deleting it
    }
}
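It can then be used in place of the stock TextOutputFormat, roughly like
this (assuming a DataSet<String> named data and a placeholder path):

data.output(new NoCleanupTextOutputFormat<String>(new Path("hdfs:///output/result")));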
Hi All,
*Brief:* I have a use case where I need to interact with a running Flink
application.
*Detail:*
My Flink application has a *Kafka source* and *an operator processing the
content received* from the Kafka stream (this operator uses a lookup
from an external source file to accomplish the