Best regards
Konstantin
--
Konstantin Gregor * konstantin.gre...@tngtech.com
TNG Technology Consulting GmbH, Betastr. 13a, 85774 Unterföhring
Managing Directors: Henrik Klagges, Dr. Robert Dahlke, Gerhard Müller
Munich Local Court, HRB 135082
I need to interface with S3 for data.
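
A minimal sketch of what such an S3 read can look like in a Flink job, assuming the s3a filesystem is already wired up through the cluster's Hadoop configuration; the bucket, path and job name below are placeholders, not anything from this thread:

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class S3ReadSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // Flink hands the s3a:// scheme to the Hadoop filesystem, so the
        // connector and credentials come from the Hadoop/HDP configuration.
        DataStream<String> lines =
                env.readTextFile("s3a://example-bucket/input/data.txt");

        lines.print();
        env.execute("Read from S3");
    }
}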
--
Konstantin Gregor * konstantin.gre...@tngtech.com
TNG Technology Consulting GmbH, Betastr. 13a, 85774 Unterföhring
> However, the Flink Web Dashboard reports 0 for sent/received bytes and
> records (see attached file for a snapshot). Any thoughts?
>
> *Hadoop platform*: Hortonworks Data Platform 2.5
> *Flink*: flink-1.2.0-bin-hadoop27-scala_2.10
>
> Thanks,
> Mohammad
--
Konstantin Gregor * konstantin.gre...@tngtech.com
> ... as a workaround. Sorry for all the trouble with this.
>
> In version >1.2 we don't need the user code any more to dispose savepoints.
>
> – Ufuk
>
>
> On Wed, Mar 29, 2017 at 11:50 AM, Konstantin Gregor
> wrote:
>> Hi Ufuk, hi Stefan,
>>
Hi Ufuk, hi Stefan,
thanks a lot for your replies.
Ufuk, we are using the HDFS state backend.
Stefan, I installed 1.1.5 on our machines and built our software with
the Flink 1.1.5 dependency, but the problem remains. Below are the logs
for savepoint creation [1] and savepoint disposal [2] as we
org.apache.flink.runtime.checkpoint.savepoint.FsSavepointStore.disposeSavepoint(FsSavepointStore.java:151)
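
For readers skimming the thread, a minimal sketch of how the HDFS-backed filesystem state backend mentioned above is typically configured on the job side (Flink 1.1/1.2-era API; the checkpoint directory is a placeholder, not the path from the logs):

import org.apache.flink.runtime.state.filesystem.FsStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class HdfsStateBackendSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // Take a checkpoint every 60 seconds.
        env.enableCheckpointing(60_000);

        // Filesystem state backend: checkpoint data is written to HDFS.
        env.setStateBackend(new FsStateBackend("hdfs:///flink/checkpoints"));

        // ... operators and env.execute(...) would follow here.
    }
}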
--
Konstantin Gregor * konstantin.gre...@tngtech.com
TNG Technology Consulting GmbH, Betastr. 13a, 85774 Unterföhring
Managing Directors: Henrik Klagges, Christoph Stock, Dr. Robert Dahlke
Registered office: Unterföhring * Munich Local Court * HRB 135082
Hi Ufuk,
thanks for this information, this is good news!
Updating Flink to 1.1 is not really in our hands, but will hopefully
happen soon :-)
Thank you and best regards
Konstantin
On 26.10.2016 16:07, Ufuk Celebi wrote:
> On Wed, Oct 26, 2016 at 3:06 PM, Konstantin Gregor
> wrote:
>
> ... this already with Flink? If so, are there any examples of how to do
> this replay & switchover (rebuild state by consuming from ...)?
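
The quoted question is cut off at this point. Assuming the source being replayed is Kafka, which the fragment does not actually say, the usual pattern is to start the rebuilt job from the earliest retained offsets; a rough sketch with made-up topic, servers and group id:

import java.util.Properties;

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer09;
import org.apache.flink.streaming.util.serialization.SimpleStringSchema;

public class ReplaySketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "kafka:9092");
        props.setProperty("group.id", "replay-rebuild");
        // With a fresh group id (no committed offsets), consumption starts at
        // the earliest retained offset, so the new job re-reads history to
        // rebuild its state before traffic is switched over to it.
        props.setProperty("auto.offset.reset", "earliest");

        DataStream<String> events = env.addSource(
                new FlinkKafkaConsumer09<>("events", new SimpleStringSchema(), props));

        events.print();
        env.execute("Rebuild state by replaying the source");
    }
}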
failed, since those old jobs
needed classes that didn't even exist anymore.
Cleaning up the state backend and the ZooKeeper links to the job graphs
in the state backend did the trick and everything works as expected now.
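
For orientation, a sketch of the high-availability settings that decide where the ZooKeeper references and the serialized job graphs live; the key names are the Flink 1.2+ ones (the 1.1 release used the older recovery.* keys) and the values are placeholders, not the poster's setup:

import org.apache.flink.configuration.Configuration;

public class HaConfigSketch {
    public static void main(String[] args) {
        // Programmatic equivalent of the relevant flink-conf.yaml entries.
        Configuration conf = new Configuration();
        conf.setString("high-availability", "zookeeper");
        conf.setString("high-availability.zookeeper.quorum", "zk1:2181,zk2:2181,zk3:2181");
        // ZooKeeper root node holding references (pointers) to submitted job graphs.
        conf.setString("high-availability.zookeeper.path.root", "/flink");
        // Directory where the serialized job graphs themselves are stored.
        conf.setString("high-availability.storageDir", "hdfs:///flink/ha");
        System.out.println(conf);
    }
}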
Thanks again for your input and best regards
Konstantin Gregor
HDFS, maybe this is also important to know.
Thank you and best regards
Konstantin
--
Konstantin Gregor * konstantin.gre...@tngtech.com
TNG Technology Consulting GmbH, Betastr. 13a, 85774 Unterföhring
Managing Directors: Henrik Klagges, Christoph Stock, Dr. Robert Dahlke
Registered office: Unterföhring * Munich Local Court * HRB 135082