Hi
On 2019/8/28 13:03, Fabian Hueske wrote:
The Docker images for Flink 1.9.0 with Scala 2.12 are available now :-)
Nice work.
How can I deploy a cluster with one master and multiple workers using the
Docker image?
Regards.
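For what it's worth, a minimal docker-compose sketch for one master and N
workers. This assumes the official flink:1.9.0-scala_2.12 image tag and the
JOB_MANAGER_RPC_ADDRESS convention of the docker-library entrypoint; treat it
as a starting point rather than the official recipe:

    version: "2.1"
    services:
      jobmanager:
        image: flink:1.9.0-scala_2.12
        command: jobmanager
        ports:
          - "8081:8081"          # web UI
        environment:
          - JOB_MANAGER_RPC_ADDRESS=jobmanager
      taskmanager:
        image: flink:1.9.0-scala_2.12
        command: taskmanager
        depends_on:
          - jobmanager
        environment:
          - JOB_MANAGER_RPC_ADDRESS=jobmanager

One master and three workers would then be: docker-compose up -d --scale taskmanager=3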
Hi everyone,
The Docker images for Flink 1.9.0 with Scala 2.12 are available now :-)
Cheers, Fabian
Oytun Tez wrote on Tue., Aug. 27, 2019, 21:18:
> Thank you, Fabian! We are migrating soon once 2.12 is available.
>
> Cheers,
>
> ---
> Oytun Tez
>
> *M O T A W O R D*
> The World's Fastest Human Translation Platform.
Hi again Fabian,
Thanks for pointing this out to me. In my case there is no need for keyed
writing, but I do wonder whether having each Kafka task write only to a single
partition would significantly affect performance.
Actually now that I think about it, the approach to just wait for the first
recor
Thank you, Fabian! We are migrating soon once 2.12 is available.
Cheers,
---
Oytun Tez
*M O T A W O R D*
The World's Fastest Human Translation Platform.
oy...@motaword.com — www.motaword.com
On Tue, Aug 27, 2019 at 8:11 AM Fabian Hueske wrote:
> Hi all,
>
> Flink 1.9 Docker images are available at Docker Hub [1] now.
Hello everyone,
I have a Flink job that uses a Kinesis Data Stream as a sink.
producerConfig.put("CollectionMaxCount", "500"); // TODO: Move this into an external property
FlinkKinesisProducer<String> kinesis =
        new FlinkKinesisProducer<>(new SimpleStringSchema(), producerConfig);
kinesis.setF
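For context, a fuller version of that snippet might look like the sketch
below. This is not the poster's actual code: the region, stream name, and the
setFailOnError/setDefaultStream/setDefaultPartition settings are assumptions.

    import java.util.Properties;
    import org.apache.flink.api.common.serialization.SimpleStringSchema;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.connectors.kinesis.FlinkKinesisProducer;

    public class KinesisSinkSketch {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

            Properties producerConfig = new Properties();
            producerConfig.put("aws.region", "us-east-1");   // assumed region
            producerConfig.put("CollectionMaxCount", "500"); // KPL batching knob from the original mail

            FlinkKinesisProducer<String> kinesis =
                    new FlinkKinesisProducer<>(new SimpleStringSchema(), producerConfig);
            kinesis.setFailOnError(true);                  // fail the job on producer errors
            kinesis.setDefaultStream("my-output-stream");  // hypothetical stream name
            kinesis.setDefaultPartition("0");

            env.fromElements("a", "b", "c").addSink(kinesis);
            env.execute("kinesis-sink-sketch");
        }
    }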
That's a nice observation! :) I hadn't caught that. We need to check that
for sure.
Gyula
On Tue, Aug 27, 2019 at 5:00 PM Kostas Kloudas wrote:
> Hi Gyula,
>
> Thanks for looking into it.
> I did not dig into it but in the trace you posted there is the line:
>
> Failed to TRUNCATE_FILE ... for
Hi Padarn,
Yes, this is quite tricky.
The "problem" with watermarks is that you need to consider how you write to
Kafka.
If your Kafka sink writes to a keyed Kafka stream (each Kafka partition is
written by multiple producers), you need to broadcast the watermarks to
each partition, i.e., each parti
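For the non-keyed case, the connector's FlinkFixedPartitioner pins each sink
subtask to a single Kafka partition, so no partition is written by multiple
producers. A minimal sketch, assuming a String stream and a hypothetical
topic and broker address:

    import java.util.Optional;
    import java.util.Properties;
    import org.apache.flink.api.common.serialization.SimpleStringSchema;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer;
    import org.apache.flink.streaming.connectors.kafka.partitioner.FlinkFixedPartitioner;

    public class FixedPartitionSinkSketch {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

            Properties props = new Properties();
            props.setProperty("bootstrap.servers", "localhost:9092"); // assumed broker

            // Each parallel sink subtask writes to exactly one Kafka partition.
            FlinkKafkaProducer<String> producer = new FlinkKafkaProducer<>(
                    "output-topic",                              // hypothetical topic
                    new SimpleStringSchema(),
                    props,
                    Optional.of(new FlinkFixedPartitioner<>()));

            env.fromElements("a", "b", "c").addSink(producer);
            env.execute("fixed-partition-sink-sketch");
        }
    }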
Hi Gyula,
Thanks for looking into it.
I did not dig into it but in the trace you posted there is the line:
Failed to TRUNCATE_FILE ... for
DFSClient_NONMAPREDUCE_-1189574442_56 on 172.31.114.177 because
DFSClient_NONMAPREDUCE_-1189574442_56 is already the current lease
holder.
The client
Hi all!
I am gonna try to resurrect this thread as I think I have hit the same
issue with the StreamingFileSink:
https://issues.apache.org/jira/browse/FLINK-13874
I don't have a good answer but it seems that we try to truncate before we
get the lease (even though there is logic both in BucketingSink
Hi Theo,
Re your first approach:
TUMBLE_START is treated as a special function. It can only be used in the
SELECT clause if there is a TUMBLE function call in the GROUP BY clause.
If you use FLOOR(s1.ts TO DAY) = FLOOR(s2.ts TO DAY) it should work.
You can also drop one of the BETWEEN predicates be
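To make the suggestion concrete, here is a minimal sketch of the rewritten
join predicate; the table and column names (s1, s2, id, ts) are assumptions:

    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.table.api.Table;
    import org.apache.flink.table.api.java.StreamTableEnvironment;

    public class FloorJoinSketch {
        public static void main(String[] args) {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);

            // s1 and s2 are assumed to be registered tables with a TIMESTAMP column ts.
            Table result = tableEnv.sqlQuery(
                    "SELECT * FROM s1 JOIN s2 " +
                    "ON s1.id = s2.id " +
                    "AND FLOOR(s1.ts TO DAY) = FLOOR(s2.ts TO DAY)");
        }
    }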
Hi Theo,
The work on custom triggers has been put on hold due to some major
refactorings (splitting the modules, porting Scala code to Java, new type
system, new catalog interfaces, integration of the Blink planner).
It's also not on the near-term roadmap AFAIK.
To be honest, I'm not sure how much
Hi,
We have a bunch of Flink SQL queries running in our Flink environment. For
regular Table API interactions, we can override `uid`, which also gives us an
indicative name for the thread/UI to look at. For Flink SQL queries, this
doesn't seem to be the case, which results in very verbose names (ess
Hi all,
Flink 1.9 Docker images are available at Docker Hub [1] now.
Due to a configuration issue, there are only Scala 2.11 images at the
moment, but this has been fixed [2].
Flink 1.9 Scala 2.12 images should be available soon.
Cheers,
Fabian
[1] https://hub.docker.com/_/flink
[2]
https://github.
Hi Yun,
Thanks to your input, I’ve found out that states in WindowOperator use the
window serializer as the namespace serializer, so the states in
TriggerContext cannot be deserialized by the State Processor API at the
moment, as it's using VoidNamespaceSerializer. Indeed it does ring a bell
Hi Paul,
Would you please share more information about the exception stack trace and
the state descriptor of the map state used with that window operator?
For all user-facing keyed state, the namespace serializer would always be
VoidNamespaceSerializer, and only window state could have different namespace
serializers.
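For readers following along, user-facing keyed state (VoidNamespace) can be
read with the State Processor API along these lines. This is a generic
sketch with a hypothetical savepoint path, operator uid, and state
descriptor, not Paul's actual setup:

    import org.apache.flink.api.common.state.MapState;
    import org.apache.flink.api.common.state.MapStateDescriptor;
    import org.apache.flink.api.common.typeinfo.Types;
    import org.apache.flink.api.java.ExecutionEnvironment;
    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.runtime.state.memory.MemoryStateBackend;
    import org.apache.flink.state.api.ExistingSavepoint;
    import org.apache.flink.state.api.Savepoint;
    import org.apache.flink.state.api.functions.KeyedStateReaderFunction;
    import org.apache.flink.util.Collector;

    public class ReadMapStateSketch {
        public static void main(String[] args) throws Exception {
            ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
            ExistingSavepoint savepoint = Savepoint.load(
                    env, "file:///tmp/savepoint", new MemoryStateBackend()); // assumed path and backend

            savepoint.readKeyedState("my-operator-uid", new Reader()).print(); // hypothetical uid
        }

        static class Reader extends KeyedStateReaderFunction<String, String> {
            private transient MapState<String, Long> counts;

            @Override
            public void open(Configuration parameters) {
                // hypothetical descriptor; it must match the one used by the job
                counts = getRuntimeContext().getMapState(
                        new MapStateDescriptor<>("counts", Types.STRING, Types.LONG));
            }

            @Override
            public void readKey(String key, Context ctx, Collector<String> out) throws Exception {
                for (java.util.Map.Entry<String, Long> e : counts.entries()) {
                    out.collect(key + "," + e.getKey() + "," + e.getValue());
                }
            }
        }
    }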
Hi,
I was using the new State Processor API to read a savepoint produced by Flink
1.5.3, and got a StateMigrationException with the message “For heap backends,
the new namespace serializer must be compatible”.
Concretely, the state I was trying to read is a MapState within a
WindowOperator(Trigge
Hi everyone,
I'm testing release 1.9 on a secure MapR cluster. I took the Flink binaries
from the download page and am trying to start a YARN session cluster. All
MapR-specific libraries and configs have been added according to the
documentation.
When I start yarn-session without high availability, it uses ZooKeeper from
Hi team,
Could anyone help or offer a suggestion? We have stopped all input in Kafka,
so there is no processing and no sink activity, but checkpointing is still
failing.
Once a checkpoint fails, does it keep failing forever until the job is
restarted?
Help appreciated.
Thanks & Regards,
Sushant Sawant
On 23 Aug 2019 12:56 p.m., "S
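One knob that may be relevant here, hedged since the root cause isn't known
from the thread: by default a checkpointing error fails the task, and Flink
can instead be told to tolerate failed checkpoints and keep running. A
minimal sketch:

    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class CheckpointConfigSketch {
        public static void main(String[] args) {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            env.enableCheckpointing(60_000); // checkpoint every 60 seconds
            // Keep the job running even if individual checkpoints fail,
            // instead of failing the task on a checkpointing error.
            env.getCheckpointConfig().setFailOnCheckpointingErrors(false);
        }
    }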