I'm using the Uber open-source project AthenaX. As mentioned in its docs [1], it
supports `Auto scaling for AthenaX jobs`. I went through the source code on
GitHub but didn't find the auto-scaling part. Could someone familiar with this
project please point me in the right direction?
I'm using Flink's
From my understanding, having a fake keyBy (stream.keyBy(r => "dummyString"))
means there would be only one slot handling the data.
Would a broadcast function [1] work for your case?
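For concreteness, here is a minimal sketch of the broadcast-state pattern from
[1], as I understand it. The stream names, types, and the single "rule" entry
are all made up for illustration; the point is that the broadcast side reaches
every parallel instance, so no fake keyBy is needed.

import org.apache.flink.api.common.state.MapStateDescriptor;
import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
import org.apache.flink.streaming.api.datastream.BroadcastStream;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.functions.co.BroadcastProcessFunction;
import org.apache.flink.util.Collector;

public class BroadcastExample {
    public static DataStream<String> apply(DataStream<String> events,
                                           DataStream<String> rules) {
        // Descriptor for the broadcast state shared by both sides.
        MapStateDescriptor<String, String> descriptor = new MapStateDescriptor<>(
                "rules", BasicTypeInfo.STRING_TYPE_INFO, BasicTypeInfo.STRING_TYPE_INFO);

        BroadcastStream<String> broadcastRules = rules.broadcast(descriptor);

        return events.connect(broadcastRules).process(
                new BroadcastProcessFunction<String, String, String>() {
                    @Override
                    public void processElement(String value, ReadOnlyContext ctx,
                                               Collector<String> out) throws Exception {
                        // Read-only access to the broadcast state on the data side.
                        String rule = ctx.getBroadcastState(descriptor).get("rule");
                        out.collect(value + " / " + rule);
                    }

                    @Override
                    public void processBroadcastElement(String rule, Context ctx,
                                                        Collector<String> out) throws Exception {
                        // Every parallel instance receives and stores the rule.
                        ctx.getBroadcastState(descriptor).put("rule", rule);
                    }
                });
    }
}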
Regards,
Averell
[1]
https://ci.apache.org/projects/flink/flink-docs-stable/dev/stream/state/broadcast_state.html
One way I can think of is:
1. Apply a fake keyBy to the stream so that all records produce the same key.
2. Use MapState in a KeyedProcessFunction on the result of the keyBy above (see the sketch below).
But is it a good solution? What are the implications for parallelism? Are there
better ways?
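A minimal sketch of steps 1-2 (Java DataStream API; the class, state, and key
names are made up). Note the parallelism implication: with a constant key,
every record hashes to the same key group, so a single parallel subtask
processes all the data regardless of how high the operator's parallelism is
set, which matches the point above about one slot handling everything.

import org.apache.flink.api.common.state.MapState;
import org.apache.flink.api.common.state.MapStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;

public class SingleKeyAggregator extends KeyedProcessFunction<String, String, String> {

    // MapState scoped to the single ("dummyString") key.
    private transient MapState<String, Long> counts;

    @Override
    public void open(Configuration parameters) {
        counts = getRuntimeContext().getMapState(
                new MapStateDescriptor<>("counts", String.class, Long.class));
    }

    @Override
    public void processElement(String value, Context ctx, Collector<String> out)
            throws Exception {
        // Count occurrences per value inside the single key's MapState.
        Long current = counts.get(value);
        counts.put(value, current == null ? 1L : current + 1);
        out.collect(value + " -> " + counts.get(value));
    }
}

// Usage: all records share one key, so one subtask does all the keyed work.
// stream.keyBy(r -> "dummyString").process(new SingleKeyAggregator());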
Hi, I'm using Flink 1.8.0 and I have a Gradle project created using the CLI
util.
When running inside the IDE, the below works perfectly fine...
ClassLoader classLoader = ClassLoader.getSystemClassLoader();
System.out.println("Config: " + classLoader.getResource(fileName).getFile());
But when Submitt
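Since the message is cut off here, the following is only a guess at the
problem: when a job is submitted to a cluster, resources bundled in the user
jar are visible to Flink's user-code classloader, while getSystemClassLoader()
may not see them, so getResource(fileName) can return null. A hedged sketch
using the class's own classloader instead; MyJob is a placeholder for the
poster's class.

// Hedged sketch: classes in the user jar are loaded by Flink's user-code
// classloader, so going through the class's own classloader finds resources
// bundled in that jar. MyJob and fileName are placeholders.
ClassLoader classLoader = MyJob.class.getClassLoader();
java.net.URL resource = classLoader.getResource(fileName);
if (resource == null) {
    throw new IllegalStateException("Resource not found on classpath: " + fileName);
}
System.out.println("Config: " + resource.getFile());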
Hi all,
We are developing an application using the Flink DataSet API, focusing on
generating a CSV file from a dataset of POJOs using writeAsFormattedText and a
custom TextFormatter.
During testing of our application, we observed that the generated files
use Unix line endings (i.e., '\n')
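For reference, a minimal sketch of the setup described above; the POJO and
paths are made up. If I read the connector correctly, the '\n' does not come
from the formatter at all: TextOutputFormat appends the newline itself after
each formatted record, which would explain why the output always has Unix
line endings.

import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.io.TextOutputFormat.TextFormatter;

public class CsvExportJob {
    // Illustrative POJO; the poster's actual type will differ.
    public static class Record {
        public String name = "example";
        public int count = 1;
    }

    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
        DataSet<Record> records = env.fromElements(new Record(), new Record());

        // The formatter only controls the record body; TextOutputFormat
        // appends the '\n' line terminator itself.
        records.writeAsFormattedText("/tmp/records.csv",
                (TextFormatter<Record>) r -> r.name + "," + r.count);

        env.execute("csv export");
    }
}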
Hey Flink users,
We're currently using Flink 1.7.2 with a job that uses FlinkKafkaProducer with
its write semantic set to Semantic.EXACTLY_ONCE. When there is a job failure
and restart (in our case from a checkpoint timeout), it enters a failure loop
that requires a cancellation and resubmission to fix. Th
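In case comparing configurations helps, below is a hedged sketch of an
EXACTLY_ONCE producer setup on the universal connector; the broker address and
topic are placeholders. One commonly cited cause of post-restart failure loops
with EXACTLY_ONCE is a transaction.timeout.ms shorter than the time it takes
to recover, so raising it (within the broker's transaction.max.timeout.ms) may
be worth checking, though whether that applies here is only an assumption.

import java.util.Properties;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer;
import org.apache.flink.streaming.util.serialization.KeyedSerializationSchemaWrapper;

public class ExactlyOnceProducer {
    public static FlinkKafkaProducer<String> create() {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "broker:9092"); // placeholder
        // Transactions must outlive recovery; this value must not exceed the
        // broker-side transaction.max.timeout.ms.
        props.setProperty("transaction.timeout.ms", "900000");

        return new FlinkKafkaProducer<>(
                "output-topic", // placeholder topic
                new KeyedSerializationSchemaWrapper<>(new SimpleStringSchema()),
                props,
                FlinkKafkaProducer.Semantic.EXACTLY_ONCE);
    }
}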
Hi Averell,
Would you please share the Flink web UI job graph to illustrate the change
after you append a map operator?
Best
Yun Tang
From: Le-Van Huyen
Sent: Monday, May 6, 2019 11:15
To: Yun Tang
Cc: user@flink.apache.org
Subject: Re: IllegalArgumentException with
I read byte data from Kafka. I use a class ProtoSchema that implements
DeserializationSchema to get the actual Java class. My question is: how can I
convert the byte data to Row using just ProtoSchema? What if the data structure
is nested?
Thank you.
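A hedged sketch of one way to do this: have the schema implement
DeserializationSchema<Row>, map each Protobuf field to a Row position, and
represent nested messages as nested Rows, mirroring the structure in
getProducedType(). ProtoRowSchema and all field names here are made up; the
commented parseFrom line stands in for the actual Protobuf parsing.

import java.io.IOException;

import org.apache.flink.api.common.serialization.DeserializationSchema;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.types.Row;

// Illustrative schema: bytes -> Row, with a nested Row for a nested message.
public class ProtoRowSchema implements DeserializationSchema<Row> {

    @Override
    public Row deserialize(byte[] message) throws IOException {
        // Hypothetical: parse with the generated Protobuf class, e.g.
        // MyProto proto = MyProto.parseFrom(message);
        Row nested = Row.of("street", 42);   // nested message -> nested Row
        return Row.of("id-123", 7L, nested); // top-level fields + nested Row
    }

    @Override
    public boolean isEndOfStream(Row nextElement) {
        return false;
    }

    @Override
    public TypeInformation<Row> getProducedType() {
        // Must mirror the Row structure built in deserialize().
        return Types.ROW(Types.STRING, Types.LONG,
                Types.ROW(Types.STRING, Types.INT));
    }
}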
Hello Flinkers,
I am experimenting a bit with the DataSet API and I have written a simple
program that joins two (key, value) datasets by key. The server I am running
my experiments on has 12 cores with 4 threads each, thus I have set the number
of slots for a TaskManager to 12x4=48 to leverage the full
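For reference, a minimal sketch of the join described above (DataSet API); the
data and the parallelism value are illustrative. With 48 slots, each slot can
run one parallel instance of the join.

import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.tuple.Tuple2;

public class JoinJob {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(48); // one parallel task per available slot

        DataSet<Tuple2<Integer, String>> left =
                env.fromElements(Tuple2.of(1, "a"), Tuple2.of(2, "b"));
        DataSet<Tuple2<Integer, Double>> right =
                env.fromElements(Tuple2.of(1, 0.5), Tuple2.of(2, 1.5));

        left.join(right)
            .where(0)   // key field of the left dataset
            .equalTo(0) // key field of the right dataset
            .print();   // print() triggers execution
    }
}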
I am already using JDK 1.8.
I have searched a lot on the internet for how to fix this issue, but have not
found a solution.
Thanks for announcing the Call for Presentations here!
Since the deadline is approaching, I wanted to bump up this thread to
remind everybody to submit talks!
Please reach out to me or Fabian directly if you have any questions or if
you need any support!
On Thu, Apr 11, 2019 at 3:47 PM Fabian H
Hi Peter,
I also encountered this issue. As far as I know, it is not currently
possible to stream from files (or any bounded stream) into a
*StreamingFileSink*.
This is because files are rolled over only on checkpoints and NOT when the
stream closes. This is due to the fact that at the function le
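To make that interaction concrete, a minimal sketch of the setup being
discussed; the paths and checkpoint interval are made up. Because part files
are finalized only on checkpoints, a bounded input that finishes between
checkpoints can leave the final part files stuck in the in-progress state.

import org.apache.flink.api.common.serialization.SimpleStringEncoder;
import org.apache.flink.core.fs.Path;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink;

public class FileSinkJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.enableCheckpointing(10_000); // part files are rolled on these checkpoints

        StreamingFileSink<String> sink = StreamingFileSink
                .forRowFormat(new Path("/tmp/output"),
                        new SimpleStringEncoder<String>("UTF-8"))
                .build();

        env.readTextFile("/tmp/input") // bounded source, per the scenario above
           .addSink(sink);

        env.execute("streaming file sink job");
    }
}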