Thanks Yang and Sean. I have a couple of questions:
1) Suppose the scenario of bringing back the entire cluster:
a) In that case, at least one job manager out of the HA group should be up
and running, right? or
b) If all the job managers fail, does this still work? In that case,
please let me know t
After reading about FlinkKafkaProducer011 and the 2PC function in Flink, my understanding is:
when snapshotState() is called,
the current transaction is pre-committed
and added to the state.
when the checkpoint completes and notifyCheckpointComplete() is called,
the producer commits the current transaction to the brokers.
when initializeState() is called,
transactions are restored from the state.
c
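Below is a minimal sketch (not the actual FlinkKafkaProducer011 source) of how that flow maps onto Flink's TwoPhaseCommitSinkFunction base class. TransactionalLogSink and LogTxn are made-up names for illustration; the sink just buffers strings per checkpoint and prints them on commit.

import java.util.ArrayList;
import java.util.List;

import org.apache.flink.api.common.ExecutionConfig;
import org.apache.flink.api.common.typeutils.base.VoidSerializer;
import org.apache.flink.api.java.typeutils.runtime.kryo.KryoSerializer;
import org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction;

public class TransactionalLogSink
        extends TwoPhaseCommitSinkFunction<String, TransactionalLogSink.LogTxn, Void> {

    /** Per-checkpoint transaction holding the records written since the last barrier. */
    public static class LogTxn {
        public List<String> buffered = new ArrayList<>();
    }

    public TransactionalLogSink() {
        super(new KryoSerializer<>(LogTxn.class, new ExecutionConfig()),
                VoidSerializer.INSTANCE);
    }

    @Override
    protected LogTxn beginTransaction() {
        // Opened at start-up and again right after each snapshot: the next
        // checkpoint interval writes into a fresh transaction.
        return new LogTxn();
    }

    @Override
    protected void invoke(LogTxn txn, String value, Context context) {
        // Regular records go into the currently open transaction.
        txn.buffered.add(value);
    }

    @Override
    protected void preCommit(LogTxn txn) {
        // Runs as part of snapshotState(): flush the transaction so its data is
        // durable; the base class then stores the pending transaction in state.
    }

    @Override
    protected void commit(LogTxn txn) {
        // Runs after notifyCheckpointComplete() (and again on restore for pending
        // transactions recovered from state): make the data visible downstream.
        txn.buffered.forEach(System.out::println);
    }

    @Override
    protected void abort(LogTxn txn) {
        // Called when a transaction has to be discarded, e.g. on a failure before
        // the checkpoint completed.
        txn.buffered.clear();
    }
}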
Hi everyone,
I would like to reach out to users who use or are interested in
the `ExternallyInducedSource` or `WithMasterCheckpointHook` interfaces.
The background of this survey is that I'm doing some rework of
`CheckpointCoordinator`. I encountered a problem in which the semantics of
`MasterTriggerRes
Hi Vishwas,
Sorry for the confusion; what Theo said previously is what I meant.
What I said earlier is that, from Flink's side, if we do not restore
from a checkpoint/savepoint, all the TMs will have no state, so the job
starts from scratch.
Best,
Congxian
Thanks everyone for your replies.
So far all the replies tend toward option 1 (dropping Python 2 support in 1.10), and
I will continue to listen for any other opinions.
@Jincheng @Hequn, you are right, things become more complicated if dropping
Python 2 support is performed after Python UDF has b
Thank you @Chesnay!
I also managed to pass arguments to a RichFilterFunction (new
MyFilterFunc(Integer threshold)) by defining its constructor.
If there's a better way to pass arguments, I'd appreciate it if you let me
know.
On Tue, 8 Oct 2019 at 19:58, Chesnay Schepler wrote:
> You can compute
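For reference, a minimal sketch of that constructor approach; ThresholdFilter and the integer threshold are illustrative names, not from this thread.

import org.apache.flink.api.common.functions.RichFilterFunction;

public class ThresholdFilter extends RichFilterFunction<Integer> {

    private final int threshold;

    public ThresholdFilter(int threshold) {
        // The value is captured when the job graph is built on the client and is
        // shipped to the task managers with the serialized function instance.
        this.threshold = threshold;
    }

    @Override
    public boolean filter(Integer value) {
        return value > threshold;
    }
}

Usage would then simply be stream.filter(new ThresholdFilter(10)).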
Hi Sri,
I logged a Cloudera ticket as you recommended, got help from their support
team, and was able to get my application running.
We had to “kinit” inside the shell action using a keytab in the following
format: “kinit primary/instance@REALM -kt primary.keytab”
The keytab file had to b
We see a very similar (if not the same) error running version 1.9 on
Kubernetes. So far what we have discovered is that a taskmanager gets
killed and a new one is created, but the JM still thinks it needs to connect to
the old (now dead) TM. I was even able to see a taskmanager on the
same host
Hi Theo,
It is a single sequential stream.
If I read your response correctly, you are arguing that summing a bunch of
numbers is not much more computationally intensive than assigning
timestamps to those numbers, so if the latter has to be done sequentially
anyway, then why should the former be d
Sorry, I've been away on leave. I'll check ASAP.
On Thu, 3 Oct 2019 at 20:52, Zili Chen wrote:
> Does the log you attached above come from a TaskManager Node? If so,
> what state is the Job node it tried to connect to? Did it crash?
>
> BTW, it would be helpful if you can attach more logs of TM and JM
Hi Filip,
I don't really understand your problem here.
Do you have a source with a single sequential stream where, from time to time,
there is a barrier element? Or do you have a source like Kafka with multiple
partitions?
If you have case 2 with multiple partitions, what exactly do you mean by "
Hi Vishwas,
With "from scratch", Congxian means that Flink won't load any state automatically
and starts as if there were no state. Of course, if the Kafka consumer group
already exists and you have configured Flink to start from group offsets if there
is no state yet, it will start from the group of
Hi all,
Any comments on the best way to pass program arguments (let's say we are
passing credentials) securely to the Flink job?
I found a way to hide them from the web UI, but I am still looking for a
solution along the lines of
fetching them from the environment or some other source, so that we
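One possible approach (just a sketch, not an official Flink mechanism) is to keep the secret out of the program arguments entirely and read it from an environment variable inside open() of a rich function, so it never appears in the submitted arguments or the web UI. SECRET_API_KEY and SecretAwareMapper are hypothetical names.

import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.configuration.Configuration;

public class SecretAwareMapper extends RichMapFunction<String, String> {

    private transient String apiKey;

    @Override
    public void open(Configuration parameters) {
        // Read the credential on the task manager at runtime. The variable must be
        // set in the TaskManager processes' environment by your deployment tooling.
        apiKey = System.getenv("SECRET_API_KEY");
        if (apiKey == null) {
            throw new IllegalStateException("SECRET_API_KEY is not set on this task manager");
        }
    }

    @Override
    public String map(String value) {
        // Use the credential here; it is never part of the submitted program arguments.
        return value + " (authenticated)";
    }
}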
Hi Yun,
Thanks. Apropos the keyBy partitioner, I first tried it directly with
.keyBy(x -> x.getId()). It is true that the items get evenly distributed
among the available task slots, but since there is a single item per key,
the aggregations that should be done in parallel become trivial, and the
Hi Congxian,
Thanks for getting back to me. Why would the streaming start from scratch if my
consumer group does not change? I start from the group offsets:
env.addSource(consumer.setStartFromGroupOffsets()).name(source + "- kafka
source")
So when I restart the job it should consume from the last commi
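For context, a hedged sketch of the start-position behaviour being discussed, using the universal FlinkKafkaConsumer (the exact connector class may differ by version); topic, group, and broker names are placeholders. The key point is that setStartFromGroupOffsets() only applies when the job starts without restored state; when restoring from a checkpoint or savepoint, the offsets stored in Flink's state take precedence.

import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

public class GroupOffsetStart {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092");
        props.setProperty("group.id", "my-group"); // committed offsets live under this group

        FlinkKafkaConsumer<String> consumer =
                new FlinkKafkaConsumer<>("my-topic", new SimpleStringSchema(), props);

        // Only used when the job starts WITHOUT restored state; with a
        // checkpoint/savepoint restore, the offsets in Flink state win.
        consumer.setStartFromGroupOffsets();

        env.addSource(consumer).name("kafka source").print();
        env.execute("group-offset-start");
    }
}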
Thanks a lot.
On Wed, Oct 9, 2019, 8:55 AM Chesnay Schepler wrote:
> Java 11 support will be part of Flink 1.10 (FLINK-10725). You can take the
> current master and compile&run it on Java 11.
>
> We have not investigated later Java versions yet.
> On 09/10/2019 14:14, Vishal Santoshi wrote:
>
>
Hi Filip,
On the whole, I also think that to increase the parallelism of the reduce to
more than 1, we should use a parallel window to compute the partial sums and
then sum the partial sums with a WindowAll.
As for assignTimestampsAndWatermarks, from my side I think the current
us
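A small sketch of that two-stage idea (the four artificial keys, the tumbling 10-second event-time windows, and the sample values are all illustrative): a parallel keyed window produces partial sums, then a non-parallel windowAll adds them up.

import org.apache.flink.api.java.functions.KeySelector;
import org.apache.flink.streaming.api.TimeCharacteristic;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.timestamps.AscendingTimestampExtractor;
import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;

public class TwoStageSum {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);

        DataStream<Long> values = env
                .fromElements(1L, 2L, 3L, 4L, 5L, 6L, 7L, 8L)
                .assignTimestampsAndWatermarks(new AscendingTimestampExtractor<Long>() {
                    @Override
                    public long extractAscendingTimestamp(Long value) {
                        return value; // treat the value itself as a millisecond timestamp
                    }
                });

        DataStream<Long> total = values
                // Stage 1: spread records over a few artificial keys so the partial
                // sums run in parallel (one record per key would make each partial
                // aggregation trivial, as noted above).
                .keyBy(new KeySelector<Long, Long>() {
                    @Override
                    public Long getKey(Long value) {
                        return value % 4;
                    }
                })
                .window(TumblingEventTimeWindows.of(Time.seconds(10)))
                .reduce((a, b) -> a + b)
                // Stage 2: a non-parallel windowAll merges the few partial sums.
                .windowAll(TumblingEventTimeWindows.of(Time.seconds(10)))
                .reduce((a, b) -> a + b);

        total.print();
        env.execute("two-stage-sum");
    }
}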
Hi all,
Any comments on the best way to pass program arguments securely to the Flink
job?
Regards,
Vivekanand.
Java 11 support will be part of Flink 1.10 (FLINK-10725). You can take
the current master and compile&run it on Java 11.
We have not investigated later Java versions yet.
On 09/10/2019 14:14, Vishal Santoshi wrote:
Thank you. A related question: has Flink been tested with JDK 11 or
above?
O
Thank you. A related question: has Flink been tested with JDK 11 or above?
On Tue, Oct 8, 2019, 5:18 PM Steven Nelson wrote:
>
> https://flink.apache.org/downloads.html#apache-flink-190
>
>
> Sent from my iPhone
>
> On Oct 8, 2019, at 3:47 PM, Vishal Santoshi
> wrote:
>
> where do I get the c
No, parameters stored in the global job parameters cannot be hidden.
Only options configured in flink-conf.yaml are hidden, iff their key
contains "password" or "secret".
On 09/10/2019 08:26, vivekanand yaram wrote:
Hello All,
I'm just wondering, is there a way to hide the user configuratio
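A hedged sketch of the behaviour described above, assuming a 1.9-era Configuration API and a hypothetical key my.sink.password: because the key contains "password", the web UI masks its value, while code that can load flink-conf.yaml (i.e. runs where FLINK_CONF_DIR is available) still sees the real value. Global job parameters, by contrast, are always shown in clear text.

import org.apache.flink.configuration.Configuration;
import org.apache.flink.configuration.GlobalConfiguration;

public class ReadMaskedOption {
    public static void main(String[] args) {
        // Loads flink-conf.yaml from the configuration directory of this process.
        Configuration conf = GlobalConfiguration.loadConfiguration();

        // The web UI shows this key with its value masked because the key
        // contains "password"; the code still gets the configured value.
        String password = conf.getString("my.sink.password", null);
        System.out.println(password != null ? "password loaded" : "not configured");
    }
}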
Hi,
After using Redis, why do you need to care about eliminating duplicated data?
If you write with the same key, Redis will take care of the deduplication.
Best,
Congxian
Fabian Hueske wrote on Wednesday, October 2, 2019, at 5:30 PM:
> Hi,
>
> State is always associated with a single task in Flink.
> The state of a task c
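A small illustration of that point with a plain Jedis client (the localhost address and the order:42 key are made up): writing the same key twice leaves exactly one value, so a replayed record does not create a duplicate; the later write simply overwrites the earlier one.

import redis.clients.jedis.Jedis;

public class RedisIdempotentWrite {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // First write and a replayed write of the same logical record.
            jedis.set("order:42", "amount=10");
            jedis.set("order:42", "amount=10");

            // Still exactly one value stored under the key.
            System.out.println(jedis.get("order:42")); // amount=10
        }
    }
}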