Hi Flink users,
I used three dependencies in my pom.xml in this order: kubernetes-client, flink-clients_2.11, flink-kubernetes_2.11. When I execute
DeploymentList deploymentList =
    client.apps().deployments().inNamespace(namespace).list();
it throws the exception below.
How can I solve this excep
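(A hedged diagnostic sketch, not from the original mail: assuming the exception is a NoSuchMethodError / NoClassDefFoundError, a common cause is two different fabric8 kubernetes-client versions ending up on the classpath when kubernetes-client is declared alongside flink-kubernetes. Printing where the relevant classes are loaded from can confirm such a clash; the class ClasspathCheck below is made up for illustration, the fabric8 classes are the ones the snippet above already uses.)

import io.fabric8.kubernetes.api.model.apps.Deployment;
import io.fabric8.kubernetes.client.KubernetesClient;

public class ClasspathCheck {
    public static void main(String[] args) {
        // Print which jar each class is loaded from; different locations/versions
        // for client vs. model classes usually indicate a dependency clash.
        System.out.println(KubernetesClient.class.getProtectionDomain()
                .getCodeSource().getLocation());
        System.out.println(Deployment.class.getProtectionDomain()
                .getCodeSource().getLocation());
    }
}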
Hi, Ron
I want to parse the functions (unix_timestamp, from_unixtime, etc.) at task runtime.
Best regards,
Xianxun
Hey Jacqlyn,
According to the stack trace, it seems that there is a problem when
the checkpoint is triggered. Is this problem occurring after the restore?
Would you like to share some logs related to restoring?
Best,
Yanfei
Jacqlyn Bender via user wrote on Fri, Sep 8, 2023, at 05:11:
>
> Hey folks,
>
>
> We experien
Hi, Xianxun
Do you mean that unix_timestamp() is resolved to the time at which the query is
compiled in streaming mode?
Best,
Ron
Xianxun Ye wrote on Thu, Sep 7, 2023, at 18:19:
> Hi Team,
>
> I want to serialize the ResolvedExpression to String or byte[], transmit
> it into the LookupFunction, and parse it back to
Hey folks,
We experienced a pipeline failure where our job manager restarted and we
were, for some reason, unable to restore from our last successful checkpoint.
We had regularly completed checkpoints every 10 minutes up to this failure
and 0 failed checkpoints logged. We are using Flink version 1.17.1.
Thanks to Jane for following up on this issue! +1 for adding it back first.
For the deprecation, considering that users aren't usually motivated to
upgrade to a major version (1.14, from two years ago, wasn't that old,
which may be
part of the reason for not receiving more feedback), I'd recommen
Apologies.
The question is: from our understanding, we do not need to implement a
counter for the numRecordsOutPerSecond metric explicitly in our code? Is this
metric automatically exposed to Prometheus every second for every transaction
that goes into asyncInvoke() of our RichAsyncFunction?
I tr
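(A minimal sketch, not from this thread: numRecordsOut / numRecordsOutPerSecond are standard operator I/O metrics that Flink reports on its own, so no user code is needed for them; you only register your own counter/meter for custom rates such as successful third-party calls. The class and metric names below are made up for illustration.)

import org.apache.flink.configuration.Configuration;
import org.apache.flink.metrics.Counter;
import org.apache.flink.metrics.Meter;
import org.apache.flink.metrics.MeterView;
import org.apache.flink.streaming.api.functions.async.ResultFuture;
import org.apache.flink.streaming.api.functions.async.RichAsyncFunction;

import java.util.Collections;

public class ThirdPartySender extends RichAsyncFunction<String, String> {

    private transient Meter successRate;

    @Override
    public void open(Configuration parameters) {
        // Custom metric: rate of successful third-party calls, derived from a counter.
        Counter successes = getRuntimeContext().getMetricGroup().counter("thirdPartySuccesses");
        successRate = getRuntimeContext().getMetricGroup()
                .meter("thirdPartySuccessesPerSecond", new MeterView(successes));
    }

    @Override
    public void asyncInvoke(String input, ResultFuture<String> resultFuture) throws Exception {
        // ... call the third party asynchronously; on success mark the custom meter.
        successRate.markEvent();
        resultFuture.complete(Collections.singleton(input));
    }
}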
+1 to fix it first.
I also agree to deprecate it if there are few people using it,
but this should be another discussion thread within dev+user ML.
In the future, we are planning to introduce a user-defined operator
based on the TVF functionality, which I think can fully subsume
the UDTAG, cc @Timo
Hi,
The Flink docs mention flink-s3-fs-presto as the recommended filesystem for
checkpointing. Do we have any more details on how this was concluded? I am
part of a platform team where we are deciding whether to support both
filesystems or not. This will be a very useful input for us.
Thanks in advance.
Hi,
I am new to Flink. I am trying to write a job that updates the keyed state
when a broadcast message is received in a KeyedBroadcastProcessFunction.
I was wondering: will the *ctx.applyToKeyedState* call in
processBroadcastElement complete before further messages are
processed in the *pro
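(A minimal sketch of the pattern being asked about, with made-up types and state names, assuming String keys, events and rules. applyToKeyedState is invoked synchronously inside processBroadcastElement, and an operator handles one record at a time, so to my understanding it finishes before the next element is processed.)

import org.apache.flink.api.common.state.KeyedStateFunction;
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.streaming.api.functions.co.KeyedBroadcastProcessFunction;
import org.apache.flink.util.Collector;

public class RuleApplier
        extends KeyedBroadcastProcessFunction<String, String, String, String> {

    private final ValueStateDescriptor<Long> countDesc =
            new ValueStateDescriptor<>("count", Long.class);

    @Override
    public void processElement(String value, ReadOnlyContext ctx, Collector<String> out)
            throws Exception {
        // Keyed state for the current key is read and updated here.
        ValueState<Long> count = getRuntimeContext().getState(countDesc);
        Long current = count.value();
        count.update(current == null ? 1L : current + 1L);
        out.collect(value);
    }

    @Override
    public void processBroadcastElement(String rule, Context ctx, Collector<String> out)
            throws Exception {
        // Resets the counter for every key that currently has state registered;
        // this call completes before processBroadcastElement returns.
        ctx.applyToKeyedState(countDesc, new KeyedStateFunction<String, ValueState<Long>>() {
            @Override
            public void process(String key, ValueState<Long> state) throws Exception {
                state.update(0L);
            }
        });
    }
}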
Hi Nick,
Short (and somewhat superficial answer):
* (assuming your producer supports exactly-once mode (e.g. Kafka))
* Duplicates should only ever appear when your job restarts after a hiccup
* However, if your job is properly configured (checkpointing/Kafka
transactions) everything sh
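(Not taken from Nick's setup, just a hedged sketch of what a "properly configured" sink can look like with the newer KafkaSink API from flink-connector-kafka; broker address, topic and transactional-id prefix are placeholders. Checkpointing must be enabled on the job, and the downstream application has to read with isolation.level=read_committed, otherwise it will also see records from uncommitted or aborted transactions.)

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.base.DeliveryGuarantee;
import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
import org.apache.flink.connector.kafka.sink.KafkaSink;

// Exactly-once sink: records are written inside Kafka transactions that are
// committed on checkpoint completion.
KafkaSink<String> sink = KafkaSink.<String>builder()
        .setBootstrapServers("broker:9092")
        .setRecordSerializer(
                KafkaRecordSerializationSchema.builder()
                        .setTopic("output-topic")
                        .setValueSerializationSchema(new SimpleStringSchema())
                        .build())
        .setDeliveryGuarantee(DeliveryGuarantee.EXACTLY_ONCE)
        .setTransactionalIdPrefix("my-job-tx")
        .build();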
Hi,
I am configured with exactly-once.
I see that the Flink producer sends duplicate messages (sometimes a few copies)
that are later consumed only once by the other application.
How can I avoid duplications?
regards,
nick
Jung,
I don't want to sound unhelpful, but I think the best thing for you to do
is simply to try these different models in your local env.
It should be very easy to get started with the Kubernetes Operator on
Kind/Minikube (
https://nightlies.apache.org/flink/flink-kubernetes-operator-docs-main/doc
I don't have hands-on knowledge of the Windows compatibility, but I know people
who use it, and I would do something like:
* Standalone AD server
* Create a keytab for each user
* Mount it
* Start the workload with "security.kerberos.login.keytab"
AFAIK there are similar tools on Windows, like those MIT Kerberos has, if kinit is
Hi,
You cannot access the keyed state within #open(). It can only be
accessed under a keyed context (a key is selected while processing an
element, e.g. #processElement).
Best,
Zakelly
On Thu, Sep 7, 2023 at 4:55 PM Krzysztof Chmielewski
wrote:
>
> Hi,
> I'm having a problem with my toy flink
Hi Krzysztof again,
Just for clarity: your sample code [1] tries to count the number of events per
key. I assume this is your intention?
Anyway, your previous implementation initialized the keyed state keyCounterState
in the open function, which is the right place to do this;
you just wouldn’t wa
Hi,
What's your question?
Best
Ron
patricia lee wrote on Thu, Sep 7, 2023, at 14:29:
> Hi flink users,
>
> I used Async IO (RichAsyncFunction) for sending 100 txns to a 3rd party.
>
> I checked the runtimeContext and it has the metric numRecordsSent; we wanted
> to expose this metric to our Prometheus server so
Hello Chen,
Thanks for your reply! I have further questions as follows...
1. In the case of non-reactive mode in Flink 1.18, if the autoscaler adjusts
parallelism, what is the difference compared to using 'reactive' mode?
2. In case I use Flink 1.15~1.17 without the autoscaler, is the difference
of using 'rea
Thanks,
that helped.
Regards,
Krzysztof Chmielewski
On Thu, Sep 7, 2023 at 09:52, Schwalbe Matthias wrote:
> Hi Krzysztof,
>
>
>
> You cannot access keyed state in open().
>
> Keyed state has a value per key.
>
> In theory you would have to initialize per possible key, which is quite
> impracti
Hi,
I have a toy Flink job [1] where I have a KeyedProcessFunction
implementation [2] that also implements CheckpointedFunction. My stream
definition has a .keyBy(...) call, as you can see in [1].
However, when I try to run this toy job I'm getting an exception
from CheckpointedFunction::init
Hi Krzysztof,
You cannot access keyed state in open().
Keyed state has a value per key.
In theory you would have to initialize per possible key, which is quite
impractical.
However, you don't need to initialize state; the initial state per key defaults
to the default value of the type (null for ob
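(A minimal sketch of what Matthias and Zakelly describe, assuming a simple count-per-key job; KeyCounter and the keyCounterState name come from the thread, the rest is illustrative. The descriptor/handle can be obtained in open(), but value()/update() only work inside processElement(), where a key is in scope, and the first value() per key returns null.)

import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;

public class KeyCounter extends KeyedProcessFunction<String, String, String> {

    private transient ValueState<Long> keyCounterState;

    @Override
    public void open(Configuration parameters) {
        // Registering the state handle here is fine; calling value()/update() here
        // is not, because no key has been selected yet.
        keyCounterState = getRuntimeContext().getState(
                new ValueStateDescriptor<>("keyCounterState", Long.class));
    }

    @Override
    public void processElement(String value, Context ctx, Collector<String> out)
            throws Exception {
        Long count = keyCounterState.value();   // null on the first element for this key
        long updated = (count == null) ? 1L : count + 1L;
        keyCounterState.update(updated);
        out.collect(value + " seen " + updated + " times");
    }
}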
Hi,
I'm having a problem with my toy Flink job where I would like to access a
ValueState of a keyed stream. The job setup can be found here [1]; it is
fairly simple:
env
    .addSource(new CheckpointCountingSource(100, 60))
    .keyBy(value -> value)
    .process(new KeyCounter())
    .addSink(new ConsoleSink());