Hi, I have a problem where Flink deletes checkpoint information on a
Kubernetes HA setup even though
execution.checkpointing.externalized-checkpoint-retention=RETAIN_ON_CANCELLATION
is set.
config documentation:
"RETAIN_ON_CANCELLATION": Checkpoint state is kept when the owning job is
cancelled or fails.
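For reference, the relevant settings as they would typically appear together in flink-conf.yaml (a sketch; the storage paths are placeholders, not values from the thread):

```yaml
# Sketch of the relevant flink-conf.yaml entries; paths are placeholders.
execution.checkpointing.externalized-checkpoint-retention: RETAIN_ON_CANCELLATION
high-availability: kubernetes
high-availability.storageDir: s3://my-bucket/flink/ha          # placeholder
state.checkpoints.dir: s3://my-bucket/flink/checkpoints        # placeholder
```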
Hi, how do I implement AsyncLookupFunction correctly? I implemented an
AsyncLookupFunction; the eval function has the following signature:
https://nightlies.apache.org/flink/flink-docs-release-1.17/api/java/org/apache/flink/table/functions/AsyncLookupFunction.html
eval(CompletableFuture<Collection<RowData>> future, Object... keys)
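For what it's worth, a minimal sketch of how an AsyncLookupFunction implementation usually looks in 1.17: the final eval(...) is provided by the base class, and user code overrides asyncLookup. The external lookup here is faked with supplyAsync, and the returned row is an assumption, not code from the thread:

```scala
import java.util.{Collection => JCollection, Collections}
import java.util.concurrent.CompletableFuture
import java.util.function.Supplier

import org.apache.flink.table.data.{GenericRowData, RowData, StringData}
import org.apache.flink.table.functions.AsyncLookupFunction

// Sketch: never block inside asyncLookup itself; do the blocking work on
// another thread and complete the future from there.
class DimTableLookup extends AsyncLookupFunction {
  override def asyncLookup(keyRow: RowData): CompletableFuture[JCollection[RowData]] =
    CompletableFuture.supplyAsync(new Supplier[JCollection[RowData]] {
      override def get(): JCollection[RowData] = {
        // placeholder for a real client call keyed on keyRow
        Collections.singletonList(
          GenericRowData.of(StringData.fromString("looked-up-value")): RowData)
      }
    })
}
```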
There's friction when using Scala/Java protobuf objects and trying to convert
them into a Flink Table from a DataStream[ProtobufObject].
Scenario:
Input is a DataStream[ProtobufObject] from a kafka topic that we read data
from and deserialise into Protobuf objects (scala case classes or
alternatively Java
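A minimal sketch of the conversion path being described; `Event` is a hypothetical stand-in for the generated protobuf class. One common workaround is to map the protobuf object to a flat case class first, so the planner sees plain case-class fields rather than protobuf's recursive/byte-string types:

```scala
import org.apache.flink.streaming.api.scala._
import org.apache.flink.table.api.bridge.scala.StreamTableEnvironment

// Stand-in for the protobuf class; in practice you would map
// DataStream[ProtobufObject] to a flat case class like this first.
case class Event(id: String, value: Long)

object ProtobufToTable {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    val tableEnv = StreamTableEnvironment.create(env)

    val stream: DataStream[Event] = env.fromElements(Event("a", 1L), Event("b", 2L))
    val table = tableEnv.fromDataStream(stream)
    table.printSchema()
  }
}
```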
pplication breaks even though I am only using what I think is quite
innocent code?
On Fri, Feb 3, 2023 at 4:52 PM Clemens Valiente wrote:
>
> Hi, I have been struggling with this particular Exception for days and
> thought I'd ask for help here.
>
> I am using a KeyedProcessF
Hi, I have been struggling with this particular Exception for days and
thought I'd ask for help here.
I am using a KeyedProcessFunction with a
private lazy val state: ValueState[Feature] = {
  val stateDescriptor =
    new ValueStateDescriptor[Feature]("CollectFeatureProcessState",
      createTypeInfo
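For context, the commonly recommended pattern for keyed state is to build the descriptor in open(), once the runtime context exists, rather than in a class-level lazy val. A hedged sketch (`Feature` and the method bodies are stand-ins, not the thread's actual code):

```scala
import org.apache.flink.api.common.state.{ValueState, ValueStateDescriptor}
import org.apache.flink.configuration.Configuration
import org.apache.flink.streaming.api.functions.KeyedProcessFunction
import org.apache.flink.util.Collector

case class Feature(id: String) // stand-in for the thread's Feature type

// Sketch: initialize state in open() instead of a lazy val, so the
// descriptor is created after the function is deserialized on the TM.
class CollectFeatureProcess extends KeyedProcessFunction[String, Feature, Feature] {
  @transient private var state: ValueState[Feature] = _

  override def open(parameters: Configuration): Unit = {
    val descriptor =
      new ValueStateDescriptor[Feature]("CollectFeatureProcessState", classOf[Feature])
    state = getRuntimeContext.getState(descriptor)
  }

  override def processElement(
      value: Feature,
      ctx: KeyedProcessFunction[String, Feature, Feature]#Context,
      out: Collector[Feature]): Unit = {
    state.update(value)
    out.collect(value)
  }
}
```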
Hi everyone
I noticed that when browsing the Scala DataStream/Table API bridge in my
IDE I cannot see the source code. I believe it is because the sources are
missing on Maven:
https://repo1.maven.org/maven2/org/apache/flink/flink-table-api-scala-bridge_2.12/1.15.2/
If you have a look at t
on you see is not coming from Datadog. They occur because,
> based on the configured scope formats, metrics from different jobs running
> in the same JobManager resolve to the same name (the standby jobmanger is
> irrelevant). Flink rejects these metrics, because if it were to send these out
Hi,
we are using datadog as our metrics reporter as documented here:
https://ci.apache.org/projects/flink/flink-docs-master/docs/deployment/metric_reporters/#datadog
our jobmanager scope is
metrics.scope.jm: flink.jobmanager
metrics.scope.jm.job: flink.jobmanager
since datadog doesn't all
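A sketch of scope formats that keep per-job metric names distinct (the variable names follow the Flink metrics scope docs; whether the result fits Datadog's naming rules is a separate question):

```yaml
metrics.scope.jm: flink.jobmanager
# include the job name so metrics from different jobs on the same
# JobManager do not resolve to identical names and get rejected
metrics.scope.jm.job: flink.jobmanager.<job_name>
```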
Is it possible to rename execution stages from the Table API? Right now the
entire select transformation appears in plain text in the task name, so the
log entries from ExecutionGraph are over 10,000 characters long and the log
files are incredibly difficult to read.
for example a simple selected fie
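Depending on the Flink version there is a planner option aimed at exactly this; treat the key below as something to verify against your version's TableConfigOptions rather than a confirmed fix:

```yaml
# Verify that this option exists in your Flink version before relying on it.
table.exec.simplify-operator-name-enabled: true
```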
s to be a bug. I will open an issue once I get you
> feedback. We might simply throw an exception for top-level usage then.
>
> Regards,
> Timo
>
>
>
> On 14.07.21 06:33, Clemens Valiente wrote:
> > Hi,
> >
> > we created a new AggregateFunction with Accumula
Hi,
we created a new AggregateFunction with Accumulator as Mapview as follows
class CountDistinctAggFunction[T]
    extends AggregateFunction[lang.Integer, MapView[T, lang.Integer]] {
  override def createAccumulator(): MapView[T, lang.Integer] = {
    new MapView[T, lang.Integer]()
  }
...
We had
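A hedged sketch of how the remaining pieces of such a count-distinct function usually look; the method bodies are illustrative assumptions, not the code under discussion in the thread:

```scala
import java.lang.{Integer => JInt}
import org.apache.flink.table.api.dataview.MapView
import org.apache.flink.table.functions.AggregateFunction

// Illustrative completion of the snippet above.
class CountDistinctAggFunction[T] extends AggregateFunction[JInt, MapView[T, JInt]] {

  override def createAccumulator(): MapView[T, JInt] = new MapView[T, JInt]()

  // accumulate is resolved reflectively by the planner
  def accumulate(acc: MapView[T, JInt], value: T): Unit =
    if (value != null) {
      val prev = acc.get(value)
      acc.put(value, if (prev == null) 1 else prev + 1)
    }

  override def getValue(acc: MapView[T, JInt]): JInt = {
    var n = 0
    val it = acc.keys().iterator()
    while (it.hasNext) { it.next(); n += 1 }
    n
  }
}
```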