With the legacy FlinkKafkaConsumer, overriding the isEndOfStream method of
DeserializationSchema can solve the problem.
But the new KafkaSource ignores that method (it is never called), and the
setUnbounded method seems to accept only offset- or timestamp-based stopping
criteria.
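For anyone comparing the two APIs, here is a minimal sketch of the legacy
pattern (the BoundedStringSchema name and the "END" sentinel are made up for
illustration, not from the original code):

import org.apache.flink.api.common.serialization.SimpleStringSchema;

// Legacy FlinkKafkaConsumer pattern: consumption stops as soon as
// isEndOfStream returns true for a deserialized record.
public class BoundedStringSchema extends SimpleStringSchema {
    @Override
    public boolean isEndOfStream(String nextElement) {
        return "END".equals(nextElement); // made-up sentinel value
    }
}

The closest KafkaSource replacement appears to be
setUnbounded(OffsetsInitializer.timestamp(...)) or an offsets-based
initializer, which is exactly the limitation described above.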
…of parseExpression defined in the ExpressionParser class.
> ---Original---
> From: "LIU Xiao"
> Date: Mon, Aug 2, 2021 17:29
> To: "user"
> Subject: How can I customize (BigDecimal) MathContext of
> flink-table-planner-blink?
>
After upgrading Flink from 1.6 to 1.13 (flink-table-planner-blink), the
result of our program changed:
before: 10.38288597, after: 10.38288600
We used to use "tableEnv.config().setDecimalContext(new
MathContext(MathContext.DECIMAL128.getPrecision(), RoundingMode.DOWN))"
with Flink 1.6, but flink-table-planner-blink no longer seems to provide an
equivalent setting.
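For reference, a standalone java.math demonstration (not Flink code; the
input value is made up) of how the rounding mode inside a MathContext changes
trailing digits in the way shown above:

import java.math.BigDecimal;
import java.math.MathContext;
import java.math.RoundingMode;

public class RoundingDemo {
    public static void main(String[] args) {
        BigDecimal v = new BigDecimal("10.382885999"); // hypothetical intermediate value
        // truncate toward zero at 10 significant digits: prints 10.38288599
        System.out.println(v.round(new MathContext(10, RoundingMode.DOWN)));
        // round half-up at 10 significant digits: prints 10.38288600
        System.out.println(v.round(new MathContext(10, RoundingMode.HALF_UP)));
    }
}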
resultStream.addSink(new SinkFunction<Row>() {
    @Override
    public void invoke(Row value, Context context) {
        // Log every row that reaches the sink (debugging aid)
        LOGGER.info("SinkFunction.invoke(): value={}", value);
    }
});
env.execute();
}
}
On Fri, Jul 30, 2021 at 1:51 PM, Ingo Bürk wrote:
        .map((MapFunction<Tuple2<Boolean, Row>, Row>) value -> value.f1);
resultStream.addSink(new SinkFunction<Row>() {
    @Override
    public void invoke(Row value, Context context) {
        LOGGER.info("SinkFunction.invoke(): value={}", value);
    }
});
env.execute();
I'm currently converting our old code (based on Flink 1.6) to Flink 1.13
and encountered a strange problem with a user-defined aggregate function
that takes BigDecimal as its parameter and output type:
Exception in thread "main" org.apache.flink.table.api.ValidationException:
SQL validation failed. …
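In case it helps others hitting the same ValidationException: with the 1.13
type system, a raw BigDecimal argument, accumulator field, or result usually
needs an explicit DECIMAL precision and scale. A minimal sketch (the
DecimalSum name, the DECIMAL(38, 18) hint, and the summing logic are
assumptions for illustration, not the original UDAF):

import java.math.BigDecimal;
import org.apache.flink.table.annotation.DataTypeHint;
import org.apache.flink.table.annotation.FunctionHint;
import org.apache.flink.table.functions.AggregateFunction;

// Every place a BigDecimal appears carries an explicit DECIMAL(38, 18)
// hint so the planner can derive a concrete type.
@FunctionHint(output = @DataTypeHint("DECIMAL(38, 18)"))
public class DecimalSum extends AggregateFunction<BigDecimal, DecimalSum.Acc> {

    public static class Acc {
        public @DataTypeHint("DECIMAL(38, 18)") BigDecimal sum = BigDecimal.ZERO;
    }

    @Override
    public Acc createAccumulator() {
        return new Acc();
    }

    public void accumulate(Acc acc, @DataTypeHint("DECIMAL(38, 18)") BigDecimal value) {
        if (value != null) {
            acc.sum = acc.sum.add(value);
        }
    }

    @Override
    public BigDecimal getValue(Acc acc) {
        return acc.sum;
    }
}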
Thank you for the timely help!
I've tried session mode a little bit; it is better than I thought, since
TaskManagers can be allocated and de-allocated dynamically. But it seems the
memory size of a TaskManager is fixed when the session starts, and cannot be
adjusted per job.
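(For context: a session cluster reads the TaskManager sizing once at startup
from the cluster configuration, e.g. flink-conf.yaml; the value below is only
an example:)

# flink-conf.yaml, read once when the session cluster starts;
# all TaskManagers in this session get the same size
taskmanager.memory.process.size: 4096m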
I'll try to deploy a…