Hi Jing,
Yes, FIRST is a UDAF.
I've been trying to reproduce this locally without success so far.
The query itself has more fields and aggregates. Once I can reproduce this
locally I'll try to narrow down the problematic field and share more
information.
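(For context, a FIRST-style UDAF usually keeps only the earliest value seen per group. A minimal plain-Java sketch of that accumulator logic — hypothetical names, not Flink's actual AggregateFunction API — looks like this; a subtle bug in accumulate or merge is a common cause of wrong results in such functions:)

```java
// Plain-Java sketch of "first value" accumulator semantics.
// Hypothetical class/method names for illustration only.
public class FirstAccumulator {
    private String first;   // earliest value seen so far
    private boolean seen;   // whether any value has arrived yet

    // Keep only the first non-null value that arrives.
    public void accumulate(String value) {
        if (!seen && value != null) {
            first = value;
            seen = true;
        }
    }

    // Merging partial accumulators must preserve the earlier value;
    // getting this wrong is a typical drawback to check for.
    public void merge(FirstAccumulator other) {
        if (!seen) {
            first = other.first;
            seen = other.seen;
        }
    }

    public String getValue() {
        return first;
    }
}
```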
On Tue, Jul 27, 2021, 05:17 JING ZHANG
Hi Mason,
In RocksDB, each state corresponds to a column family, and we could
aggregate all RocksDB native metrics per column family. If my understanding is
right, you are hoping that all state latency metrics for a particular state
could be aggregated at the state level?
Best
Yun Tang
__
Hi, Ivan
My gut feeling is that it is related to FLINK-22535. Could @Yun Gao
take another look? If that is the case, you can upgrade to 1.13.1.
Best,
Yangze Guo
On Tue, Jul 27, 2021 at 9:41 AM Ivan Yang wrote:
>
> Dear Flink experts,
>
> We recently ran into an issue during a job cancellation a
Hi Yuval,
I ran a similar SQL query (without the `FIRST` aggregate function), and there
was nothing wrong.
Is `FIRST` a custom aggregate function? Would you please check whether there is
a drawback in `FIRST`, and whether the query runs without `FIRST`?
Best,
JING ZHANG
Yuval Itzchakov wrote on Tue, Jul 27, 2021 at 12:29 AM:
>
Dear Flink experts,
We recently ran into an issue during a job cancellation after upgrading to 1.13.
After we issue a cancel (from Flink console or flink cancel {jobid}), a few
subtasks stuck in cancelling state. Once it gets to that situation, the
behavior is consistent. Those “cancelling tasks
We have been using the state backend latency tracking metrics from Flink
1.13. To make metrics aggregation easier, could there be a config to expose
something like `state.backend.rocksdb.metrics.column-family-as-variable`,
which RocksDB provides to do aggregation across column families?
In this case
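(For reference, the existing RocksDB option mentioned above is enabled in flink-conf.yaml as shown below; the request here is for a latency-tracking analogue of it. The second key is one example of a native metric it would apply to:)

```yaml
# Expose the column family as a metric variable instead of
# embedding it in the metric name, enabling cross-CF aggregation.
state.backend.rocksdb.metrics.column-family-as-variable: true
# Example native RocksDB metric to expose alongside it:
state.backend.rocksdb.metrics.estimate-num-keys: true
```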
Hi,
*Setup:*
1 JM,
1 TM,
Flink 1.13.1
RocksDBStateBackend.
I have a query with the rough sketch of the following:
SELECT CAST(TUMBLE_START(event_time, INTERVAL '2' MINUTE) AS TIMESTAMP) START_TIME,
       CAST(TUMBLE_END(event_time, INTERVAL '2' MINUTE) AS TIMESTAMP) END_TIME
Hi,
It is recommended to package your application with all the
dependencies into a single file [1].
And according to the kafka-connector documentation [2], if you are using the
Kafka source, flink-connector-base is also required as a dependency:
org.apache.flink
flink-connector-base
VERSI
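(Packaging the application with all its dependencies into a single "fat" jar, as recommended in [1], is typically done with the maven-shade-plugin; a minimal sketch:)

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```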
Hello,
Could you check that the TMs didn't fail (and therefore unregister their KV
states) and are still running at the time of the query?
Probably, after changing the memory settings, there is another error
that is reported after the state is unregistered.
Regards,
Roman
On Sat, Jul 24, 2021 at 12:50
Hello,
If you are consuming from a single stream, you can use the shard ID to
achieve a better distribution. Since the shard IDs are assigned
incrementally like so:
- shardId-
- shardId-0001
- shardId-0002
- etc
You can substring the prefix and convert
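(The substring-and-convert step described above can be sketched as follows, assuming the abbreviated shard-ID format from the list; class and method names are illustrative:)

```java
// Sketch: derive a numeric index from an abbreviated Kinesis shard ID
// by stripping the "shardId-" prefix and parsing the remaining digits.
public class ShardIndex {
    static int indexOf(String shardId) {
        return Integer.parseInt(shardId.substring("shardId-".length()));
    }
}
```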