Hi Chengcheng Zhang,
Is this your scenario? For example, every day is divided into 24 hourly
buckets; taking today as an example: 2020081600, 2020081601, ..., 2020081623.
If we want to count PV, for example, we can count it like this:
INSERT INTO cumulative_pv
SELECT time_str, count(1)
FROM pv_per_hour
GROUP BY time_str;
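A rough sketch of how that hourly bucket could be derived from an event
timestamp (the event_time column and the 'yyyyMMddHH' pattern below are my
assumptions, not something taken from your job):

-- group page views into hour buckets like 2020081600, 2020081601, ...
INSERT INTO cumulative_pv
SELECT DATE_FORMAT(event_time, 'yyyyMMddHH') AS time_str,
       COUNT(1) AS pv
FROM pv_per_hour
GROUP BY DATE_FORMAT(event_time, 'yyyyMMddHH');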
Hi,
I'm a new user of Flink, and I have been puzzled a lot by the results of
time-based window aggregations.
For our business, hourly and daily reports have to be created, ideally in a
real-time style. So I used an event-time based window aggregation to consume
the Kafka data stream, but found that, o
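The setup is roughly like the following sketch (the connector options, field
names, and the 5-second watermark delay are simplified placeholders, not the
exact job): an hourly event-time tumbling window over a Kafka source.

-- Kafka source table with an event-time attribute and watermark
CREATE TABLE page_views (
  user_id STRING,
  event_time TIMESTAMP(3),
  WATERMARK FOR event_time AS event_time - INTERVAL '5' SECOND
) WITH (
  'connector' = 'kafka',
  'topic' = 'page_views',
  'properties.bootstrap.servers' = 'localhost:9092',
  'properties.group.id' = 'pv-report',
  'scan.startup.mode' = 'latest-offset',
  'format' = 'json'
);

-- hourly PV, emitted once per event-time window
SELECT TUMBLE_START(event_time, INTERVAL '1' HOUR) AS window_start,
       COUNT(1) AS pv
FROM page_views
GROUP BY TUMBLE(event_time, INTERVAL '1' HOUR);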
Hi Aaron,
I'm not too sure about tracing and Flink. It's the first time I've heard of it
in this context, and I'm not immediately seeing its benefit.
What is imho more interesting, and a well-established discipline in the science
of data quality, is a concept called data lineage. [1]
I can go quit
Hi Benchao,
I added ['json.timestamp-format.standard' = 'ISO-8601'] to the table's DDL,
but it does not work; I get slightly different errors:
1. TIMESTAMP WITH LOCAL TIME ZONE
Exception in thread "main"
org.apache.flink.table.client.SqlClientException: Unexpected exception.
This is a bug. Please con
Hi
I tried to set execution.attached: false in flink-conf.yaml, but the YARN logs
show it as true, like this: "2020-08-15 09:40:13,489 INFO
org.apache.flink.configuration.GlobalConfiguration - Loading configuration
property: execution.attached, true". Can you tell me why? From the code, I see
the default value is false.
Hi Youngwoo,
What version of Flink and Json Format are you using?
From 1.11, we introduced `json.timestamp-format.standard` to declare the
timestamp format.
You can try the `timestamp with local time zone` data type with the `ISO-8601`
timestamp format.
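A minimal DDL sketch of that suggestion (the connector options and field names
are placeholders; only the column type and the format option matter here):

CREATE TABLE events (
  id STRING,
  -- parsed from ISO-8601 strings because of the format option below
  ts TIMESTAMP(3) WITH LOCAL TIME ZONE
) WITH (
  'connector' = 'kafka',
  'topic' = 'events',
  'properties.bootstrap.servers' = 'localhost:9092',
  'format' = 'json',
  'json.timestamp-format.standard' = 'ISO-8601'
);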
Youngwoo Kim (김영우) wrote on Sat, Aug 15, 2020 at 12:12 PM:
> Hi,
>