Hi, you can write a custom log appender that modifies the logs before they
are sent.
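For example, with Log4j 2 (Flink's default logging backend since 1.11) you can plug a RewritePolicy into the built-in Rewrite appender instead of writing a full appender yourself. A minimal sketch, with the masking rule and class name purely illustrative:

import org.apache.logging.log4j.core.Core;
import org.apache.logging.log4j.core.LogEvent;
import org.apache.logging.log4j.core.appender.rewrite.RewritePolicy;
import org.apache.logging.log4j.core.config.plugins.Plugin;
import org.apache.logging.log4j.core.config.plugins.PluginFactory;
import org.apache.logging.log4j.core.impl.Log4jLogEvent;
import org.apache.logging.log4j.message.SimpleMessage;

// Rewrites every log event before the wrapped appender ships it.
@Plugin(name = "MaskingPolicy", category = Core.CATEGORY_NAME, elementType = "rewritePolicy", printObject = true)
public final class MaskingPolicy implements RewritePolicy {

    @PluginFactory
    public static MaskingPolicy createPolicy() {
        return new MaskingPolicy();
    }

    @Override
    public LogEvent rewrite(LogEvent source) {
        // Example modification: mask anything that looks like a password.
        String masked = source.getMessage().getFormattedMessage()
                .replaceAll("password=\\S+", "password=***");
        return new Log4jLogEvent.Builder(source)
                .setMessage(new SimpleMessage(masked))
                .build();
    }
}

In the Log4j configuration you would then wrap your real appender in a Rewrite appender that references this policy.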
Thanks.
Diana El-Masri wrote on Fri, Nov 6, 2020 at 7:47 AM:
> Hi,
>
> No, the logs of the sources connected to Flink.
>
> Thanks
>
> Chesnay Schepler wrote:
>
> > Are you referring to the log files of Flink?
> >
> > On 1
Kaibo Zhou created FLINK-18875:
--
Summary: DESCRIBE table can return the table properties
Key: FLINK-18875
URL: https://issues.apache.org/jira/browse/FLINK-18875
Project: Flink
Issue Type
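For context, DESCRIBE is already available through SQL; a minimal sketch of how it is issued today (assuming Flink 1.11+, where TableEnvironment#executeSql and TableResult#print exist; the table name and connector are made up):

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class DescribeExample {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(EnvironmentSettings.newInstance().build());
        tEnv.executeSql("CREATE TABLE my_table (id INT) WITH ('connector' = 'datagen')");
        // Today this prints the schema; the request is to also expose the WITH (...) properties.
        tEnv.executeSql("DESCRIBE my_table").print();
    }
}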
d includes the
detailed position if an error occurs.
For the `insert target table`, the platform wants to validate that the table
exists and that the field names and field types match.
Best,
Kaibo
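One way to do that kind of check with the current Table API is to have the planner explain the statement, which parses and validates it against the catalog without running a job. A sketch assuming Flink 1.11+ (table and field names made up), with the caveat that this still needs the connector factory on the classpath, which is exactly the dependency FLINK-15419 below wants to drop:

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.api.ValidationException;

public class ValidateInsertTarget {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(EnvironmentSettings.newInstance().build());
        tEnv.executeSql("CREATE TABLE sink_t (id INT, name STRING) WITH ('connector' = 'blackhole')");
        try {
            // Planning the INSERT verifies that the target table exists and that
            // the selected fields match its names and types, without executing anything.
            tEnv.explainSql("INSERT INTO sink_t SELECT 1, 'a'");
        } catch (ValidationException e) {
            // The exception message carries the details of what failed to validate.
            System.err.println(e.getMessage());
        }
    }
}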
Danny Chan wrote on Mon, Dec 30, 2019 at 5:37 PM:
> Hi, Kaibo Zhou ~
>
> There are several phases that a SQL text goes through to execut
Hi,
As a platform user, I want to integrate Flink SQL with the platform. The usage
scenario is: users register tables/UDFs with the catalog service and then write SQL
scripts like "insert into xxx select from xxx" through the Web SQLEditor; the
platform needs to validate the SQL script each time the use
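A rough sketch of that flow against the current Table API (assuming Flink 1.11+; the connectors, the UDF, and the query are placeholders for whatever the platform registers):

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.functions.ScalarFunction;

public class PlatformSqlValidation {

    // Stand-in for a UDF that a user registers through the platform.
    public static class MyUpper extends ScalarFunction {
        public String eval(String s) {
            return s == null ? null : s.toUpperCase();
        }
    }

    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(EnvironmentSettings.newInstance().build());
        // Step 1: register tables and UDFs (on the platform this metadata lives in the catalog service).
        tEnv.executeSql("CREATE TABLE src (word STRING) WITH ('connector' = 'datagen')");
        tEnv.executeSql("CREATE TABLE dst (word STRING) WITH ('connector' = 'blackhole')");
        tEnv.createTemporarySystemFunction("my_upper", MyUpper.class);
        // Step 2: validate the user's script on every edit, without launching a job.
        tEnv.explainSql("INSERT INTO dst SELECT my_upper(word) FROM src");
    }
}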
Kaibo Zhou created FLINK-15419:
--
Summary: Validate SQL syntax not need to depend on connector jar
Key: FLINK-15419
URL: https://issues.apache.org/jira/browse/FLINK-15419
Project: Flink
Issue
Kaibo Zhou created FLINK-13787:
--
Summary: PrometheusPushGatewayReporter does not cleanup TM metrics
when run on kubernetes
Key: FLINK-13787
URL: https://issues.apache.org/jira/browse/FLINK-13787
Project
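For reference, the push gateway reporter is configured in flink-conf.yaml roughly as follows (host, port, and job name are placeholders); deleteOnShutdown is the option that is supposed to remove the metrics again, which is what this ticket reports as not happening for TaskManagers on Kubernetes:

metrics.reporter.promgateway.class: org.apache.flink.metrics.prometheus.PrometheusPushGatewayReporter
metrics.reporter.promgateway.host: pushgateway.example.com
metrics.reporter.promgateway.port: 9091
metrics.reporter.promgateway.jobName: my-flink-job
metrics.reporter.promgateway.randomJobNameSuffix: true
metrics.reporter.promgateway.deleteOnShutdown: true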
Thanks for bringing this up. Obviously, options 2 and 3 are both useful for
Flink users on Kubernetes. But option 3 is easier for users who are not familiar
with many Kubernetes concepts; they can start Flink on Kubernetes quickly, so I
think it should have a higher priority.
I have worked for some time to integrate
+1 for the FLIP!
More and more Flink users from China have recently requested JIRA
permissions.
The proposed translation specification will make it easier for them to
participate, improve the quality of the translations, and ensure consistency
of style.
Best
Kaibo
Jark Wu wrote on Sun, Feb 17, 2019 at
Time zone support is a very useful feature. I think there are three levels of
time zone settings (priority from high to low):
1. Connector level: for example, the time zone of the time field in the Kafka
data.
2. Job level: specifies which time zone the current job uses, perhaps
specified by TableConfig or Stre
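For the job level, recent Table API versions already expose a session time zone on TableConfig; a minimal sketch (the zone is just an example):

import java.time.ZoneId;
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class SessionTimeZone {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(EnvironmentSettings.newInstance().build());
        // Job-level setting: the planner uses this session time zone for
        // time-zone-aware conversions and time functions.
        tEnv.getConfig().setLocalTimeZone(ZoneId.of("Asia/Shanghai"));
    }
}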
Kaibo Zhou created FLINK-7209:
-
Summary: Support DataView in Java and Scala Tuples and case
classes or as the accumulator of AggregateFunction itself
Key: FLINK-7209
URL: https://issues.apache.org/jira/browse/FLINK
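For reference, the DataView API that this series introduced is used from a UDAGG roughly like this; a sketch with made-up names, where the accumulator is a POJO holding a MapView (backed by state rather than the heap), which is the pattern this ticket proposes to extend to Tuples/case classes and to the accumulator itself:

import org.apache.flink.table.api.dataview.MapView;
import org.apache.flink.table.functions.AggregateFunction;

public class CountDistinctWords extends AggregateFunction<Long, CountDistinctWords.Acc> {

    // POJO accumulator; the MapView field is mapped to keyed state by the runtime.
    public static class Acc {
        public MapView<String, Boolean> seen = new MapView<>();
        public long count = 0L;
    }

    @Override
    public Acc createAccumulator() {
        return new Acc();
    }

    public void accumulate(Acc acc, String word) throws Exception {
        if (word != null && !acc.seen.contains(word)) {
            acc.seen.put(word, true);
            acc.count++;
        }
    }

    @Override
    public Long getValue(Acc acc) {
        return acc.count;
    }
}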
Kaibo Zhou created FLINK-7208:
-
Summary: Refactor build-in agg(MaxWithRetractAccumulator and
MinWithRetractAccumulator) using the DataView
Key: FLINK-7208
URL: https://issues.apache.org/jira/browse/FLINK-7208
Kaibo Zhou created FLINK-7207:
-
Summary: Support getAccumulatorType when use DataView
Key: FLINK-7207
URL: https://issues.apache.org/jira/browse/FLINK-7207
Project: Flink
Issue Type: Improvement
Kaibo Zhou created FLINK-7206:
-
Summary: Implementation of DataView to support state access for
UDAGG
Key: FLINK-7206
URL: https://issues.apache.org/jira/browse/FLINK-7206
Project: Flink
Issue
Kaibo Zhou created FLINK-6955:
-
Summary: Add operation log for Table
Key: FLINK-6955
URL: https://issues.apache.org/jira/browse/FLINK-6955
Project: Flink
Issue Type: Improvement
Kaibo Zhou created FLINK-6544:
-
Summary: Expose Backend State Interface for UDAGG
Key: FLINK-6544
URL: https://issues.apache.org/jira/browse/FLINK-6544
Project: Flink
Issue Type: Improvement
Kaibo Zhou created FLINK-6355:
-
Summary: TableEnvironment support register TableFunction
Key: FLINK-6355
URL: https://issues.apache.org/jira/browse/FLINK-6355
Project: Flink
Issue Type
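For reference, in current versions registering a TableFunction with the TableEnvironment looks roughly like this (the split function and the names are illustrative):

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.functions.TableFunction;

public class RegisterTableFunction {

    // Emits one row per whitespace-separated token of the input string.
    public static class Split extends TableFunction<String> {
        public void eval(String line) {
            for (String word : line.split("\\s+")) {
                collect(word);
            }
        }
    }

    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(EnvironmentSettings.newInstance().build());
        tEnv.createTemporarySystemFunction("split", Split.class);
        // The function can now be used in SQL, e.g.
        //   SELECT word FROM src, LATERAL TABLE(split(line)) AS T(word)
    }
}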