I believe what you are looking for is the State TTL [1][2].
Thank you~
Xintong Song
[1]
https://ci.apache.org/projects/flink/flink-docs-stable/dev/stream/state/state.html#state-time-to-live-ttl
[2]
https://ci.apache.org/projects/flink/flink-docs-stable/dev/table/config.html#table-exec-state-ttl
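For reference, a minimal sketch of enabling State TTL on a ValueState, following the documentation in [1] (the state name "my-state" and the one-day TTL are just example values):

import org.apache.flink.api.common.state.StateTtlConfig;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.common.time.Time;

StateTtlConfig ttlConfig = StateTtlConfig
    .newBuilder(Time.days(1))  // expire entries one day after the last write
    .setUpdateType(StateTtlConfig.UpdateType.OnCreateAndWrite)
    .setStateVisibility(StateTtlConfig.StateVisibility.NeverReturnExpired)
    .build();

ValueStateDescriptor<String> descriptor =
    new ValueStateDescriptor<>("my-state", String.class);
descriptor.enableTimeToLive(ttlConfig);

For SQL/Table API jobs, [2] describes the table.exec.state.ttl option instead.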
FYI, this was an EFS issue. I originally dismissed EFS as the cause because
the Percent I/O Limit metric was very low, but I later noticed that the
throughput utilization was very high. We increased the provisioned throughput
and the checkpoint times are greatly reduced.
From: Colletta, Edward
Thanks Xintong.
I'll check it out and get back to you.
On Thu, Dec 24, 2020 at 1:30 PM Xintong Song wrote:
> I believe what you are looking for is the State TTL [1][2].
>
>
> Thank you~
>
> Xintong Song
>
>
> [1]
> https://ci.apache.org/projects/flink/flink-docs-stable/dev/stream/state/state.html#state-time-to-live-ttl
Hi,
I have a UDF which returns a type of MAP<STRING, RAW(io.circe.Json, ?)>. When I
try to register this type with Flink via the CREATE TABLE DDL, I encounter an
exception:
- SQL parse failed. Encountered "(" at line 2, column 256.
Was expecting one of:
"NOT" ...
"NULL" ...
">" ...
"MULTISET" ...
"ARRAY"
To expand on my question: what I really want is for the UDF to return the
`RAW(io.circe.Json, ?)` type, but I have to convert between Table and
DataStream, and TypeConversions.fromDataTypeToLegacyInfo cannot convert a
plain RAW type back to TypeInformation.
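A possible workaround (a sketch only, not verified against this exact setup; JsonHolder below is a hypothetical wrapper class standing in for the circe value): declare the result type with DataTypes.RAW(TypeInformation), which produces a TypeInformation-backed raw type that the legacy conversion utilities can round-trip:

import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.types.DataType;

// A RAW type carrying an explicit TypeInformation, so the legacy
// Table <-> DataStream conversion can recover a TypeInformation from it.
DataType rawJson = DataTypes.RAW(TypeInformation.of(JsonHolder.class));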
On Thu, Dec 24, 2020 at 12:59 PM
I am new to Flink and am trying to process a file and write it out
formatted as JSON.
This is a much simplified version.
public class AndroidReader {
    public static void main(String[] args) throws Exception {
        final StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();
        // ... read the input file, convert records to JSON, and write them out ...
    }
}
This is Flink 1.12. The Kafka consumer side can read the data, but the code below reads nothing: it runs without errors and produces no output. Any help appreciated, thanks.
import org.apache.flink.streaming.api.scala._
import org.apache.flink.table.api.{EnvironmentSettings, Table}
import org.apache.flink.table.api.bridge.scala.StreamTableEnvironment
import org.apache.flink.types.Row
import org.apache.flin
Hi Team,
I recently encountered a use case in my project, described below:
My data source is HBase. We receive a huge volume of data at very high speed
into HBase tables from the source system. I need to read from HBase, perform
computation, and insert the results into PostgreSQL.
I would like a few inputs on the points below.
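To make the target concrete, here is a rough sketch of the PostgreSQL write side using the flink-connector-jdbc JdbcSink (the table, columns, Event stream, and connection settings are all placeholder assumptions, not part of the original question):

import org.apache.flink.connector.jdbc.JdbcConnectionOptions;
import org.apache.flink.connector.jdbc.JdbcSink;

// events: a DataStream<Event> produced by the computation step (hypothetical)
events.addSink(JdbcSink.sink(
    "INSERT INTO results (id, score) VALUES (?, ?)",  // placeholder table/columns
    (statement, event) -> {
        statement.setLong(1, event.id);
        statement.setDouble(2, event.score);
    },
    new JdbcConnectionOptions.JdbcConnectionOptionsBuilder()
        .withUrl("jdbc:postgresql://localhost:5432/mydb")  // placeholder URL
        .withDriverName("org.postgresql.Driver")
        .withUsername("user")
        .withPassword("password")
        .build()));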
I can't seem to find the org.apache.flink.connector.file.sink.FileSink
class.
I can find the StreamingFileSink, but not FileSink referenced here:
https://ci.apache.org/projects/flink/flink-docs-release-1.12/dev/connectors/file_sink.html
Am I missing a dependency?
compile group: 'org.apache.f
Of course I found it shortly after submitting my query.
compile group: 'org.apache.flink', name: 'flink-connector-files', version:
'1.12.0'
On 2020/12/24 15:57:20, Billy Bain wrote:
> I can't seem to find the org.apache.flink.connector.file.sink.FileSink
> class.
>
> I can find the Streaming
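For anyone finding this thread later, a minimal usage sketch once that dependency is on the classpath (the output path is just an example):

import org.apache.flink.api.common.serialization.SimpleStringEncoder;
import org.apache.flink.connector.file.sink.FileSink;
import org.apache.flink.core.fs.Path;

FileSink<String> sink = FileSink
    .forRowFormat(new Path("/tmp/output"), new SimpleStringEncoder<String>("UTF-8"))
    .build();
// then attach it to a stream: stream.sinkTo(sink);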
Thanks for reporting this! This is not the expected behaviour; I created
a Jira issue: https://issues.apache.org/jira/browse/FLINK-20764.
Best,
Aljoscha
On 23.12.20 22:26, David Anderson wrote:
I did a little experiment, and I was able to reproduce this if I use the
sum aggregator on a KeyedStream
Hi Billy,
StreamingFileSink does not expect the Encoder to close the stream passed into
its encode method. However, ObjectMapper closes the stream at the end of its
write method. Thus I think you could disable the close action for the
ObjectMapper, or change the encode implementation accordingly.
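A sketch of disabling the auto-close inside an Encoder (untested; the generic element type and the one-document-per-line framing are my assumptions):

import com.fasterxml.jackson.core.JsonGenerator;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.flink.api.common.serialization.Encoder;
import java.io.IOException;
import java.io.OutputStream;

public class JsonLineEncoder<T> implements Encoder<T> {
    private transient ObjectMapper mapper;

    @Override
    public void encode(T element, OutputStream stream) throws IOException {
        if (mapper == null) {
            mapper = new ObjectMapper()
                // stop Jackson from closing the sink's stream after each write
                .configure(JsonGenerator.Feature.AUTO_CLOSE_TARGET, false);
        }
        mapper.writeValue(stream, element);
        stream.write('\n'); // one JSON document per line
    }
}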
Hi all,
I tested the previous PoC against the current tests and found some new issues
that might cause divergence; apologies, there might also be some reversals of
earlier conclusions:
1. Which operators should wait for one more checkpoint before closing?
One motiv