… at 2:24 PM Till Rohrmann wrote:
> I am not 100% sure, but maybe (_, _) => {} captures a reference to object
> TestSink, which is not serializable, as part of its closure. Maybe try to
> simply define a no-op JdbcStatementBuilder and pass such an instance to
> JdbcSink.sink().
>
> Cheers,
> Till
>
> On Tue, Feb 16, 2021 at 8:58 PM Clay Teeter wrote:
>
>> Thanks Till, the tickets and links were immensely useful. With that I
>> was able to make progress and even get things to compile. However, when […]
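
[Editorial sketch] As a rough illustration of Till's suggestion, a no-op JdbcStatementBuilder defined as its own class (rather than a lambda inside the test object) might look like the sketch below, using Flink 1.12's DataStream JDBC connector. The SQL, connection settings, and the String element type are invented placeholders.

import java.sql.PreparedStatement

import org.apache.flink.connector.jdbc.{JdbcConnectionOptions, JdbcSink, JdbcStatementBuilder}

// A standalone, serializable statement builder. Defining it as its own class
// (instead of a lambda inside object TestSink) keeps the enclosing object out
// of the serialized closure.
class NoOpStatementBuilder extends JdbcStatementBuilder[String] {
  override def accept(stmt: PreparedStatement, record: String): Unit = {
    // intentionally a no-op: no statement parameters are bound
  }
}

// Hypothetical SQL and connection settings, purely for illustration.
val sink = JdbcSink.sink[String](
  "INSERT INTO sink_table (payload) VALUES ('ignored')",
  new NoOpStatementBuilder,
  new JdbcConnectionOptions.JdbcConnectionOptionsBuilder()
    .withUrl("jdbc:postgresql://localhost:5432/testdb")
    .withDriverName("org.postgresql.Driver")
    .build()
)

The point is only that the builder is a standalone serializable class; whatever actually binds statement parameters would go inside accept().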
> … to use
> Flink's DDL [3]. Unfortunately, I couldn't find an easy example on how to
> use the DDL. Maybe Timo or Jark can point you towards a good guide on how
> to register your jdbc table sink.
>
> [1] https://issues.apache.org/jira/browse/FLINK-17748
> [2] https://issues […]
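
[Editorial sketch] For anyone following along, a DDL-based registration of a JDBC sink on a StreamTableEnvironment (Flink 1.11+) could look roughly like the following; the table name, schema, and connection settings are placeholders, not taken from the thread.

import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment
import org.apache.flink.table.api.bridge.scala.StreamTableEnvironment

val env = StreamExecutionEnvironment.getExecutionEnvironment
val tableEnv = StreamTableEnvironment.create(env)

// Register a JDBC sink table via DDL; all names and credentials are invented.
tableEnv.executeSql(
  """CREATE TABLE jdbc_sink (
    |  id   BIGINT,
    |  name STRING,
    |  PRIMARY KEY (id) NOT ENFORCED
    |) WITH (
    |  'connector'  = 'jdbc',
    |  'url'        = 'jdbc:postgresql://localhost:5432/testdb',
    |  'table-name' = 'my_table',
    |  'username'   = 'flink',
    |  'password'   = 'secret'
    |)""".stripMargin)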
Hey all. Hopefully this is an easy question. I'm porting my JDBC Postgres
sink from Flink 1.10 to 1.12.
I'm using:
* StreamTableEnvironment
* JdbcUpsertTableSink
What I'm having difficulty with is how to register the sink with the
streaming table environment.
In 1.10:
tableEnv.registerTableSink(…
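
[Editorial sketch] One 1.12-style path (DDL registration plus executeInsert) might look roughly like this, assuming the jdbc_sink table from the DDL sketch above; env and tableEnv are as created there, and the record type and its fields are invented stand-ins.

import org.apache.flink.streaming.api.scala._

// Invented stand-in for the real record type.
case class Measurement(id: Long, name: String)

val records: DataStream[Measurement] = env.fromElements(Measurement(1L, "a"))

// Convert the stream to a Table and insert it into the DDL-registered JDBC table.
tableEnv.fromDataStream(records).executeInsert("jdbc_sink")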
> … will be added to this value for -Xmx.
>
> Thank you~
>
> Xintong Song
>
> [1] https://ci.apache.org/projects/flink/flink-docs-release-1.10/ops/memory/mem_detail.html#jvm-parameters
>
> On Fri, Jun 12, 2020 at 4:50 PM Clay Teeter wrote:
>
>> Thank […]
> … Flink task manager so that memory will be managed accordingly.
>
> The Flink task manager expects all the memory configurations to already be
> set (thus network min/max should have the same value) before it is started.
> In your case, it seems such configurations are missing. Same for the […]
Hi Flink fans,
I'm hoping for an easy solution. I'm trying to upgrade my Flink 1.9.3 cluster
to Flink 1.10.1, but I'm running into memory configuration errors, such as:

Caused by: org.apache.flink.configuration.IllegalConfigurationException:
The network memory min (64 mb) and max (1 gb) mismatch, the net […]
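
[Editorial sketch] A hedged flink-conf.yaml sketch of the kind of explicit memory settings Flink 1.10 expects; the values below are placeholders, not recommendations.

# Either size the whole task manager process...
taskmanager.memory.process.size: 2048m
# ...or pin the network memory explicitly so that min and max agree
taskmanager.memory.network.min: 64mb
taskmanager.memory.network.max: 64mb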
Hey, does anyone have any examples that I can use to create a
LookupableTableSource from a Kafka topic?
Thanks!
Clay
I looked into the disk issues and found that Fabian was on the right path.
The checkpoints that were lingering were in fact in use.
Thanks for the help!
Clay
On Thu, Sep 26, 2019 at 8:09 PM Clay Teeter wrote:
> I see, I'll try turning off incremental checkpoints to see if that helps
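
[Editorial sketch] For context, one hedged way to turn incremental checkpointing off with the RocksDB backend; the checkpoint path below is a placeholder.

import org.apache.flink.contrib.streaming.state.RocksDBStateBackend
import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment

val env = StreamExecutionEnvironment.getExecutionEnvironment
// Second constructor argument is enableIncrementalCheckpointing; false disables it.
// Equivalently, state.backend.incremental: false in flink-conf.yaml.
env.setStateBackend(new RocksDBStateBackend("file:///opt/flink/checkpoints", false))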
What is the best way to run unit tests on streams that contain
processing-time windows?
Example:

def bufferDataStreamByProcessId(ds: DataStream[MaalkaRecord]): DataStream[MaalkaRecord] = {
  ds.map { r =>
    println(s"data in: $r") // Data shows up here
    r
  }.keyBy { mr =>
    val r = mr.asInstan […]
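
[Editorial sketch] One common way to make processing-time behaviour deterministic in unit tests (not necessarily the right fit for this exact pipeline) is Flink's operator test harness from the flink-streaming-java test-jar, where the test advances processing time explicitly via setProcessingTime. The sketch below uses an invented timer-based function rather than the MaalkaRecord window pipeline, but the same mechanism is what fires processing-time windows in harness tests.

import org.apache.flink.api.common.typeinfo.Types
import org.apache.flink.streaming.api.functions.KeyedProcessFunction
import org.apache.flink.streaming.api.operators.KeyedProcessOperator
import org.apache.flink.streaming.runtime.streamrecord.StreamRecord
import org.apache.flink.streaming.util.KeyedOneInputStreamOperatorTestHarness
import org.apache.flink.util.Collector

// Made-up example function: registers a processing-time timer 5s after each element.
class EmitOnTimer extends KeyedProcessFunction[String, String, String] {
  override def processElement(
      value: String,
      ctx: KeyedProcessFunction[String, String, String]#Context,
      out: Collector[String]): Unit = {
    ctx.timerService().registerProcessingTimeTimer(
      ctx.timerService().currentProcessingTime() + 5000L)
  }

  override def onTimer(
      timestamp: Long,
      ctx: KeyedProcessFunction[String, String, String]#OnTimerContext,
      out: Collector[String]): Unit = {
    out.collect(s"timer fired at $timestamp")
  }
}

// The harness drives the operator without a cluster and lets the test
// control processing time explicitly.
val harness = new KeyedOneInputStreamOperatorTestHarness[String, String, String](
  new KeyedProcessOperator(new EmitOnTimer),
  (in: String) => in,
  Types.STRING)

harness.open()
harness.setProcessingTime(0L)
harness.processElement(new StreamRecord("record-a", 0L))
harness.setProcessingTime(10000L) // advance time past the timer so it fires
println(harness.getOutput)        // should now contain the timer output
harness.close()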
> … them once they are not needed anymore.
>
> Are you sure that the size of your application's state is not growing too
> large?
>
> Best, Fabian
>
> On Tue, 24 Sept 2019 at 10:47, Clay Teeter <clay.tee...@maalka.com> wrote:
>
>> Oh geez, check […]
> … the HA storage and checkpoint directory left after shutting down the
> cluster?
>
> Thanks,
> Biao /'bɪ.aʊ/
>
> On Tue, 24 Sep 2019 at 03:12, Clay Teeter wrote:
>
>> I'm trying to get my standalone cluster to remove stale checkpoints.
I'm trying to get my standalone cluster to remove stale checkpoints.
The cluster is composed of a single job manager and task manager backed by
RocksDB with high availability.
The configuration on both the job and task manager is:

state.backend: rocksdb
state.checkpoints.dir: file:///opt/ha/49/checkpoin…
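
[Editorial sketch] A hedged flink-conf.yaml sketch of the retention setting that governs how many completed checkpoints are kept; the value is a placeholder. Note that, as found elsewhere in this thread, lingering checkpoint directories can still be in use, e.g. by incremental RocksDB checkpoints.

# Keep only this many completed checkpoints; older ones are removed,
# unless they are still referenced (e.g. by incremental checkpoints).
state.checkpoints.num-retained: 1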