Hi Flavio,
You can use `JDBCTableSource` and register it via
`TableEnvironment.registerTableSource`. You need to provide
an OracleDialect; implementing just `canHandle` and
`defaultDriverName` may be enough.
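For illustration, a minimal sketch of such a dialect (this assumes the `JDBCDialect` interface from the flink-jdbc module; the class name and URL prefix are just examples):

```java
import java.util.Optional;
import org.apache.flink.api.java.io.jdbc.dialect.JDBCDialect;

// Minimal sketch of an Oracle dialect: claim Oracle JDBC URLs and
// report the Oracle driver class. Everything else falls back to the
// interface's default methods.
public class OracleDialect implements JDBCDialect {

    @Override
    public boolean canHandle(String url) {
        return url.startsWith("jdbc:oracle:");
    }

    @Override
    public Optional<String> defaultDriverName() {
        return Optional.of("oracle.jdbc.OracleDriver");
    }
}
```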
Best,
Jingsong Lee
On Sun, Feb 2, 2020 at 2:42 PM Jark Wu wrote:
Hi Flavio,
If you want to adjust the write statements for Oracle, you can implement
a JDBCDialect for Oracle and pass it to the JDBCUpsertTableSink at
construction time via `JDBCOptions.Builder#setDialect`. That way, you don't
need to recompile the source code of flink-jdbc.
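A rough sketch of that wiring (builder method names are as I recall them from flink-jdbc 1.10, so treat them as assumptions; the URL, table name, and schema are placeholders):

```java
import org.apache.flink.api.java.io.jdbc.JDBCOptions;
import org.apache.flink.api.java.io.jdbc.JDBCUpsertTableSink;
import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.api.TableSchema;

// Placeholder schema describing the sink table's columns.
TableSchema schema = TableSchema.builder()
        .field("id", DataTypes.INT())
        .field("name", DataTypes.STRING())
        .build();

// Pass the custom dialect when building the options, so flink-jdbc
// itself stays untouched.
JDBCOptions options = JDBCOptions.builder()
        .setDBUrl("jdbc:oracle:thin:@//localhost:1521/ORCL")
        .setTableName("my_table")
        .setDialect(new OracleDialect())
        .build();

JDBCUpsertTableSink sink = JDBCUpsertTableSink.builder()
        .setOptions(options)
        .setTableSchema(schema)
        .build();
```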
Best,
Jark
On Fri,
Hi Kant,
Benchao explained the reason in depth, thanks Benchao!
In a word, the results are as expected. That's because all the streaming
operators work in a per-record manner and are eventually consistent.
That means users may see some transient intermediate values. The outer
join will generate addit
Hi kant,
Thanks for reporting the issue. I'd like to share some thoughts here after
digging into the source code[1] of the blink planner; the logic is the same
as in the legacy planner[2].
The main logic of FULL OUTER JOIN is:
if input record is accumulate
| if input side is outer
| | if there is no matched row
Wondering if anyone has had a chance to look through this, or should I
create the JIRA?
On Wed, Jan 29, 2020 at 6:49 AM Till Rohrmann wrote:
> Hi Kant,
>
> I am not an expert on Flink's SQL implementation. Hence, I'm pulling in
> Timo and Jark who might help you with your question.
>
> Cheers,
> Till
Hi Till & others,
We enabled setFailOnCheckpointingErrors
(setTolerableCheckpointFailureNumber isn't available in 1.8) and this
indeed prevents the large number of restarts.
Hopefully a solution for the reported issue[1] with Google gets found, but
for now this solved our immediate problem.
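For reference, the relevant knob looks roughly like this (a sketch; I'm assuming the flag ends up set to false here, so that checkpoint failures no longer fail and restart the job):

```java
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

StreamExecutionEnvironment env =
        StreamExecutionEnvironment.getExecutionEnvironment();

// Flink 1.8: do not fail the task on checkpoint errors (set to false on
// the assumption that this is the setting meant above); this avoids the
// restart storm caused by repeated checkpoint failures.
env.getCheckpointConfig().setFailOnCheckpointingErrors(false);

// On Flink 1.9+ this flag is replaced by a failure budget:
// env.getCheckpointConfig().setTolerableCheckpointFailureNumber(Integer.MAX_VALUE);
```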
Thanks,
Did you put each one inside a separate folder named after it? Like
/opt/flink/plugins/s3-fs-presto/flink-s3-fs-presto-1.9.1.jar ?
Check
https://ci.apache.org/projects/flink/flink-docs-stable/ops/filesystems/s3.html
On Sat, Feb 1, 2020, 07:42 Navneeth Krishnan
wrote:
> Hi Arvid,
>
> Thanks for the
Hi,
I think there are two possible workarounds for 'disableGenericType' in the
case of the Kafka source:
1. Adding the TypeInfo annotation [1] to KafkaTopicPartition (a sketch follows below).
2. Using reflection to call the private method. :)
Maybe we could add this TypeInfo annotation to the Kafka connector.
[1]
https://c
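To make option 1 concrete, here is a rough sketch of how the TypeInfo annotation is usually attached (the classes here are hypothetical stand-ins; annotating the real KafkaTopicPartition would mean patching the Kafka connector itself):

```java
import java.lang.reflect.Type;
import java.util.Map;
import org.apache.flink.api.common.typeinfo.TypeInfo;
import org.apache.flink.api.common.typeinfo.TypeInfoFactory;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.common.typeinfo.Types;

// Hypothetical stand-in for KafkaTopicPartition. The annotation points
// Flink at a factory so the class gets explicit type information instead
// of being treated as a generic (Kryo) type under 'disableGenericTypes'.
@TypeInfo(MyPartitionTypeInfoFactory.class)
public class MyPartition {
    public String topic;
    public int partition;
}

// Factory returning POJO type information for the annotated class.
class MyPartitionTypeInfoFactory extends TypeInfoFactory<MyPartition> {
    @Override
    public TypeInformation<MyPartition> createTypeInfo(
            Type t, Map<String, TypeInformation<?>> genericParameters) {
        return Types.POJO(MyPartition.class);
    }
}
```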