Hi,
scan.partition.num (the number of partitions [1]) translates into parallel
queries to the database, each covering a different from/to range. The batch
size is then derived from the lower and upper bounds and the number of
partitions.
scan.fetch-size hints the JDBC driver to adjust its fetch size (see [2]).
The fi
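For illustration, here is a rough sketch of how these options fit together in a
JDBC table definition (the table schema, URL, credentials and bounds are
made-up placeholders, not taken from this thread):

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class JdbcScanOptionsSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.newInstance().inBatchMode().build());

        // scan.partition.num splits the read into parallel range queries over
        // scan.partition.column; scan.fetch-size is passed to the driver as a hint.
        tEnv.executeSql(
                "CREATE TABLE orders (\n"
                        + "  id BIGINT,\n"
                        + "  amount DOUBLE\n"
                        + ") WITH (\n"
                        + "  'connector' = 'jdbc',\n"
                        + "  'url' = 'jdbc:postgresql://db-host:5432/shop',\n"
                        + "  'table-name' = 'orders',\n"
                        + "  'scan.partition.column' = 'id',\n"
                        + "  'scan.partition.num' = '4',\n"
                        + "  'scan.partition.lower-bound' = '0',\n"
                        + "  'scan.partition.upper-bound' = '1000000',\n"
                        + "  'scan.fetch-size' = '500'\n"
                        + ")");

        // With 4 partitions over [0, 1000000] each parallel query covers roughly
        // 250000 ids, and the driver fetches 500 rows per round-trip.
        tEnv.executeSql("SELECT * FROM orders").print();
    }
}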
Hi Jing,
Thank you for your suggestion. I will check whether SSL parameters in the URL work.
Thanks,
Qihua
On Sat, Oct 23, 2021 at 8:37 PM JING ZHANG wrote:
Hi Qihua,
I checked the user documentation of several database vendors (Postgres, Oracle,
solidDB, SQL Server) [1][2][3][4][5] and studied how to use a JDBC driver with
SSL to connect to these databases.
Most database vendors support two ways (sketched below):
1. Option 1: Use the connection URL
2. Option 2: Define the settings in Properties
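For example, with the PostgreSQL driver the two options would look roughly like
this (host, database and credentials are placeholders; the exact property names
differ per vendor, so check the driver docs):

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class JdbcSslSketch {
    public static void main(String[] args) throws Exception {
        // Option 1: SSL parameters embedded in the connection URL.
        Connection viaUrl = DriverManager.getConnection(
                "jdbc:postgresql://db-host:5432/mydb?ssl=true&sslmode=require",
                "user", "secret");
        viaUrl.close();

        // Option 2: the same parameters passed as driver Properties.
        Properties props = new Properties();
        props.setProperty("user", "user");
        props.setProperty("password", "secret");
        props.setProperty("ssl", "true");
        props.setProperty("sslmode", "require");
        Connection viaProps = DriverManager.getConnection(
                "jdbc:postgresql://db-host:5432/mydb", props);
        viaProps.close();
    }
}

If I remember correctly, the Flink JDBC connector DDL only exposes the URL
(plus username/password), so Option 1 is likely the relevant one there.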
Hi Polarisary:
Maybe I see what you mean. You want to use the upsert mode for an append
stream without keyFields.
In fact, both isAppend and keyFields are set automatically by the planner
framework. You can't control them.
So yes, it is related to the SQL; only an upsert stream can be inserted into the sink.
A typical use case that will generate updates (meaning not append-only) is
a non-windowed group-by aggregation, like "select user, count(url) from
clicks group by user".
You can refer to the flink doc at
https://ci.apache.org/projects/flink/flink-docs-release-1.9/dev/table/streaming/dynamic_tables.
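To make the distinction concrete, here is a small sketch against the 1.9-era
Table API (the "clicks" source and its data are invented for illustration):

import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.java.StreamTableEnvironment;
import org.apache.flink.types.Row;

public class AppendVsUpdateSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);

        // Invented click stream: (user, url).
        DataStream<Tuple2<String, String>> clicks = env.fromElements(
                Tuple2.of("Mary", "/home"), Tuple2.of("Bob", "/cart"), Tuple2.of("Mary", "/prod"));
        tEnv.registerDataStream("clicks", clicks, "user, url");

        // A plain projection is append-only: every input row yields exactly one result row.
        Table appendOnly = tEnv.sqlQuery("SELECT `user`, url FROM clicks");
        tEnv.toAppendStream(appendOnly, Row.class).print();

        // A non-windowed GROUP BY has to revise previously emitted counts, so the planner
        // classifies it as updating; toAppendStream would fail here, toRetractStream works.
        Table updating = tEnv.sqlQuery("SELECT `user`, COUNT(url) AS cnt FROM clicks GROUP BY `user`");
        tEnv.toRetractStream(updating, Row.class).print();

        env.execute("append-vs-update");
    }
}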
My SQL is a regular insert like “insert into sink_table select c1,c2,c3 from
source_table”.
I want to know in which cases it will be judged append-only. Is there
documentation for this?
Many thanks!
> On Nov 14, 2019, at 10:05 AM, 张万新 wrote:
Yes, it's related to your SQL; Flink checks the plan of your SQL to judge
whether your job is append-only or has updates. If your job is append-only,
that means no result needs to be updated.
If you still have problems, please post your SQL and the complete error message
to help people understand your use case.
Hi Fabian,
I opened the following issue to track the improvement proposed:
https://issues.apache.org/jira/browse/FLINK-12198
Best,
Konstantinos
From: Papadopoulos, Konstantinos
Sent: Monday, April 15, 2019 12:30 PM
To: Fabian Hueske
Cc: Rong Rong ; user
Subject: RE: Flink JDBC: Disable auto-commit mode
Hi Fabian,
Glad to hear that you agree with such an improvement. Of course, I can handle it.
Best,
Konstantinos
From: Fabian Hueske
Sent: Monday, April 15, 2019 11:56 AM
To: Papadopoulos, Konstantinos
Cc: Rong Rong ; user
Subject: Re: Flink JDBC: Disable auto-commit mode
Hi Konstantinos
tch) to achieve our purpose.
> Thanks,
>
> Konstantinos
>
> From: Rong Rong
> Sent: Friday, April 12, 2019 6:50 PM
> To: Papadopoulos, Konstantinos
> Cc: user
> Subject: Re: Flink JDBC: Disable auto-commit mode
>
Hi Konstantinos,
It seems that setting auto-commit is not directly possible in the current
JDBCInputFormatBuilder.
However, there's a way to specify the fetch size [1] for your DB round-trips;
doesn't that resolve your issue?
Similarly, in JDBCOutputFormat a batching mode was also used to stash up
records before writing them to the database in one batch.
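For reference, a rough sketch of both knobs against the old flink-jdbc builders
(driver, URL, credentials and queries are placeholders):

import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
import org.apache.flink.api.java.io.jdbc.JDBCInputFormat;
import org.apache.flink.api.java.io.jdbc.JDBCOutputFormat;
import org.apache.flink.api.java.typeutils.RowTypeInfo;

public class JdbcFetchAndBatchSketch {
    public static void main(String[] args) {
        // Reading: setFetchSize() is forwarded to the JDBC Statement, hinting the driver
        // how many rows to pull per round-trip instead of its default.
        JDBCInputFormat input = JDBCInputFormat.buildJDBCInputFormat()
                .setDrivername("org.postgresql.Driver")
                .setDBUrl("jdbc:postgresql://db-host:5432/mydb")
                .setUsername("user")
                .setPassword("secret")
                .setQuery("SELECT id, name FROM customers")
                .setRowTypeInfo(new RowTypeInfo(
                        BasicTypeInfo.LONG_TYPE_INFO, BasicTypeInfo.STRING_TYPE_INFO))
                .setFetchSize(1000)
                .finish();

        // Writing: rows are buffered and flushed as one JDBC batch every
        // setBatchInterval() records.
        JDBCOutputFormat output = JDBCOutputFormat.buildJDBCOutputFormat()
                .setDrivername("org.postgresql.Driver")
                .setDBUrl("jdbc:postgresql://db-host:5432/mydb")
                .setUsername("user")
                .setPassword("secret")
                .setQuery("INSERT INTO customers_copy (id, name) VALUES (?, ?)")
                .setBatchInterval(500)
                .finish();

        // The formats would then be plugged into env.createInput(input) for reading
        // and dataSet.output(output) for writing.
    }
}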
The common way then is to read in the CDC stream. Writing a generic operator
won't be easy.
On Wed, Jan 23, 2019 at 12:45 PM Manjusha Vuyyuru
wrote:
> But 'JDBCInputFormat' will exit once its done reading all data.I need
> something like which keeps polling to mysql and fetch if there are any
> updates or ch
I think this is very hard to build in a generic way.
The common approach here would be to get access to the changelog stream of
the table, write it to a message queue / event log (like Kafka, Pulsar,
Kinesis, ...), and ingest the changes from the event log into a Flink
application.
You can of
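A very rough sketch of the ingestion side, assuming a CDC tool such as Debezium
already publishes the table's changelog to a Kafka topic (topic name, servers
and record format are invented here):

import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

public class ChangelogIngestSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // A CDC tool (e.g. Debezium) is assumed to publish the MySQL table's changelog
        // to this topic; topic name and servers are placeholders.
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "kafka:9092");
        props.setProperty("group.id", "table-changelog-reader");

        DataStream<String> changes = env.addSource(
                new FlinkKafkaConsumer<>("mysql.mydb.mytable", new SimpleStringSchema(), props));

        // Each record is one insert/update/delete event; parse it and apply the
        // change downstream.
        changes.print();

        env.execute("ingest-table-changelog");
    }
}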
But 'JDBCInputFormat' will exit once it's done reading all the data. I need
something that keeps polling MySQL and fetches any updates or changes.
Thanks,
manju
On Wed, Jan 23, 2019 at 7:10 AM Zhenghua Gao wrote:
Actually, the flink-connectors/flink-jdbc module provides a JDBCInputFormat to
read data from a database.
You can give it a try.
Hi,
Yes, I have written a custom JDBC sink function based on the JdbcOutputFormat
for streaming, and it is working and writing records to a Postgres DB or an
in-memory H2 DB. However, I am trying to figure out how many times the open
method is called and establishes a database connection, because for my
integration tests
Hi,
I should also mention that the JdbcOutputFormat batches writes to the
database. Since it is not integrated with Flink's checkpointing mechanism,
data might get lost in case of a failure.
I would recommend implementing a JdbcSinkFunction based on the code of the
JdbcOutputFormat.
If you use
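One possible shape of such a sink function, as a minimal sketch only
(at-least-once; the target table, URL and credentials are made up): buffer rows
into a JDBC batch and flush it both when the buffer fills up and on every
checkpoint.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

import org.apache.flink.configuration.Configuration;
import org.apache.flink.runtime.state.FunctionInitializationContext;
import org.apache.flink.runtime.state.FunctionSnapshotContext;
import org.apache.flink.streaming.api.checkpoint.CheckpointedFunction;
import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;
import org.apache.flink.types.Row;

// Flushes the JDBC batch on every checkpoint, so rows buffered on the client side
// are in the database before a checkpoint completes (replays can still duplicate rows).
public class SimpleJdbcSinkFunction extends RichSinkFunction<Row> implements CheckpointedFunction {

    private transient Connection connection;
    private transient PreparedStatement statement;
    private transient int buffered;

    @Override
    public void open(Configuration parameters) throws Exception {
        // Placeholder URL, credentials and statement.
        connection = DriverManager.getConnection(
                "jdbc:postgresql://db-host:5432/mydb", "user", "secret");
        statement = connection.prepareStatement("INSERT INTO events (id, payload) VALUES (?, ?)");
    }

    @Override
    public void invoke(Row value, Context context) throws Exception {
        statement.setObject(1, value.getField(0));
        statement.setObject(2, value.getField(1));
        statement.addBatch();
        if (++buffered >= 500) {
            flush();
        }
    }

    @Override
    public void snapshotState(FunctionSnapshotContext context) throws Exception {
        flush();
    }

    @Override
    public void initializeState(FunctionInitializationContext context) {
        // No operator state needed for this at-least-once variant.
    }

    @Override
    public void close() throws Exception {
        flush();
        if (statement != null) {
            statement.close();
        }
        if (connection != null) {
            connection.close();
        }
    }

    private void flush() throws Exception {
        if (buffered > 0) {
            statement.executeBatch();
            buffered = 0;
        }
    }
}

Exactly-once would need more work (e.g. idempotent upserts or transactional
writes); this sketch only ensures a completed checkpoint does not leave rows
stuck in the client-side buffer.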
Thanks for the info. At the moment I use flink-jdbc to write the streaming
data coming from Kafka, which I can process and write to a Postgres or MySQL
database configured on a cluster or sandbox. However, when trying to write
integration tests I am using an in-memory H2 database, which s
The JdbcOutputFormat was originally meant for batch jobs.
It should be possible to use it for streaming jobs as well; however, you
should be aware that it is not integrated with Flink's checkpointing
mechanism.
So, you might have duplicate data in case of failures.
I also don't know if or how well i
Yes, I have been following the tutorials, and reading from H2 and writing
to H2 works fine. But the problem here is that data coming from Kafka and
written to the H2 engine does not seem to work, and I can't see any error
thrown while writing into the in-memory H2 database, so I couldn't say what
the error is and w
See the tutorial at the beginning of:
flink-connectors/flink-jdbc/src/main/java/org/apache/flink/api/java/io/jdbc/JDBCInputFormat.java
Looks like plugging in "org.h2.Driver" should do.
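Something along these lines (a sketch; the "books" table and its schema are
placeholders, and the test is assumed to have populated the H2 database
beforehand):

import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.io.jdbc.JDBCInputFormat;
import org.apache.flink.api.java.typeutils.RowTypeInfo;

public class H2ReadSketch {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // In-memory H2 database; DB_CLOSE_DELAY=-1 keeps it alive for the whole JVM,
        // so a fixture created earlier in the test is still visible when the job reads it.
        JDBCInputFormat input = JDBCInputFormat.buildJDBCInputFormat()
                .setDrivername("org.h2.Driver")
                .setDBUrl("jdbc:h2:mem:testdb;DB_CLOSE_DELAY=-1")
                .setQuery("SELECT id, name FROM books")
                .setRowTypeInfo(new RowTypeInfo(
                        BasicTypeInfo.INT_TYPE_INFO, BasicTypeInfo.STRING_TYPE_INFO))
                .finish();

        env.createInput(input).print();
    }
}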
On Wed, Feb 15, 2017 at 4:59 PM, Punit Tandel
wrote:
> Hi All
>
> Does flink jdbc support writing the data in
Thanks Chesnay for the update.
On Tue, Sep 13, 2016 at 12:13 AM, Chesnay Schepler
wrote:
Hello,
the JDBC Sink completely ignores the taskNumber and parallelism.
Regards,
Chesnay
On 12.09.2016 08:41, Swapnil Chougule wrote:
Hi Team,
I want to know how taskNumber & numTasks help in opening the DB connection
in the Flink JDBC JDBCOutputFormat open(). I checked the docs, where it says:
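For context, a minimal sketch of the open(taskNumber, numTasks) contract the
question refers to; the class and the H2 URL are made up, and as noted above
the built-in JDBC sink simply ignores both arguments:

import java.io.IOException;
import java.sql.Connection;
import java.sql.DriverManager;

import org.apache.flink.api.common.io.RichOutputFormat;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.types.Row;

// Flink calls open(taskNumber, numTasks) once per parallel subtask, so a sink with
// parallelism 4 opens 4 connections. The arguments only identify the subtask (0-based)
// and the total parallelism; nothing forces an implementation to use them, which is
// why the built-in JDBC sink can ignore them entirely.
public class LoggingJdbcOutputFormat extends RichOutputFormat<Row> {

    private transient Connection connection;

    @Override
    public void configure(Configuration parameters) {
    }

    @Override
    public void open(int taskNumber, int numTasks) throws IOException {
        try {
            // Placeholder target database; one connection per parallel instance.
            connection = DriverManager.getConnection("jdbc:h2:mem:testdb", "sa", "");
            System.out.printf("Opened connection for subtask %d of %d%n", taskNumber, numTasks);
        } catch (Exception e) {
            throw new IOException("Could not open JDBC connection", e);
        }
    }

    @Override
    public void writeRecord(Row record) throws IOException {
        // Insert logic omitted in this sketch.
    }

    @Override
    public void close() throws IOException {
        try {
            if (connection != null) {
                connection.close();
            }
        } catch (Exception e) {
            throw new IOException(e);
        }
    }
}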