Hi Jacob.

Based on your latest error message, it seems you're using an outdated
parameter 'connector.type' = 'jdbc'. You should use 'connector' = 'jdbc'
instead.
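
For example, a minimal JDBC sink table definition with the new option would
look roughly like the sketch below (the columns, URL, table name, and
credentials are placeholders for your environment):

    CREATE TABLE orders_sink (
      id BIGINT,
      amount DECIMAL(10, 2),
      PRIMARY KEY (id) NOT ENFORCED
    ) WITH (
      'connector' = 'jdbc',
      'url' = 'jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1',
      'table-name' = 'ORDERS',
      'username' = '<user>',
      'password' = '<password>'
    );
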
Please refer to the following documentation[1].

[1] https://nightlies.apache.org/flink/flink-docs-release-1.17/docs/connectors/table/jdbc/


Best,
Feng


On Fri, Nov 8, 2024 at 3:16 PM Schwalbe Matthias <
matthias.schwa...@viseca.ch> wrote:

> Hi Jacob,
>
>
>
> It’s a little bit of guesswork …
>
>
>
> The disappearing records remind me a bit of a peculiarity of Oracle: each
> statement (e.g. an INSERT) runs in an implicit transaction and hence needs
> to be committed.
>
> In Flink, transaction commits happen together with the checkpoint cycle,
> i.e. checkpointing needs to be set up properly for your job.
>
> I work mostly with the streaming API rather than the Table API, but I guess
> the matter is just the same there.
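>
> For example, in the SQL client this could be set up with something along
> these lines (untested, and the interval value is just a placeholder):
>
>     SET 'execution.checkpointing.interval' = '10s';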
>
>
>
> For the database in use (instead of ‘default’), I think you can specify
> this in your JDBC connection string.
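>
> For Oracle, for instance, I believe the target service goes into the thin
> URL along these lines (host, port, and service name are placeholders):
>
>     'url' = 'jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1'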
>
>
>
> Hope that helps 😊
>
>
>
> Thias
>
>
>
>
>
>
>
> *From:* Jacob Rollings <jacobrolling...@gmail.com>
> *Sent:* Friday, November 8, 2024 7:39 AM
> *To:* user@flink.apache.org
> *Subject:* [External] Re: Flink table materialization
>
>
>
>
> After correcting the connector properties, I am now getting the error shown
> in the attached screenshot.
>
>
>
> I have also added the JARs to the classpath when launching the Flink SQL
> CLI.
>
>
>
> The full description of the use case is in the first email of this thread.
>
>
>
> On Thu, Nov 7, 2024, 11:38 PM Jacob Rollings <jacobrolling...@gmail.com>
> wrote:
>
> Added attachment of the error message.
>
>
>
> On Thu, Nov 7, 2024, 11:12 PM Jacob Rollings <jacobrolling...@gmail.com>
> wrote:
>
> Hi,
>
>
>
> I want to make the tables created by Flink Table API/SQL durable and
> permanent. To achieve this, I am trying the following basic example using
> the JDBC Oracle connector. I have added both the Flink JDBC and Oracle JDBC
> drivers to the Flink lib directory. I am using the Flink SQL client to run
> the queries. While it successfully creates tables in the default-database,
> I don't see the actual tables in the Oracle database. Moreover, the tables
> created under the Flink default-database seem to exist only as long as the
> session is active.
>
>
>
> What steps should I take to ensure that the in-memory tables I work with
> during my Flink job are permanently stored in the database?
>
>
>
> The documentation mentions using Catalogs and Connectors to persist
> in-memory tables to a database. I want to store my tables and data in
> Oracle DB. However, I noticed that Flink supports only Hive, PostgreSQL,
> and MySQL catalogs. Does this mean my data will reside in Oracle while the
> metadata about the tables can be stored only in a Hive, PostgreSQL, or
> MySQL metastore?
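>
> For reference, the catalog registration shown in the documentation looks
> roughly like this (PostgreSQL example with placeholder names and
> credentials):
>
>     CREATE CATALOG my_jdbc_catalog WITH (
>       'type' = 'jdbc',
>       'default-database' = 'mydb',
>       'username' = '<user>',
>       'password' = '<password>',
>       'base-url' = 'jdbc:postgresql://pghost:5432'
>     );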
>
>
>
> The documentation on this topic seems to cover only basic concepts and
> lacks complete examples on how to achieve this. Any pointers or detailed
> guidance would be greatly appreciated.
>
>
>
> Thanks.
>
>
