Thanks Martijn, the documentation for Async IO also indicated the same,
and that's what prompted me to post this question here.
~
Karthik
On Mon, Jun 12, 2023 at 7:45 PM Martijn Visser
wrote:
> Hi Karthik,
>
> In my opinion, it makes more sense to use a sink to leverage Scylla over
> using Async IO.
Hello,
Using the Flink record stream format FileSource API as below to read Parquet
records.
FileSource.FileSourceBuilder source =
    FileSource.forRecordStreamFormat(streamFormat, path);
source.monitorContinuously(Duration.ofMillis(1));
Want to log/generate metrics for corrupt records and
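For reference, a minimal self-contained sketch of this kind of job, assuming the
Parquet files are read as Avro GenericRecords via AvroParquetReaders; the schema,
input path and monitoring interval below are placeholders, and corrupt-record
metrics are not covered here:

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericRecord;
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.connector.file.src.FileSource;
import org.apache.flink.core.fs.Path;
import org.apache.flink.formats.parquet.avro.AvroParquetReaders;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

import java.time.Duration;

public class ParquetFileSourceJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Avro schema describing the Parquet records; replace with the real schema.
        Schema schema = new Schema.Parser().parse(
                "{\"type\":\"record\",\"name\":\"Example\",\"fields\":[{\"name\":\"id\",\"type\":\"long\"}]}");

        FileSource<GenericRecord> source = FileSource
                .forRecordStreamFormat(AvroParquetReaders.forGenericRecord(schema),
                        new Path("/tmp/parquet-input"))
                // Keep scanning the directory for new files; 1 ms (as in the mail
                // above) is very aggressive, something like 10 s is more typical.
                .monitorContinuously(Duration.ofSeconds(10))
                .build();

        DataStream<GenericRecord> records =
                env.fromSource(source, WatermarkStrategy.noWatermarks(), "parquet-file-source");

        records.print();
        env.execute("parquet-file-source-example");
    }
}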
Hi Yogesh,
I think you need to build the dynamic SQL statement in your service and
then submit the SQL to the Flink cluster.
Best,
Shammon FY
On Mon, Jun 12, 2023 at 9:15 PM Yogesh Rao wrote:
> Hi,
>
> Is there a way we can build a dynamic SQL in Flink from contents of Map ?
>
> Essentially trying
Hi,
I tried to install the apache-flink Python package but I am getting an error
message, and I am unsure how to resolve this, since I am able to install other
packages like pandas without any issues. Any help would be greatly appreciated.
vagrant@devbox:~$ pip install apache-flink
Defaulting to user in
Hi Karthik,
In my opinion, it makes more sense to use a sink to leverage Scylla over
using Async IO. The primary use case for Async IO is enrichment, not for
writing to a sink.
Best regards,
Martijn
On Mon, Jun 12, 2023 at 4:10 PM Karthik Deivasigamani
wrote:
> Thanks Martijn for your response.
Thanks Martijn for your response.
One thing I did not mention was that we are in the process of moving away
from Cassandra to Scylla and would like to use the Scylla Java Driver for
the following reason:
> The Scylla Java driver is shard aware and contains extensions for a
> tokenAwareHostPolicy.
Hi,
Is there a way we can build a dynamic SQL in Flink from contents of Map ?
Essentially trying to achieve something like the below
StringBuilder builder = new StringBuilder("INSERT INTO sampleSink SELECT ");
builder.append("getColumnsFromMap(dataMap)");
builder.append(" FROM Data");
String sql = builder.toString();
1. Like a regular Kafka client, so it depends on how you configure it.
2. Yes
3. It depends on the failure of course, but you could create something like
a Dead Letter Queue in case deserialization fails for your incoming message.
Best regards,
Martijn
On Sat, Jun 10, 2023 at 2:03 PM Anirban Dut
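A minimal sketch of the Dead Letter Queue idea from point 3 above, assuming the
Kafka source hands over raw strings and parsing happens inside the job: records
that fail to parse are diverted to a side output that a separate sink (for example
another Kafka topic) can drain. RawEvent and its parse method are placeholders,
not part of any Flink API:

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.functions.ProcessFunction;
import org.apache.flink.util.Collector;
import org.apache.flink.util.OutputTag;

public class DeadLetterExample {

    // Side output channel for messages that could not be parsed.
    static final OutputTag<String> DEAD_LETTERS = new OutputTag<String>("dead-letters") {};

    // Placeholder POJO standing in for the real payload type.
    public static class RawEvent {
        public String payload;

        static RawEvent parse(String s) {
            if (s == null || s.isEmpty()) {
                throw new IllegalArgumentException("empty message");
            }
            RawEvent e = new RawEvent();
            e.payload = s;
            return e;
        }
    }

    static SingleOutputStreamOperator<RawEvent> parseWithDlq(DataStream<String> raw) {
        return raw.process(new ProcessFunction<String, RawEvent>() {
            @Override
            public void processElement(String value, Context ctx, Collector<RawEvent> out) {
                try {
                    out.collect(RawEvent.parse(value));
                } catch (Exception e) {
                    // Divert the bad payload instead of failing the job.
                    ctx.output(DEAD_LETTERS, value);
                }
            }
        });
    }
}

The failed payloads are then available via parseWithDlq(...).getSideOutput(DEAD_LETTERS)
and can be routed to whatever sink the dead letters should end up in.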
Hi!
I think you forgot to upgrade the operator CRD (which contains the updated
enum values).
Please see:
https://nightlies.apache.org/flink/flink-kubernetes-operator-docs-main/docs/operations/upgrade/#1-upgrading-the-crd
Cheers
Gyula
On Mon, 12 Jun 2023 at 13:38, Liting Liu (litiliu)
wrote:
>
Hi, I was trying to submit a Flink 1.17 job with the flink-kubernetes-operator
version v1.5.0, but encountered the below exception:
The FlinkDeployment "test-scale-z6t4cd" is invalid: spec.flinkVersion:
Unsupported value: "v1_17": supported values: "v1_13", "v1_14", "v1_15", "v1_16"
I think
Hi,
Why wouldn't you just use the Flink Kafka connector and the Flink Cassandra
connector for your use case?
Best regards,
Martijn
On Mon, Jun 12, 2023 at 12:03 PM Karthik Deivasigamani
wrote:
> Hi,
> I have a use case where I need to read messages from a Kafka topic,
> parse it and write
Hi,
I have a use case where I need to read messages from a Kafka topic,
parse them and write them to a database (Cassandra). Since Cassandra supports
async APIs, I was considering using the Async IO operator for my writes. I do
not need exactly-once semantics for my use case.
Is it okay to leverage the A
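For comparison, a rough sketch of the connector-based approach Martijn suggests
above, wiring a KafkaSource to the Cassandra sink; the bootstrap servers, topic,
keyspace/table and the comma-separated parsing are all placeholder assumptions:

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.cassandra.CassandraSink;

public class KafkaToCassandraJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        KafkaSource<String> kafka = KafkaSource.<String>builder()
                .setBootstrapServers("kafka:9092")          // placeholder address
                .setTopics("events")                        // placeholder topic
                .setGroupId("kafka-to-cassandra")
                .setStartingOffsets(OffsetsInitializer.earliest())
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();

        DataStream<Tuple2<String, String>> parsed = env
                .fromSource(kafka, WatermarkStrategy.noWatermarks(), "kafka-source")
                // Toy parsing step: key and value separated by a comma.
                .map(line -> {
                    String[] parts = line.split(",", 2);
                    return Tuple2.of(parts[0], parts.length > 1 ? parts[1] : "");
                })
                .returns(Types.TUPLE(Types.STRING, Types.STRING));

        // The Cassandra sink binds each Tuple2 field to the query parameters;
        // keyspace and table names are placeholders.
        CassandraSink.addSink(parsed)
                .setQuery("INSERT INTO demo.events (id, payload) VALUES (?, ?);")
                .setHost("cassandra-host")
                .build();

        env.execute("kafka-to-cassandra");
    }
}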
Please send an email to user-unsubscr...@flink.apache.org to unsubscribe
Best,
Hang
Yu voidy wrote on Mon, Jun 12, 2023 at 11:39: