Hi Team,
I am using "AsyncDataStream.unorderedWait" to connect to cassandra . The
cassandra lookup operators are becoming the busy operator and creating
back-pressure result low throughput.
The Cassandra lookup is a very simple query. So I increased the capacity
parameter to 80 from 15 and cou
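(For reference, a minimal sketch of how such an async lookup is typically wired up with `AsyncDataStream.unorderedWait` and an explicit capacity; the lookup function and all names below are illustrative, not the actual job.)
```
import java.util.Collections;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

import org.apache.flink.streaming.api.datastream.AsyncDataStream;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.async.ResultFuture;
import org.apache.flink.streaming.api.functions.async.RichAsyncFunction;

public class AsyncCassandraLookupSketch {

    // Placeholder for the real lookup; a real implementation would issue a
    // non-blocking query through the Cassandra driver instead of this stub.
    static class CassandraLookup extends RichAsyncFunction<String, String> {
        @Override
        public void asyncInvoke(String key, ResultFuture<String> resultFuture) {
            CompletableFuture
                    .supplyAsync(() -> "value-for-" + key) // stand-in for the async query
                    .thenAccept(v -> resultFuture.complete(Collections.singleton(v)));
        }
    }

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        DataStream<String> keys = env.fromElements("a", "b", "c");

        // capacity = 80: at most 80 in-flight async requests per parallel subtask;
        // raising it only helps if Cassandra can actually absorb the extra requests.
        DataStream<String> enriched = AsyncDataStream.unorderedWait(
                keys, new CassandraLookup(), 5, TimeUnit.SECONDS, 80);

        enriched.print();
        env.execute("async-cassandra-lookup-sketch");
    }
}
```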
Hi,
I think I already got the answer now.
We found an issue that was flagged as a security risk: anyone can upload a
jar via the REST endpoint.
I am reading this part of the documentation:
https://nightlies.apache.org/flink/flink-docs-release-1.17/docs/deployment/security/security-ssl/
So the REST endpoint doe
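(For anyone following along: the page above covers enabling SSL and mutual authentication for the REST endpoint. Below is a hedged flink-conf.yaml sketch with placeholder paths and passwords; `web.submit.enable` is the separate switch that disables jar uploads through the web UI / REST API.)
```
# flink-conf.yaml (placeholder paths/passwords)
security.ssl.rest.enabled: true
security.ssl.rest.keystore: /path/to/rest.keystore
security.ssl.rest.keystore-password: keystore_password
security.ssl.rest.key-password: key_password
security.ssl.rest.truststore: /path/to/rest.truststore
security.ssl.rest.truststore-password: truststore_password
# Require client certificates (mutual auth) on the REST endpoint:
security.ssl.rest.authentication-enabled: true
# Independently of SSL, disable jar uploads/submission via the web UI and REST API:
web.submit.enable: false
```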
I found out someone else reported this and found a workaround:
https://issues.apache.org/jira/browse/FLINK-32241
On Mon, 10 Jul 2023 at 16:45, Alexis Sarda-Espinosa <
sarda.espin...@gmail.com> wrote:
> Hi again,
>
> I have found out that this issue occurred in 3 different clusters, and 2
>
Hi again,
I have found out that this issue occurred in 3 different clusters, and 2 of
them could not recover after restarting the pods; it seems the state was
completely corrupted afterwards and was thus lost. I had never seen this
before 1.17.1, so it might be a newly introduced problem.
Regards,
Alexis
Hello,
I have defined a pattern (a->b->c->d->e->f->g) that should match
within 10 minutes. However, when I check the task in the Flink WebUI, I notice
that the CEP operator quickly reaches a busy state with 100% utilization.
Increasing the parallelism of the CEP operator does not solve the
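(For context, a compact sketch of the kind of pattern being described; the event type and conditions are assumptions, only the a->b->... chain and the 10-minute window come from the question, and the chain is shortened to three steps here.)
```
import java.util.List;
import java.util.Map;

import org.apache.flink.cep.CEP;
import org.apache.flink.cep.PatternSelectFunction;
import org.apache.flink.cep.pattern.Pattern;
import org.apache.flink.cep.pattern.conditions.SimpleCondition;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.time.Time;

public class CepPatternSketch {

    // Hypothetical event type; only the chain shape and the 10-minute window
    // come from the original question.
    public static class Event {
        public String type;
        public Event() {}
        public Event(String type) { this.type = type; }
    }

    private static SimpleCondition<Event> is(String t) {
        return new SimpleCondition<Event>() {
            @Override
            public boolean filter(Event value) { return t.equals(value.type); }
        };
    }

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        DataStream<Event> events =
                env.fromElements(new Event("a"), new Event("b"), new Event("c"));

        // Chain shortened to a->b->c; the real pattern continues up to "g".
        Pattern<Event, ?> pattern = Pattern.<Event>begin("a").where(is("a"))
                .next("b").where(is("b"))
                .next("c").where(is("c"))
                .within(Time.minutes(10));

        DataStream<String> matches = CEP.pattern(events, pattern)
                .inProcessingTime()
                .select(new PatternSelectFunction<Event, String>() {
                    @Override
                    public String select(Map<String, List<Event>> match) {
                        return "matched: " + match.keySet();
                    }
                });

        matches.print();
        env.execute("cep-pattern-sketch");
    }
}
```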
Hi Hang,
Once again thanks for your response, but I think you have misunderstood my
question. At present we are only using the DDL format of the Table API, and the
only issue we face is that we are unable to set the primary key field for the
Flink table, since the value we want to use as the primary key is pre
At this time, no.
On 08/07/2023 04:00, Prasanna kumar wrote:
Hi all,
Java 21 plans to support lightweight threads (fibers) based on
Project Loom, which will increase concurrency to a great extent.
Is there any plan for Flink to leverage it?
Thanks,
Prasanna.
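(For reference, the Loom primitive in question, available as of Java 21; this is plain JDK usage, not Flink code.)
```
import java.util.concurrent.Executors;

public class VirtualThreadDemo {
    public static void main(String[] args) {
        // Java 21: one virtual thread per task; close() waits for submitted tasks.
        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            executor.submit(() -> System.out.println("running on " + Thread.currentThread()));
        }
    }
}
```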
Hello,
we have just experienced a weird issue in one of our Flink clusters which
might be difficult to reproduce, but I figured I would document it in case
some of you know what could have gone wrong. This cluster had been running
with Flink 1.16.1 for a long time and was recently updated to 1.17.
Hi, Elakiya.
If everything is right for the KafkaTable, I think there must be a
`user_id` field in the Kafka message key.
We can see this in the code of the method `createKeyValueProjections` in
`UpsertKafkaDynamicTableFactory`, shown below.
```
private Tuple2
createKeyValueProjections(ResolvedCatalo
Hi Hang,
The select query works absolutely fine; we have also implemented join
queries, which also work without any issues.
Thanks,
Elakiya
On Mon, Jul 10, 2023 at 2:03 PM Hang Ruan wrote:
> Hi, Elakiya.
>
> Maybe this DDL could be executed. Please execute the select sql `select *
> from Kafk
Hi, Elakiya.
Maybe this DDL can be executed. Please execute the select SQL `select *
from KafkaTable`. Then I think there will be some error, or the `user_id`
will not be read correctly.
Best,
Hang
On Mon, Jul 10, 2023 at 16:25, elakiya udhayanan wrote:
> Hi Hang Ruan,
>
> Thanks for your response. But in t
Hi Hang Ruan,
Thanks for your response. But in the documentation, they have an example of
defining the primary key in the DDL statement (code below). In that case
we should be able to define the primary key in the DDL, right? We have
defined the primary key in our earlier use cases when it wasn't
Thank you Hang for your answer.
Regarding your proposal 2, implementing such logic would prevent
parallelizing on the TMs: from the 1st ID I will fetch n IDs, but with
this approach all IDs will finally be managed by the same TM.
However, I am not totally satisfied with the 1st choice, which is t
Hi, elakiya.
The upsert-kafka connector will read the primary keys from the Kafka
message keys. We cannot define the fields in the Kafka message values as
the primary key.
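(For illustration, a hedged sketch of an upsert-kafka DDL in which the PRIMARY KEY column is populated from the Kafka message key; the topic name comes from this thread, while the schema, formats and addresses are placeholders.)
```
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class UpsertKafkaDdlSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inStreamingMode().build());

        // The PRIMARY KEY column (user_id) is read from the Kafka message *key*;
        // schema, formats and bootstrap servers below are placeholders.
        tEnv.executeSql(
                "CREATE TABLE KafkaTable (\n"
                        + "  user_id STRING,\n"
                        + "  name STRING,\n"
                        + "  PRIMARY KEY (user_id) NOT ENFORCED\n"
                        + ") WITH (\n"
                        + "  'connector' = 'upsert-kafka',\n"
                        + "  'topic' = 'employee',\n"
                        + "  'properties.bootstrap.servers' = 'localhost:9092',\n"
                        + "  'key.format' = 'raw',\n"
                        + "  'value.format' = 'json'\n"
                        + ")");
    }
}
```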
Best,
Hang
On Mon, Jul 10, 2023 at 13:49, elakiya udhayanan wrote:
> Hi team,
>
> I have a Kafka topic named employee which uses confluen
Hi, Benoit.
A split enumerator is responsible for discovering the source splits and
assigning them to the readers. It seems that your connector discovers
splits in the TM and assigns them in the JM.
I think there are 2 choices:
1. If you need the enumerator to assign splits, you have to send the ev
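(For illustration, a partial, hedged sketch of choice 1: the reader reports discovered IDs to the enumerator via a custom SourceEvent, and the enumerator turns them into splits and assigns them across all registered readers. All class names are hypothetical; the reader, source and split-serializer classes are omitted.)
```
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.flink.api.connector.source.SourceEvent;
import org.apache.flink.api.connector.source.SourceSplit;
import org.apache.flink.api.connector.source.SplitEnumerator;
import org.apache.flink.api.connector.source.SplitEnumeratorContext;

// Hypothetical split: one ID per split.
class IdSplit implements SourceSplit {
    final String id;
    IdSplit(String id) { this.id = id; }
    @Override
    public String splitId() { return id; }
}

// Hypothetical event a reader sends once it has discovered follow-up IDs,
// e.g. readerContext.sendSourceEventToCoordinator(new DiscoveredIdsEvent(ids)).
class DiscoveredIdsEvent implements SourceEvent {
    final List<String> ids;
    DiscoveredIdsEvent(List<String> ids) { this.ids = ids; }
}

// Enumerator that turns reported IDs into splits and spreads them across all
// registered readers, so the follow-up work is not pinned to a single TM.
class IdEnumerator implements SplitEnumerator<IdSplit, Void> {
    private final SplitEnumeratorContext<IdSplit> context;
    private final List<Integer> readers = new ArrayList<>();
    private int next = 0;

    IdEnumerator(SplitEnumeratorContext<IdSplit> context) { this.context = context; }

    @Override public void start() {}
    @Override public void addReader(int subtaskId) { readers.add(subtaskId); }
    @Override public void handleSplitRequest(int subtaskId, String hostname) {}
    @Override public void addSplitsBack(List<IdSplit> splits, int subtaskId) {}
    @Override public Void snapshotState(long checkpointId) { return null; }
    @Override public void close() throws IOException {}

    @Override
    public void handleSourceEvent(int subtaskId, SourceEvent event) {
        if (event instanceof DiscoveredIdsEvent) {
            for (String id : ((DiscoveredIdsEvent) event).ids) {
                int reader = readers.get(next % readers.size()); // round-robin
                next++;
                context.assignSplit(new IdSplit(id), reader);
            }
        }
    }
}
```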