On Mon, Jun 24, 2024 at 10:02 AM elakiya udhayanan
wrote:
> Hi Alexis and Gabor ,
>
> Thanks for your valuable response and suggestions. Will try to work on the
> suggestions and get back to you if I require more details.
>
> Thanks,
> Elakiya
>
> On Sun, Jun 23, 202
we can discuss this further based on that...
>
> BR,
> G
>
>
> On Fri, Jun 21, 2024 at 7:16 PM elakiya udhayanan
> wrote:
>
>> Hi Team,
>>
>> I would like to remind about the request for the help required to fix the
>> vulnerabilities seen in the Flink D
Hi Team,
I would like to remind you about my request for help to fix the
vulnerabilities seen in the Flink Docker image. Any help is appreciated.
Thanks in advance.
Thanks,
Elakiya U
On Tue, Jun 18, 2024 at 12:51 PM elakiya udhayanan
wrote:
> Hi Community,
>
> In o
Hi Community,
In one of our applications we are using a Flink Docker image and running
Flink as a Kubernetes pod. As per policy, we tried scanning the Docker
image for security vulnerabilities using JFrog Xray and found that there
are multiple critical vulnerabilities being reported, as seen in th
Hi Team,
I would like to know the possibilities of configuring New Relic alerts
for a Flink job whenever the job is submitted, fails, or recovers
from a failure.
In our case, we have configured the Flink environment as a Kubernetes pod
running on an EKS cluster and the application code
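A minimal sketch of one way to surface those job-state changes, assuming the JobManager REST endpoint is reachable from wherever the alerting hook runs; the service address below is a placeholder, and wiring the output into a New Relic alert or custom event is left out:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class JobStatusProbe {
    public static void main(String[] args) throws Exception {
        // Placeholder address: the JobManager REST endpoint exposed by the
        // Kubernetes service for the Flink pod (port 8081 by default).
        String restBase = "http://flink-jobmanager.mynamespace.svc:8081";

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(restBase + "/jobs/overview"))
                .GET()
                .build();

        // The response lists every job with its current "state"
        // (e.g. RUNNING, FAILED, RESTARTING); an external alerting hook
        // could fire on transitions between these states.
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}

The "state" field in that response is what an alert condition would key on; how it is pushed into New Relic depends on the monitoring setup and is not confirmed by this thread.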
String relateQuery = "insert into xxx select correlator_id, name,
> relationship from Correlation";
>
>
> Best,
> Yu Chen
>
>
> Get Outlook for iOS <https://aka.ms/o0ukef>
> --
> *From:* Zhanghao Chen
> *Sent:* Wednes
iya.
> Are you following the example here[1]? Could you attach a minimal,
> reproducible SQL?
>
> [1]
> https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/sql/insert/
>
>
>
> --
> Best!
> Xuyang
>
>
> At 2023-12-06 17:49:17, "
Hi Team,
I would like to know the possibility of having two sinks in a single Flink
job. In my case I am using a Flink SQL based job where I try to consume
from two different Kafka topics using the CREATE TABLE DDL (as below),
then use a join condition to correlate them, and at present write i
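A minimal sketch of how two sinks can share one job via a statement set, assuming the source and sink tables have already been declared with CREATE TABLE DDL; the table and column names here are placeholders, not the actual schema from this thread:

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.StatementSet;
import org.apache.flink.table.api.TableEnvironment;

public class TwoSinksJob {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inStreamingMode().build());

        // The CREATE TABLE DDL for the two Kafka sources and the two sinks
        // is assumed to have been executed here via tEnv.executeSql(...).

        // Group both INSERTs into one statement set so they run as a single job.
        StatementSet set = tEnv.createStatementSet();
        set.addInsertSql(
                "INSERT INTO sink_one SELECT e.id, e.name FROM employee e");
        set.addInsertSql(
                "INSERT INTO sink_two " +
                "SELECT e.id, d.dept_name FROM employee e " +
                "JOIN department d ON e.id = d.emp_id");
        set.execute();
    }
}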
the 'employee' to
> a single sink table and use it directly.
>
> BTW, why do you need the semantics about the pk?
>
>
> [1]
> https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/data_stream_api/
>
>
>
> --
> Best!
> Xuyang
>
>
issues/FLINK-33400?filter=allopenissues
>
> --
> Best!
> Xuyang
>
>
> At 2023-10-30 16:42:03, "elakiya udhayanan" wrote:
>
> Hi team,
>
> I have a Kafka topic named employee which uses confluent avro schema and
> will emit the payload as below
Hi team,
I have a Kafka topic named employee which uses confluent avro schema and
will emit the payload as below:
{
"employee": {
"id": "123456",
"name": "sampleName"
}
}
I am using the upsert-kafka connector to consume the events from the above
Kafka topic as below using the Flink SQL DDL statem
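A minimal sketch of what such a DDL could look like for that nested payload, assuming the employee id is also carried in the Kafka message key as a plain string; the broker address, schema registry URL, and column names are placeholders rather than the exact DDL from this thread:

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class EmployeeSourceDdl {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inStreamingMode().build());

        // The nested value payload is mapped to a ROW column; the primary key
        // column is read from the Kafka message key (required by upsert-kafka).
        tEnv.executeSql(
                "CREATE TABLE employee_src (" +
                "  emp_key STRING," +
                "  employee ROW<id STRING, name STRING>," +
                "  PRIMARY KEY (emp_key) NOT ENFORCED" +
                ") WITH (" +
                "  'connector' = 'upsert-kafka'," +
                "  'topic' = 'employee'," +
                "  'properties.bootstrap.servers' = 'localhost:9092'," +     // placeholder
                "  'key.format' = 'raw'," +
                "  'value.format' = 'avro-confluent'," +
                "  'value.avro-confluent.url' = 'http://schema-registry:8081'" +  // placeholder
                ")");
    }
}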
:
> https://nightlies.apache.org/flink/flink-docs-release-1.17/docs/dev/table/hive-compatibility/hive-dialect/queries/cte/
>
> On Thu, 19 Oct 2023 at 08:05, elakiya udhayanan
> wrote:
>
>> Hi Team,
>>
>> I have a Flink job which uses the upsert-kafka connector to consume the
Hi Team,
I have a Flink job which uses the upsert-kafka connector to consume the
events from two different Kafka topics (confluent avro serialized) and
write them to two different tables (in Flink's memory using Flink's SQL
DDL statements).
I want to correlate them using the SQL join statemen
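A minimal sketch of the join step, assuming the two upsert-kafka source tables and a sink table have already been declared; the table and column names are placeholders inferred from the thread, not the actual schema:

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class CorrelationJob {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inStreamingMode().build());

        // CREATE TABLE DDL for 'employee', 'department' (upsert-kafka sources)
        // and 'correlated_sink' is assumed to have been executed already.

        // A regular join between the two changelog tables; the result is
        // written to a single sink table.
        tEnv.executeSql(
                "INSERT INTO correlated_sink " +
                "SELECT e.id, e.name, d.dept_name " +
                "FROM employee e " +
                "JOIN department d ON e.id = d.emp_id");
    }
}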
> 3. Or you could check the checkpoint part in the Flink UI; it will
> show the detailed info about the restored checkpoint (ID and path).
>
>
> On Thu, Sep 28, 2023 at 9:21 PM elakiya udhayanan
> wrote:
>
>> Hi Feng,
>>
>> Thanks for your response.
>>
ink job for recovery.
> This is necessary to continue consuming from the historical offset
> correctly.
>
>
> Best,
> Feng
>
>
> On Thu, Sep 28, 2023 at 4:41 PM elakiya udhayanan
> wrote:
>
>> Hi team,
>>
>> I have a Kafka topic named employee w
Hi team,
I have a Kafka topic named employee which uses confluent avro schema and
will emit the payload as below:
{
"id": "emp_123456",
"employee": {
"id": "123456",
"name": "sampleName"
}
}
I am using the upsert-kafka connector to consume the events from the above
Kafka topic as below using the F
The client cannot access the right network to submit your
> job; maybe the address option in k8s is wrong, and you can check the error
> message in the k8s log
>
> Best,
> Shammon FY
>
> On Fri, Aug 11, 2023 at 11:40 PM elakiya udhayanan
> wrote:
>
>>
>> Hi Team,
>
Hi Team,
We are using Apache Flink 1.16.1, configured as a standalone Kubernetes pod,
for one of our applications to read from Confluent Kafka topics to do
event correlation. We are using Flink's Table API join for the same (in
SQL format). We are able to submit the job using Flink's UI. For
Hi Team,
I am using the upsert-kafka Table API connector of Apache Flink to consume
events from a Kafka topic, and I want to log the Kafka payloads that are
consumed. Is there a way to log them?
My code looks as below:
EnvironmentSettings settings =
EnvironmentSettings.newInstance().inStreamingMode().
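One possible way to log the consumed payloads (not necessarily what was settled on in this thread) is to convert the source table into a changelog stream and attach a print or logging sink. A minimal sketch, assuming a source table named employee_src has already been declared via DDL:

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

public class PayloadLoggingJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);

        // The upsert-kafka CREATE TABLE DDL for 'employee_src' is assumed
        // to have been executed here via tEnv.executeSql(...).

        // Convert the table into a changelog stream and print every row;
        // the output goes to the TaskManager stdout/logs, which is one way
        // to see each consumed payload. print() could be swapped for a
        // map() that writes through an SLF4J logger instead.
        Table employee = tEnv.sqlQuery("SELECT * FROM employee_src");
        tEnv.toChangelogStream(employee).print();

        env.execute("log-consumed-payloads");
    }
}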
G PRIMARY KEY NOT ENFORCED,
> name STRING
> ) WITH (
> ...
> )
>
> Best regards,
> Jane
>
> On Mon, Jul 10, 2023 at 7:32 PM elakiya udhayanan
> wrote:
>
>> Hi Hang,
>> Once again thanks for your response, but I think you have misunderstood
>> my qu
ction, which will be used to get fields from Kafka message key.
>
> Best,
> Hang
>
> elakiya udhayanan wrote on Mon, Jul 10, 2023 at 16:41:
>
>> Hi Hang,
>>
>> The select query works absolutely fine; we have also implemented join
>> queries, which also work without any i
*
> from KafkaTable`. Then I think there will be some error or the `user_id`
> will not be read correctly.
>
> Best,
> Hang
>
> elakiya udhayanan wrote on Mon, Jul 10, 2023 at 16:25:
>
>> Hi Hang Ruan,
>>
>> Thanks for your response. But in the documentation, they have an
Thanks,
Elakiya
On Mon, Jul 10, 2023 at 1:09 PM Hang Ruan wrote:
> Hi, elakiya.
>
> The upsert-kafka connector will read the primary keys from the Kafka
> message keys. We cannot define the fields in the Kafka message values as
> the primary key.
>
> Best,
> Hang
>
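A minimal sketch of the mapping Hang describes, assuming a raw string message key: the PRIMARY KEY column is deserialized with 'key.format' from the Kafka message key, and the remaining columns come from the message value. Names and option values below are placeholders:

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class KeyedEmployeeDdl {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inStreamingMode().build());

        // 'value.fields-include' = 'EXCEPT_KEY' keeps the key column out of
        // the value schema, so only the message key has to carry emp_id.
        tEnv.executeSql(
                "CREATE TABLE employee_keyed (" +
                "  emp_id STRING," +
                "  name STRING," +
                "  PRIMARY KEY (emp_id) NOT ENFORCED" +
                ") WITH (" +
                "  'connector' = 'upsert-kafka'," +
                "  'topic' = 'employee'," +
                "  'properties.bootstrap.servers' = 'localhost:9092'," +     // placeholder
                "  'key.format' = 'raw'," +
                "  'value.format' = 'avro-confluent'," +
                "  'value.avro-confluent.url' = 'http://schema-registry:8081'," +  // placeholder
                "  'value.fields-include' = 'EXCEPT_KEY'" +
                ")");
    }
}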
Hi team,
I have a Kafka topic named employee which uses confluent avro schema and
will emit the payload as below:
{
"employee": {
"id": "123456",
"name": "sampleName"
}
}
I am using the upsert-kafka connector to consume the events from the above
Kafka topic as below using the Flink SQL DDL statem
Hi Team,
I have a requirement to read Kafka events through Apache Flink and do
processing with the same.
The Kafka topic which produces the events to Apache Flink is a
Confluent Kafka topic, and it is hosted as a Kubernetes pod in the Docker
container.
The actual problem is I am unable to consu
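A minimal sketch of a plain DataStream consumer that can help verify connectivity first, assuming the broker is reachable through a Kubernetes service; the address and topic below are placeholders, and SimpleStringSchema merely stands in for the actual Confluent Avro deserialization:

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class ReadKafkaEvents {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // Placeholder broker address: for a Confluent Kafka pod inside the
        // cluster this is typically the Kubernetes service name and port.
        KafkaSource<String> source = KafkaSource.<String>builder()
                .setBootstrapServers("kafka.mynamespace.svc:9092")
                .setTopics("employee")
                .setGroupId("flink-consumer")
                .setStartingOffsets(OffsetsInitializer.earliest())
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();

        env.fromSource(source, WatermarkStrategy.noWatermarks(), "kafka-source")
                .print();   // print consumed records to verify connectivity

        env.execute("read-kafka-events");
    }
}

If even this plain consumer cannot reach the broker, the problem is most likely the advertised listener or service address rather than the Flink job itself.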