Hi Yang,
The problem has reoccurred; the full JM log is attached.
Thanks,
Alexey
From: Yang Wang
Sent: Sunday, February 28, 2021 10:04 PM
To: Alexey Trenikhun
Cc: Flink User Mail List
Subject: Re: Kubernetes HA - attempting to restore from wrong (non-existing) savepoint
omit it !!!
jiahong li wrote on Wednesday, March 10, 2021 at 10:27 AM:
> hi, sorry to bother you. In Spark 3.0.1, hive-1.2 is supported, but in Spark
> 3.1.x the hive-1.2 Maven profile is removed. Does that mean hive-1.2 is not
> supported in Spark 3.1.x? How can I support hive-1.2 in Spark 3.1.x, or is
> there any JIRA? Can anyone help me?
Hi Flink Community,
After configuring the JDBC timeout, I still could not get rid of the issue.
https://issues.apache.org/jira/browse/FLINK-21674
I created a JIRA task to describe the problem. Any suggestion is appreciated.
Best regards,
Fuyao
From: Fuyao Li
Date: Wednesday, March 3, 2021
hi, sorry to bother you. In Spark 3.0.1, hive-1.2 is supported, but in Spark
3.1.x the hive-1.2 Maven profile is removed. Does that mean hive-1.2 is not
supported in Spark 3.1.x? How can I support hive-1.2 in Spark 3.1.x, or is
there any JIRA? Can anyone help me?
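For context, in Spark 3.0.x the Hive 1.2 fork was selected at build time via the hive-1.2 Maven profile, which Spark 3.1.x removed, leaving only the built-in Hive 2.3 execution path. A sketch of the 3.0.x-era build invocation (flags per the Spark 3.0 build docs; verify against your checkout):

```shell
# Spark 3.0.x only: the hive-1.2 profile no longer exists in 3.1.x,
# so this command fails there; 3.1.x ships Hive 2.3 support only.
./build/mvn -DskipTests -Phive -Phive-1.2 -Phive-thriftserver clean package
```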
Hi Yun Tang,
Thanks for the information.
Best,
Yik San Chan
On Wed, Mar 10, 2021 at 1:07 AM Yun Tang wrote:
> Hi Yik,
>
> As far as I know, the source code of the Ververica connector is not public;
> you could refer to [1] for an open-source implementation.
>
>
> [1]
> https://github.com/apache/b
Hi all,
I'm trying to use the State Processor API to extract all keys from a
RocksDB savepoint produced by an operator in a Flink streaming job into CSV
files.
The problem is that the storage size of the savepoint is 30TB and I'm
running into garbage collection issues no matter how much memory in
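The memory pressure usually comes from buffering every key before writing. A minimal stand-in for the write side (the real job would obtain keys via the State Processor API's reader functions; this sketch only shows streaming keys straight to a CSV file one at a time instead of collecting them):

```java
import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Iterator;
import java.util.stream.LongStream;

public class Main {
    // Write keys to CSV one at a time so heap usage stays flat no matter
    // how many keys the savepoint holds.
    static void writeKeysToCsv(Iterator<String> keys, Path out) throws IOException {
        try (BufferedWriter w = Files.newBufferedWriter(out)) {
            w.write("key");
            w.newLine();
            while (keys.hasNext()) {
                w.write(keys.next());
                w.newLine();
            }
        }
    }

    public static void main(String[] args) throws IOException {
        // Stand-in for the key iterator a state reader would produce.
        Iterator<String> keys =
                LongStream.range(0, 5).mapToObj(Long::toString).iterator();
        Path out = Files.createTempFile("keys", ".csv");
        writeKeysToCsv(keys, out);
        System.out.println(Files.readAllLines(out)); // [key, 0, 1, 2, 3, 4]
    }
}
```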
Hi Yun,
It is confusing, but the UI now shows the expected value "At Least Once" (and
checkpointCfg#checkpointingMode shows AT_LEAST_ONCE as well). Clearly I either
looked in the wrong place or the job was not upgraded when I changed the
checkpointing mode ...
Sorry for the noise, and thank you for your help.
A
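For reference, the mode can also be pinned declaratively so a job upgrade cannot silently keep the old value (key names below are from the Flink 1.12 docs; verify against your version):

```yaml
# flink-conf.yaml
execution.checkpointing.mode: AT_LEAST_ONCE
execution.checkpointing.interval: 60s
```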
My Stack Overflow question:
https://stackoverflow.com/questions/66536868/flink-aws-s3-access-issue-intellij-idea?noredirect=1#comment117626682_66536868
On Tue, Mar 9, 2021 at 11:28 AM sri hari kali charan Tummala <
kali.tumm...@gmail.com> wrote:
> Here is my IntelliJ question.
>
>
> https://stac
Here is my IntelliJ question.
https://stackoverflow.com/questions/66536868/flink-aws-s3-access-issue-intellij-idea?noredirect=1#comment117626682_66536868
On Mon, Mar 8, 2021 at 11:22 AM sri hari kali charan Tummala <
kali.tumm...@gmail.com> wrote:
>
> Hi Flink Experts,
>>
>
> I am trying to read
Is there any reason not to have Nomad HA along the lines of the K8s HA? I think
it would depend on how pluggable the HA core code is. Any links to the
ZK/K8s-specific HA code would be highly appreciated.
Great, thank you so much!
On Tue, Mar 9, 2021 at 1:08 PM Till Rohrmann wrote:
> Hi Bob,
>
> Thanks for reporting this issue. I believe that this has been an
> oversight. I have filed a JIRA issue for fixi
Hi Bob,
Thanks for reporting this issue. I believe that this has been an oversight.
I have filed a JIRA issue for fixing this problem [1].
[1] https://issues.apache.org/jira/browse/FLINK-21693
Cheers,
Till
On Mon, Mar 8, 2021 at 4:15 PM Bob Tiernay wrote:
> Hi all,
>
> I have been trying to t
Hi!
I'm working on a join setup that does fuzzy matching in case the client
does not send enough parameters to join by a foreign key. There are a few
ways I can store the state, and I'm curious about best practices around this.
I'm using rocksdb as the state storage.
I was reading the code for Interv
Hi Abdullah,
The "Connection refused" exception should have no direct relationship with
checkpointing; I think you should check whether the socket source is working
well in your job.
Best
Yun Tang
From: Abdullah bin Omar
Sent: Tuesday, March 9, 2021 0:13
To: user@f
Hi Yik,
As far as I know, the source code of the Ververica connector is not public; you
could refer to [1] for an open-source implementation.
[1] https://github.com/apache/bahir-flink/tree/master/flink-connector-redis
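If you go the Bahir route, the connector is a regular Maven dependency (the version below is an assumption; check Maven Central for the current artifact and version):

```xml
<dependency>
  <groupId>org.apache.bahir</groupId>
  <artifactId>flink-connector-redis_2.11</artifactId>
  <version>1.0</version>
</dependency>
```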
Best
Yun Tang
From: Yik San Chan
Sent: Tue
Hi Maciek,
Thank you for reaching out. I'll try to answer your questions separately.
- nothing comparable. You already mention the State Processor API. Besides
that, I can only think of a side channel (CoFunction) that is used to
request a certain state that is then sent to a side output and ulti
Hi Dylan,
Unfortunately stop with savepoint is not supported with StateFun.
We will bump the priority of this issue and try to address it in the next
bugfix release.
Thanks,
Igal.
On Mon, Mar 8, 2021 at 9:08 PM Meissner, Dylan <
dylan.t.meiss...@nordstrom.com> wrote:
> Thank you for this inform
Great, thanks. I was able to work around the issue by implementing my own
KafkaRecordDeserializer. I will take a stab at a PR to fix the bug; it should
be an easy fix.
On Tue, Mar 9, 2021 at 9:26 AM Till Rohrmann wrote:
> Hi Bobby,
>
> This is most likely a bug in Flink. Thanks a lot for reporting t
Hi Bobby,
This is most likely a bug in Flink. Thanks a lot for reporting the issue
and analyzing it. I have created an issue for tracking it [1].
cc Becket.
[1] https://issues.apache.org/jira/browse/FLINK-21691
Cheers,
Till
On Mon, Mar 8, 2021 at 3:35 PM Bobby Richard
wrote:
> I'm receiving
Hi,
After implementing SourceFunction, you can use it to create a DataStream
using env.addSource() in your main method.
For example, if you have a custom source class named CustomSource that
implements SourceFunction, then it can be used to get
input data, and the if-statement afte
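The run()/cancel() lifecycle described above can be sketched with a simplified stand-in (this is not Flink's real org.apache.flink.streaming.api.functions.source.SourceFunction interface, just a minimal self-contained imitation of its cooperative-cancellation contract):

```java
import java.util.ArrayList;
import java.util.List;

public class Main {
    // Simplified stand-in for Flink's SourceFunction contract:
    // run() emits records until cancel() flips a volatile flag.
    interface SimpleSource<T> {
        void run(List<T> out) throws Exception; // stands in for SourceContext#collect
        void cancel();
    }

    static class CountingSource implements SimpleSource<Long> {
        private volatile boolean running = true;
        private final long limit;

        CountingSource(long limit) { this.limit = limit; }

        @Override
        public void run(List<Long> out) {
            long i = 0;
            // Checking the flag on every iteration is what makes
            // cancellation from another thread take effect promptly.
            while (running && i < limit) {
                out.add(i++); // in Flink this would be ctx.collect(i)
            }
        }

        @Override
        public void cancel() { running = false; }
    }

    public static void main(String[] args) throws Exception {
        CountingSource source = new CountingSource(5);
        List<Long> out = new ArrayList<>();
        source.run(out);
        System.out.println(out); // [0, 1, 2, 3, 4]
    }
}
```

In a real job you would pass an instance of your SourceFunction to env.addSource() rather than calling run() yourself; Flink invokes run() and cancel() on your behalf.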
Hi Smile,
Thank you for your reply.
I read [1] according to the last email. I will have to implement
SourceFunction and CheckpointedFunction in the main class, then
call run() and cancel() inside the main class. Is that correct?
I just ran the sample code from Apache Flink. I cannot under
Hi, all
Thanks, everyone, for your suggestions and feedback.
I think it is a good idea to settle on the default size of the
separated pool through testing. I am fine with adding the suffix (".size") to
the config name, which makes it clearer to users.
But I am a little worried about adding a prefix("