Hi Timo,
many thanks for your feedback; we look forward to hearing from users of
SANSA.
Sounds good, I will send it to the community mailing list as well, thanks a
lot for letting us know.
Best regards,
On Fri, Dec 14, 2018 at 1:22 PM Timo Walther wrote:
> Hi,
>
> looks like a v
Hi Nastaran,
Thank you for your reply!
I understand the concept of the savepoint now.
It looks so nice.
Regards,
Yuta
On 2018/12/18 14:41, Nastaran Motavali wrote:
Hi Yuta,
You can use cancel-with-savepoint to stop your application and save the
state in a savepoint, then update your jar and restart the app
Hi Yuta,
You can use cancel-with-savepoint to stop your application and save the state in
a savepoint, then update your jar and restart the application from the saved
savepoint. Checkpointing is an automatic mechanism to recover from runtime
failures and savepoints are designed for manual restar
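The cancel-with-savepoint workflow described above can be sketched with the Flink CLI (a sketch; the job ID, savepoint directory, and jar name are placeholders, not values from this thread):

```shell
# Cancel the running job and write a savepoint to the given directory
flink cancel -s hdfs:///flink/savepoints <jobId>

# After deploying the updated jar, resume from the savepoint that was written
flink run -s hdfs:///flink/savepoints/<savepointDir> my-updated-job.jar
```

Note that resuming with a modified jar assumes the state schema and operator UIDs remain compatible with the savepoint.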
Hi,
I need to know how to implement custom metrics in my Flink program.
Currently, I know we can create custom metrics with the help of
RuntimeContext, but in my aggregate() I do not have a RuntimeContext. I am
using a window operator and applying the aggregate() method on it. And I am
passing
Aggrega
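A common workaround for this (a sketch, not from this thread; class, type, and metric names are made up): since an AggregateFunction cannot be a rich function, pass a ProcessWindowFunction as the second argument to aggregate(). ProcessWindowFunction is a rich function, so getRuntimeContext() is available there for registering metrics.

```java
import org.apache.flink.configuration.Configuration;
import org.apache.flink.metrics.Counter;
import org.apache.flink.streaming.api.functions.windowing.ProcessWindowFunction;
import org.apache.flink.streaming.api.windowing.windows.TimeWindow;
import org.apache.flink.util.Collector;

// Hypothetical: assumes the AggregateFunction emits a Long per window, keyed by String.
public class WindowMetricsFunction
        extends ProcessWindowFunction<Long, Long, String, TimeWindow> {

    private transient Counter windowsFired;

    @Override
    public void open(Configuration parameters) {
        // RuntimeContext is available here, unlike inside AggregateFunction
        windowsFired = getRuntimeContext()
                .getMetricGroup()
                .counter("windowsFired"); // metric name chosen for illustration
    }

    @Override
    public void process(String key, Context ctx, Iterable<Long> aggregated,
                        Collector<Long> out) {
        windowsFired.inc();
        // aggregate() delivers exactly one pre-aggregated element per window
        out.collect(aggregated.iterator().next());
    }
}

// Usage (hypothetical MyAggregate):
// stream.keyBy(...).window(...).aggregate(new MyAggregate(), new WindowMetricsFunction());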
Hi Vino,
I am running everything (and generating the exception) locally I believe,
as the tutorial instructs.
Thanks,
Brett
On Mon, Dec 17, 2018 at 6:12 PM vino yang wrote:
> Hi Brett,
>
> Is your exception generated on your local machine or generated on a remote
> node?
>
> Best,
> Vino
>
> B
Hi all
Now I'm trying to update my streaming application.
But I have no idea how to update it gracefully.
Should I stop it, replace a jar file then restart it?
In my understanding, in that case, all the state will be recovered if I
use checkpoints in a persistent storage.
Is this correct?
Tha
Hi,
We have implemented ANALYZE TABLE in our internal version of Flink, and we
will try to contribute back to the community.
Best,
Kurt
On Thu, Nov 29, 2018 at 9:23 PM Fabian Hueske wrote:
> I'd try to tune it in a single query.
> If that does not work, go for as few queries as possible, spli
Hi Brett,
Is your exception generated on your local machine or generated on a remote
node?
Best,
Vino
Brett Marcott wrote on Sat, Dec 15, 2018 at 5:38 PM:
> Hi Flink users,
>
> I am attempting to follow the tutorial here:
>
> https://ci.apache.org/projects/flink/flink-docs-release-1.7/tutorials/datastream_a
Oh this is timely!
I hope I can save you some pain Kostas! (cc-ing to flink dev to get
feedback there for what I believe to be a confirmed bug)
I was just about to open up a flink issue for this after digging (really)
deep and figuring out the issue over the weekend.
The problem arises due the
Hi,
Thx for your reply and the pointers on the currentLowWatermark. Looks like
the Flink UI has a Watermarks tab for each operator.
I dump 5 records into the Kinesis Data Stream and am trying to read the
same records from the FlinkKinesisConsumer, but am not able to.
I am using the same monitorin
Hi,
In a Kubernetes deployment, I'm not able to display metrics in the dashboard. I
tried to expose and fix the metrics.internal.query-service.port variable,
but nothing changed. Do you have any ideas?
Thx
Eric
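For the record, pinning the port and exposing it on the pods usually go together; a minimal sketch (the port number and pod spec fields below are assumptions for illustration, not from this thread):

```yaml
# flink-conf.yaml: pin the internal query service to a fixed port
# instead of a random one, so it can be exposed in Kubernetes
metrics.internal.query-service.port: 50101
---
# TaskManager pod spec excerpt: expose that same port on the container
ports:
  - containerPort: 50101
    name: metrics-query
```

The dashboard's metrics fetcher must be able to reach this port on every TaskManager, so the container port and the configured port have to match.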
Hi Jiayi,
There is a ticket[1] for supporting dynamic patterns, which I would say
is a superset of what you are actually suggesting.
[1]https://issues.apache.org/jira/browse/FLINK-7129
On 17/12/2018 03:17, fudian.fd wrote:
> Hi Jiayi,
>
> As far as I know, there is no plan to support this feature.