Thanks for the pointer. I will try debugging. I am getting this exception when
running my application on Kubernetes using the Flink operator from Lyft.
Regards.
On Sat, 21 Sep 2019 at 6:44 AM, Dian Fu wrote:
> This exception is used internally to get the plan of a job before
> submitting it for execution.
This exception is used internally to get the plan of a job before submitting it
for execution. It is thrown for a special purpose, is caught internally in [1],
and is usually not surfaced to end users.
You could check the following places to find the cause of this problem:
1. Check
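One way this internal exception can end up in user hands (an assumption, not
something confirmed in this thread) is a broad try/catch around execute() in
main(): during plan extraction the client runs main() in a planning environment
that aborts it by throwing ProgramAbortException from execute(), and swallowing
that exception breaks submission. A minimal sketch of the pattern, with
hypothetical class and job names:

    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class MyJob {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            env.fromElements(1, 2, 3).print();
            // Do not wrap this call in a catch-all: the planning environment
            // aborts main() by throwing ProgramAbortException here, and it
            // must propagate for the client to obtain the job graph.
            env.execute("my-job");
        }
    }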
Hello!
I am having some difficulty with multiple job managers in an HA setup using
Flink 1.9.0.
I have 2 job managers and have set up HA with the following config:
high-availability: zookeeper
high-availability.cluster-id: /imet-enhance
high-availability.storageDir: hdfs:///flink/ha/
hig
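For comparison, a minimal sketch of a complete ZooKeeper HA section; the quorum
address below is a placeholder, not a value from this setup:

    high-availability: zookeeper
    high-availability.cluster-id: /imet-enhance
    high-availability.storageDir: hdfs:///flink/ha/
    high-availability.zookeeper.quorum: zk-host-1:2181,zk-host-2:2181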
Hi -
When you get an exception stack trace like this:
Caused by: org.apache.flink.client.program.OptimizerPlanEnvironment$ProgramAbortException
    at org.apache.flink.streaming.api.environment.StreamPlanEnvironment.execute(StreamPlanEnvironment.java:66)
    at org.apache.flink.streaming.api.environmen
Thank you for the clarification.
On Fri, Sep 20, 2019 at 6:59 AM Fabian Hueske wrote:
> Hi,
>
> It depends.
>
> There are many things that can be changed. A savepoint in Flink contains
> only the state of the application and not the configuration of the system.
> So an application can be migrated to another cluster that runs with a
> different configuration.
Hi all,
Happy Friday!
As a kind reminder, the meetup is ON next Tuesday at Yelp HQ in San
Francisco. See you all there at 6:30pm.
Regards,
Xuefu
On Fri, Aug 30, 2019 at 11:44 AM Xuefu Zhang wrote:
> Hi all,
>
> As promised, we planned to have a quarterly Flink meetup, and now it's about
> the ti
The use case is similar, but I am not able to check the heap space issue, as the
data size is small. I thought of checking with you in the meanwhile.
Thanks for looking into it. Really appreciate it.
I have marked the usage of temporal tables in bold red for ease of
reference.
On Fri, Sep 20, 2019, 8:18 PM Fabian Hueske
Hi,
This looks OK at first sight.
Is it doing what you expect?
Fabian
On Fri, Sep 20, 2019 at 4:29 PM Nishant Gupta <
nishantgupta1...@gmail.com> wrote:
> Hi Fabian,
>
> Thanks for the information.
> I have been reading about it and doing the same as part of a Flink job
> written in Java
Hi Fabian,
Thank you very much for your suggestion. This came up when I was using Flink SQL
to write data to HDFS at work and felt it was inconvenient, so I wrote this
function and would like to contribute it to the community. This is my first PR,
so some of the processes may not be clear to me. I am very sorry.
Hi Fabian,
Thanks for the information.
I have been reading about it and doing the same as part of a Flink job
written in Java.
I am using proctime for both tables. Can you please verify the
implementation of temporal tables once?
Here is the snippet:
public class
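For reference, a minimal sketch of a processing-time temporal table function
join in the 1.9 Table API; the table and field names are hypothetical, not
taken from the snippet above:

    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.table.api.Table;
    import org.apache.flink.table.api.java.StreamTableEnvironment;
    import org.apache.flink.table.functions.TemporalTableFunction;
    import org.apache.flink.types.Row;

    public class TemporalJoinSketch {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);

            // Both tables get a processing-time attribute.
            tEnv.registerDataStream("Orders",
                env.fromElements(Tuple2.of("EUR", 10L)),
                "currency, amount, o_proctime.proctime");
            tEnv.registerDataStream("Rates",
                env.fromElements(Tuple2.of("EUR", 2L)),
                "r_currency, rate, r_proctime.proctime");

            // Turn the history table into a temporal table function keyed on r_currency.
            TemporalTableFunction ratesFn = tEnv.scan("Rates")
                .createTemporalTableFunction("r_proctime", "r_currency");
            tEnv.registerFunction("RatesFn", ratesFn);

            Table result = tEnv.sqlQuery(
                "SELECT o.currency, o.amount * r.rate " +
                "FROM Orders AS o, LATERAL TABLE (RatesFn(o.o_proctime)) AS r " +
                "WHERE o.currency = r.r_currency");

            tEnv.toAppendStream(result, Row.class).print();
            env.execute("temporal-join-sketch");
        }
    }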
Btw. there is a set difference or minus operator in the Table API [1] that
might be helpful.
[1]
https://ci.apache.org/projects/flink/flink-docs-release-1.9/dev/table/tableApi.html#set-operations
On Fri, Sep 20, 2019 at 3:30 PM Fabian Hueske wrote:
> Hi Juan,
>
> Both the local execution
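As a concrete sketch of that minus operator on the batch Table API (set
difference is only supported on batch tables in 1.9); the data is made up:

    import org.apache.flink.api.java.ExecutionEnvironment;
    import org.apache.flink.table.api.Table;
    import org.apache.flink.table.api.java.BatchTableEnvironment;
    import org.apache.flink.types.Row;

    public class MinusSketch {
        public static void main(String[] args) throws Exception {
            ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
            BatchTableEnvironment tEnv = BatchTableEnvironment.create(env);

            Table left = tEnv.fromDataSet(env.fromElements("a", "b", "c"), "word");
            Table right = tEnv.fromDataSet(env.fromElements("b"), "word");

            // minus: rows of left that do not appear in right, duplicates removed;
            // minusAll would keep duplicates (bag semantics).
            Table diff = left.minus(right);

            tEnv.toDataSet(diff, Row.class).print(); // prints "a" and "c"
        }
    }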
Hi,
It depends.
There are many things that can be changed. A savepoint in Flink contains
only the state of the application and not the configuration of the system.
So an application can be migrated to another cluster that runs with a
different configuration.
There are some exceptions like the con
Hi,
Oh, now I understand your problem.
I don't think that Flink is able to remove the metadata early. The
implementation is designed for the general case, which needs to support the
case where the window data is not purged.
Something that might work is to not configure the window operator with
allowed lateness
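A minimal sketch of that suggestion: a tumbling event-time window with no
allowedLateness() call, so window state is purged as soon as the window fires.
Names and data are made up:

    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.flink.streaming.api.TimeCharacteristic;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.api.functions.timestamps.AscendingTimestampExtractor;
    import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;
    import org.apache.flink.streaming.api.windowing.time.Time;

    public class NoLatenessWindowSketch {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);

            env.fromElements(Tuple2.of("key", 1000L), Tuple2.of("key", 61000L))
                .assignTimestampsAndWatermarks(
                    new AscendingTimestampExtractor<Tuple2<String, Long>>() {
                        @Override
                        public long extractAscendingTimestamp(Tuple2<String, Long> e) {
                            return e.f1;
                        }
                    })
                .keyBy(0)
                .window(TumblingEventTimeWindows.of(Time.minutes(1)))
                // No .allowedLateness(...): window contents and metadata are
                // dropped as soon as the window fires.
                .sum(1)
                .print();

            env.execute("no-lateness-window-sketch");
        }
    }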
Hi Jun,
Thank you very much for your contribution.
I think a Bucketing File System Table Sink would be a great addition.
Our code contribution guidelines [1] recommend to discuss the design with
the community before opening a PR.
First of all, this ensures that the design is aligned with Flink's
Hi Juan,
Both the local execution environment and the remote execution environment
run the same code to execute the program.
The implementation of the sortPartition operator was designed to scale to
data sizes that exceed the available memory.
Internally, it serializes all records into byte arrays and sorts
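For context, a minimal sketch of sortPartition on the DataSet API; the data is
made up:

    import org.apache.flink.api.common.operators.Order;
    import org.apache.flink.api.java.DataSet;
    import org.apache.flink.api.java.ExecutionEnvironment;
    import org.apache.flink.api.java.tuple.Tuple2;

    public class SortPartitionSketch {
        public static void main(String[] args) throws Exception {
            ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
            DataSet<Tuple2<String, Integer>> data = env.fromElements(
                Tuple2.of("b", 2), Tuple2.of("a", 1), Tuple2.of("c", 3));
            // Sorts records within each parallel partition; serialized records
            // spill to disk when they exceed managed memory.
            data.sortPartition(0, Order.ASCENDING).print();
        }
    }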
Hi RAMALINGESWARA,
Are you sure it's reading your input data correctly? Asking this because I
saw that the default input data (which is applied if there is no input data
offered) is just 15 elements.
Actually, the default number of iterations is 10. You could pass a parameter
"--iterations $the_number_
Ah, now I understand what exactly your requirement is.
I don't think there is such a tool in Flink that could help you fetch
and store the content of the REST API. It does not seem to be a general
requirement.
But I'm really interested in the motivation behind your requirement. Could
you share more about it?