Hi,
Currently Flink uses Kryo as the default serializer for data types that
Flink's type serialization stack doesn't support [1].
This also includes the serializers used for managed state registered by
users.
Because of this, at the moment it's not easy to upgrade the Kryo version,
since it is
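For context, a minimal sketch of how such a type ends up on the Kryo path and how a user can pin a specific serializer for it (the MyEvent type and MyEventSerializer below are hypothetical, for illustration only):

    import com.esotericsoftware.kryo.Kryo;
    import com.esotericsoftware.kryo.Serializer;
    import com.esotericsoftware.kryo.io.Input;
    import com.esotericsoftware.kryo.io.Output;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class KryoRegistrationSketch {

        // Hypothetical user type, used only for illustration.
        public static class MyEvent {
            public String payload;
        }

        // Hypothetical custom Kryo serializer for MyEvent.
        public static class MyEventSerializer extends Serializer<MyEvent> {
            @Override
            public void write(Kryo kryo, Output output, MyEvent event) {
                output.writeString(event.payload);
            }

            @Override
            public MyEvent read(Kryo kryo, Input input, Class<MyEvent> type) {
                MyEvent event = new MyEvent();
                event.payload = input.readString();
                return event;
            }
        }

        public static void main(String[] args) {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

            // Types that Flink's own stack cannot analyze fall back to Kryo;
            // registering an explicit serializer avoids relying on Kryo's default
            // FieldSerializer for this type.
            env.getConfig().registerTypeWithKryoSerializer(MyEvent.class, MyEventSerializer.class);
        }
    }

Because such registrations also feed the serializers used for managed state, an upgrade of Kryo itself presumably has to stay readable for state already written with the old version.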
Hi Stefan,
Thank you for the confirmation.
Doing a one-time cleanup with a full snapshot and upgrading to Flink 1.8
could work. However, in our case the state is quite large (TBs).
Taking a savepoint takes over an hour, during which we have to pause
the job or it may process more events.
The Java
Hi,
Flink 1.7 still uses kryo-2.24.0. Is there any specific reason for not
upgrading Kryo?
Thanks,
--
Sent from: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/
That's certainly the safe thing to do, but if you do not mutate the object,
a copy is not strictly necessary.
On Thu, Mar 14, 2019 at 9:19 PM Kurt Young wrote:
> Keep one thing in mind: if you want the element to remain legal after the
> function call ends (maybe map(), flatMap(), depends on wha
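To make the distinction concrete, a small sketch of a MapFunction (MyEvent is a hypothetical mutable type): holding on to an input element beyond the call requires a copy when object reuse is enabled, while merely reading and forwarding it does not.

    import java.util.ArrayList;
    import java.util.List;
    import org.apache.flink.api.common.functions.MapFunction;

    public class BufferingMapper implements MapFunction<BufferingMapper.MyEvent, BufferingMapper.MyEvent> {

        // Hypothetical mutable event type, only for illustration.
        public static class MyEvent {
            public long id;
            public MyEvent() {}
            public MyEvent(long id) { this.id = id; }
        }

        private final List<MyEvent> buffer = new ArrayList<>();

        @Override
        public MyEvent map(MyEvent value) {
            // With object reuse enabled (env.getConfig().enableObjectReuse()), the
            // runtime may hand the same MyEvent instance to later map() calls, so
            // keeping a reference beyond this call requires a defensive copy ...
            buffer.add(new MyEvent(value.id));

            // ... whereas only reading fields and forwarding the element does not.
            return value;
        }
    }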
Hi Experts,
When I am using the following expression in Flink SQL:
if(item_name='xxx', u.user_id, null)
the following exception was thrown, which is a bit confusing, because it is
actually caused by the fact that there is no IF function in Flink SQL. I think
it would be clearer to just poi
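For anyone hitting the same error, a sketch of the usual workaround: rewrite the expression with the standard CASE form, here via the Java Table API (table and column names are taken from the snippet above; the surrounding setup is assumed):

    import org.apache.flink.table.api.Table;
    import org.apache.flink.table.api.java.StreamTableEnvironment;

    public class CaseWhenSketch {

        // tableEnv is assumed to already have a table `u` registered that contains
        // the columns item_name and user_id used in the snippet above.
        static Table selectUserId(StreamTableEnvironment tableEnv) {
            // Flink SQL has no IF() function; the standard CASE expression does the
            // same job, and a CASE without ELSE evaluates to NULL.
            return tableEnv.sqlQuery(
                "SELECT CASE WHEN item_name = 'xxx' THEN u.user_id END AS user_id FROM u");
        }
    }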
Hi,
As the message said, some columns share the same names. You could first rename
the columns of one table with the `as` operation [1].
Best,
Xingcan
[1]
https://ci.apache.org/projects/flink/flink-docs-master/dev/table/tableApi.html#scan-projection-and-filter
> On Mar 15, 2019, at 9:03 AM,
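A minimal sketch of what that renaming could look like with the Java Table API, assuming both inputs carry the columns id, name, and value mentioned in the exception below:

    import org.apache.flink.table.api.Table;

    public class JoinRenameSketch {

        // left and right are assumed to both contain the columns id, name, and
        // value, which is what triggers the "ambiguous names" validation error.
        static Table joinWithoutAmbiguity(Table left, Table right) {
            // Rename the right-hand side first so no field name exists on both sides.
            Table renamedRight = right.as("r_id, r_name, r_value");

            return left
                .join(renamedRight)
                .where("id = r_id")
                .select("id, name, value, r_value");
        }
    }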
Hi Kurt,
Thanks for getting back and explaining this. The behavior in this case makes
more sense now after your explanation and reading the dynamic tables article. I
was able to hook up the Scoped aggregation like you suggested, so I have a
workaround for now. I guess the part that I’m trying to f
Exception in thread "main" org.apache.flink.table.api.ValidationException: join
relations with ambiguous names: id, name, value
    at org.apache.flink.table.plan.logical.LogicalNode.failValidation(LogicalNode.scala:156)
    at org.apache.flink.table.plan.logical.Join.validate(operators.
Hi Gary,
Thanks for the answer. I missed your most recent answer in this thread too.
However, my last question
Averell wrote
> How about changing the configuration of the Flink job itself during
> runtime?
> What I have to do now is to take a savepoint, stop the job, change the
> configuration,
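For reference, the procedure described above roughly corresponds to these CLI calls (the savepoint path, job id, and jar name are placeholders):

    # Cancel the job and take a savepoint in one step.
    bin/flink cancel -s hdfs:///flink/savepoints <jobId>

    # After changing the configuration, resubmit the job from that savepoint.
    bin/flink run -s hdfs:///flink/savepoints/savepoint-<id> my-job.jar [args]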
Hi Gary,
Thanks a lot for the explanation, and sorry for missing your earlier
message.
I am clear now.
Thanks and regards,
Averell
--
Sent from: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/
Hi Fabian,
I understand this is by-design behavior, since Flink was built primarily for
streaming. Supporting batch shuffles and a custom partition number may be
compelling for batch processing.
Could you help explain a bit more which work needs to be done, so
Flink can support c
Yes, we are submitting more than one job and we choose which one is going to be
executed depending on the first program argument (i.e., the 'job' argument).
From: Chesnay Schepler
Sent: Friday, 15 March 2019 12:53 PM
To: Papadopoulos, Konstantinos ;
user@flink.apache.org
Subject: Re: ProgramIn
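A minimal sketch of the dispatch pattern described above: a single main() that selects the pipeline from the first program argument and calls execute() exactly once (the job names and pipelines are placeholders):

    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class JobDispatcher {

        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

            String job = args.length > 0 ? args[0] : "";
            switch (job) {
                case "jobA":
                    env.fromElements(1, 2, 3).print();       // placeholder pipeline A
                    break;
                case "jobB":
                    env.fromElements("a", "b", "c").print(); // placeholder pipeline B
                    break;
                default:
                    throw new IllegalArgumentException("Unknown job: " + job);
            }

            // Exactly one execute() call at the end; the web/REST submission path
            // extracts the job graph at this call.
            env.execute(job);
        }
    }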
In your jar, are you submitting multiple jobs in parallel?
On 15.03.2019 10:05, Papadopoulos, Konstantinos wrote:
We had some progress, since the job seems to be submitted and its
execution has started, but now I am getting a
ProgramAbortException as follows:
05:01:01.788 [ERROR] Spri
Thank you for reaching out to Infra and the ember client.
When I first saw the Ember repository, I thought it was the whole thing
(frontend and backend), but while testing it I realized it is "only" the
frontend. I'm not sure if it makes sense to adjust the Ember observer
client, or just write a si
We had some progress, since the job seems to be submitted and its execution
has started, but now I am getting a ProgramAbortException as follows:
05:01:01.788 [ERROR] SpringApplication - Application run failed
org.apache.flink.client.program.OptimizerPlanEnvironment$ProgramAbortException:
Please separate your program arguments by a space instead of a comma and
try again.
On 15.03.2019 09:34, Papadopoulos, Konstantinos wrote:
Hi Chesnay,
Sorry for the misunderstanding. I get the following exception:
2019-03-15 04:31:26,826 ERROR
org.apache.flink.runtime.webmonitor.handlers.Ja
Hi Chesnay,
Sorry for the misunderstanding. I get the following exception:
2019-03-15 04:31:26,826 ERROR
org.apache.flink.runtime.webmonitor.handlers.JarRunHandler- Exception
occurred in REST handler.
org.apache.flink.runtime.rest.handler.RestHandlerException:
org.apache.flink.client.progr
Hi,
Flink works a bit differently from Spark.
By default, Flink uses pipelined shuffles, which push the sender's results
immediately to the receivers (by the way, this is one of the building blocks
for stream processing).
However, pipelined shuffles require that all receivers are online. Hence,
there num
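For the DataSet API, the data exchange mode can be switched away from pipelined shuffles explicitly; a minimal sketch (the pipeline itself is just a placeholder):

    import org.apache.flink.api.common.ExecutionMode;
    import org.apache.flink.api.java.ExecutionEnvironment;
    import org.apache.flink.api.java.io.DiscardingOutputFormat;

    public class BatchExchangeExample {

        public static void main(String[] args) throws Exception {
            ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

            // Use blocking (batch) data exchanges instead of the default pipelined
            // ones, so receivers do not have to be online while senders still run.
            env.getConfig().setExecutionMode(ExecutionMode.BATCH);

            // Placeholder pipeline; rebalance() forces a data exchange.
            env.fromElements(1, 2, 3, 4)
               .rebalance()
               .output(new DiscardingOutputFormat<Integer>());

            env.execute("batch-exchange-example");
        }
    }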
Please provide the logged exception; I cannot help you otherwise.
On 14.03.2019 14:20, Papadopoulos, Konstantinos wrote:
It seems that the Flink cluster does not retrieve the program arguments
correctly. For reference, I sent the following request:
Method Type: POST
URL:
http://dbtpa05p.ch3.dev.i.
Hi Averell,
I think I have answered your question previously [1]. The bottom line is that
the error is logged at INFO level in the ExecutionGraph [2]. However, your
effective log level (of the root logger) is WARN. The log levels are ordered
as follows [3]:
TRACE < DEBUG < INFO < WARN < ERROR
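A sketch of the corresponding change in conf/log4j.properties, based on the default Flink logging setup (the appender name "file" may differ in your configuration):

    # conf/log4j.properties (sketch)
    # Root logger at INFO so that messages logged at INFO level, such as the
    # errors logged by the ExecutionGraph mentioned above, are not filtered out.
    log4j.rootLogger=INFO, file

    # Alternatively, raise only the ExecutionGraph logger:
    log4j.logger.org.apache.flink.runtime.executiongraph.ExecutionGraph=INFO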
Hi Gary,
The job manager was indeed being invoked with a second parameter.
${Flink_HOME}/bin/jobmanager.sh start cluster
I removed the second argument and everything works fine now. I really
appreciate your help. Thanks a lot :-)
Regards,
Harshith
From: Gary Yao
Date: Friday, 15 March 2019 a
I forgot to add line numbers to the first link in my previous email:
https://github.com/apache/flink/blob/c6878aca6c5aeee46581b4d6744b31049db9de95/flink-dist/src/main/flink-bin/bin/jobmanager.sh#L21-L25
On Fri, Mar 15, 2019 at 8:08 AM Gary Yao wrote:
> Hi Harshith,
>
> In the jobmanager.sh scr
Hi Harshith,
In the jobmanager.sh script, the 2nd argument is assigned to the HOST
variable [1]. How are you invoking jobmanager.sh? Prior to 1.5, the script
expected an execution mode (local or cluster), but this is no longer the
case [2].
Best,
Gary
[1]
https://github.com/apache/flink/blob/c687
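For completeness, a sketch of the invocation difference (using the path from Harshith's message):

    # Flink 1.5+: no execution mode argument; an optional second argument is
    # interpreted as the host to bind to.
    ${Flink_HOME}/bin/jobmanager.sh start

    # Pre-1.5 style used above; "cluster" is now taken as the HOST value.
    ${Flink_HOME}/bin/jobmanager.sh start cluster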