Hi Chesnay,
Adding on to this point you made: "the rpc address is still *required* due
to some technical implementations; it may be that you can set this to some
arbitrary value however."
For job submission to happen successfully, we have to set the specific rpc
address, not an arbitrary value.
Hi Chesnay,
If the REST API (i.e. the web server) is mandatory for submitting jobs, then
why is there an option to set rest.port to -1? I think it should be mandatory
to set a valid port for rest.port, and the Flink JobManager should not come
up if a valid port is not set. Or else the
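For reference, the keys being discussed live in conf/flink-conf.yaml; a
minimal sketch with placeholder values (host and ports here are only
illustrative):

  jobmanager.rpc.address: jobmanager-host  # must point at the actual JobManager
  jobmanager.rpc.port: 6123                # default RPC port
  rest.port: 8081                          # REST/web server port (default 8081)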
Sihua Zhou created FLINK-9619:
-
Summary: Always close the task manager connection when the
container is completed in YarnResourceManager
Key: FLINK-9619
URL: https://issues.apache.org/jira/browse/FLINK-9619
Thanks to Fabian and Timo. I looked at scalar UDFs and found it is very
quick to implement a CASE WHEN UDF for the specific logic I need.
Cheers,
Minglei
> On Jun 19, 2018, at 10:52 PM, Fabian Hueske wrote:
>
> I see, then this case wasn't covered by the fix that we added for Flink
> 1.5.0.
Aaron Langford created FLINK-9618:
-
Summary: NullPointerException in FlinkKinesisProducer when
aws.region is not set and aws.endpoint is set
Key: FLINK-9618
URL: https://issues.apache.org/jira/browse/FLINK-9618
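For context, the failure mode described corresponds to a producer
configuration along these lines (values are illustrative; the constants come
from the Kinesis connector's AWSConfigConstants):

  Properties producerConfig = new Properties();
  // endpoint set, region left unset -- per the report this yields the NPE
  producerConfig.put(AWSConfigConstants.AWS_ENDPOINT, "https://kinesis.example.com");
  // producerConfig.put(AWSConfigConstants.AWS_REGION, "us-east-1"); // omitted
  FlinkKinesisProducer<String> producer =
      new FlinkKinesisProducer<>(new SimpleStringSchema(), producerConfig);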
Hi Amol,
I think you could try (based on your stack overflow code)
org.apache.flink.streaming.api.functions.timestamps.BoundedOutOfOrdernessTimestampExtractor
like this:
DataStream streamSource = env
.addSource(kafkaConsumer)
.setParallelism(4)
.assignTimestampsAndWatermarks(
new
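The snippet above is cut off by the digest; a complete sketch, assuming a
hypothetical MyEvent type that carries its own event timestamp and an
illustrative 10-second out-of-orderness bound (Time is
org.apache.flink.streaming.api.windowing.time.Time):

  DataStream<MyEvent> streamSource = env
      .addSource(kafkaConsumer)
      .setParallelism(4)
      .assignTimestampsAndWatermarks(
          // the bound tells Flink how late an event may arrive relative to
          // the largest timestamp seen so far
          new BoundedOutOfOrdernessTimestampExtractor<MyEvent>(Time.seconds(10)) {
              @Override
              public long extractTimestamp(MyEvent element) {
                  return element.getTimestamp(); // hypothetical getter
              }
          });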
Piotr Nowojski created FLINK-9617:
-
Summary: Provide alias for whole records in Table API
Key: FLINK-9617
URL: https://issues.apache.org/jira/browse/FLINK-9617
Project: Flink
Issue Type: Improvement
I see, then this case wasn't covered by the fix that we added for Flink
1.5.0.
I guess the problem is that the code is needed to evaluate a single field.
Implementing a scalar user-function is not very difficult [1].
However, you need to register it in the TableEnvironment before you can use
it in
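As an illustration, a minimal scalar function standing in for a big CASE
WHEN (class name, field, and branches are made up):

  // hypothetical UDF replacing a large CASE WHEN over an integer field
  public class MyCaseWhen extends ScalarFunction {
      public String eval(Integer code) {
          if (code == null) return "unknown";
          if (code == 1) return "low";
          if (code == 2) return "medium";
          return "high";
      }
  }

  // register it on the TableEnvironment before using it in SQL:
  tableEnv.registerFunction("myCaseWhen", new MyCaseWhen());
  // SELECT myCaseWhen(code) FROM myTable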
Addison Higham created FLINK-9616:
-
Summary: DatadogHttpReporter fails to be created due to missing
shaded dependency
Key: FLINK-9616
URL: https://issues.apache.org/jira/browse/FLINK-9616
Project: Flink
Hi Fabian, absolutely, I am using Flink 1.5.0 for this, with a big CASE WHEN
statement. Is it hard to implement? I am new to the Flink Table API & SQL.
Best, Minglei.
> On Jun 19, 2018, at 10:36 PM, Fabian Hueske wrote:
>
> Hi,
>
> Which version are you using? We fixed a similar issue for Flink 1.5.0.
> If you
Hi,
Which version are you using? We fixed a similar issue for Flink 1.5.0.
If you can't upgrade yet, you can also implement a user-defined function
that evaluates the big CASE WHEN statement.
Best, Fabian
2018-06-19 16:27 GMT+02:00 zhangminglei <18717838...@163.com>:
> Hi, friends.
>
> When I e
Hi, friends.
When I execute a long SQL statement and get the following error, how can I
get a quick fix?
org.apache.flink.api.common.InvalidProgramException: Table program cannot be
compiled. This is a bug. Please file an issue.
at
org.apache.flink.table.codegen.Compiler$class.compile(Compiler.sca
I fixed the problem indicated in your comment and added an extra test for
that.
CI is currently running the tests.
Niels
On Tue, Jun 19, 2018 at 12:19 PM, Ted Yu wrote:
> Interesting enhancement.
>
> I left a minor comment on the PR.
>
> Cheers
>
> On Tue, Jun 19, 2018 at 12:26 AM, Niels Basjes
Dominik Wosiński created FLINK-9615:
---
Summary: Add
Key: FLINK-9615
URL: https://issues.apache.org/jira/browse/FLINK-9615
Project: Flink
Issue Type: Improvement
Reporter: Dominik Wosiński
mingleizhang created FLINK-9614:
---
Summary: Improve the error message for Compiler#compile
Key: FLINK-9614
URL: https://issues.apache.org/jira/browse/FLINK-9614
Project: Flink
Issue Type: Improvement
In 1.5 we reworked job submission to go through the REST API instead
of akka.
I believe the jobmanager rpc port shouldn't be necessary anymore; the
rpc address is still *required* due to some technical implementations;
it may be that you can set this to some arbitrary value however.
As a
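For illustration, a 1.5-style submission that goes through the REST
endpoint (host and jar name are placeholders):

  bin/flink run -m jobmanager-host:8081 ./my-job.jar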
Hi Amol,
I'm not sure whether this is possible, especially when you need to process
the records with multiple parallelism.
IMO, in theory, we can only get an ordered stream when there is a single
Kafka partition and we process it with a single parallelism in Flink. Even in
this case, if you on
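A sketch of the single-partition, single-parallelism setup described above
(topic name, schema, and properties are illustrative):

  // one Kafka partition consumed with parallelism 1 -> arrival order is kept
  DataStream<String> ordered = env
      .addSource(new FlinkKafkaConsumer011<>(
          "single-partition-topic", new SimpleStringSchema(), kafkaProps))
      .setParallelism(1);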
Hello,
I'm using Flink 1.5.0 and the Flink CLI to submit a jar to the Flink cluster.
In Flink 1.4.2, the JobManager rpc address and rpc port were sufficient for
the Flink client to connect to the JobManager and submit the job.
But in Flink 1.5.0 the Flink client additionally requires the rest
Hi,
I have used the Flink streaming API in my application, with Kafka as the
streaming source. My Kafka producer publishes data in ascending order of
time into different Kafka partitions, and the consumer reads data from
these partitions. However, some Kafka partitions may be slow due to some
ope
Interesting enhancement.
I left a minor comment on the PR.
Cheers
On Tue, Jun 19, 2018 at 12:26 AM, Niels Basjes wrote:
> Hi,
>
> Yesterday we ran into problems regarding the distribution of records across
> Kafka where Flink was used as the producer. So we fixed this and realized
> that the c
Sihua Zhou created FLINK-9613:
-
Summary: YARNSessionCapacitySchedulerITCase failed because
YarnTestBase.checkClusterEmpty()
Key: FLINK-9613
URL: https://issues.apache.org/jira/browse/FLINK-9613
Project: Flink
Hi Addison,
thanks for starting the discussion. My gut feeling is that we could solve
both FLINK-9611 and FLINK-9612 by allowing the user to specify a custom
AbstractContainerOverlay implementation. Thus, introducing an
AbstractContainerOverlayFactory instead of the specific
mesos.resourcemanage
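Purely as a sketch of the proposed direction (this is not existing Flink
API; the names just follow the suggestion above):

  // hypothetical factory that would let users plug in their own overlays
  public interface ContainerOverlayFactory {
      ContainerOverlay createOverlay(Configuration flinkConfig);
  }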
Hi,
Yesterday we ran into problems regarding the distribution of records across
Kafka where Flink was used as the producer. So we fixed this and realized
that the code to do this would be useful to others.
I put up a Jira ticket and pull request yesterday and it passes all
automated tests.
Please
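As an illustration of the kind of hook involved, a custom
FlinkKafkaPartitioner (the class and logic here are made up; the actual
change is in the PR):

  // hypothetical round-robin partitioner; the real fix lives in the PR above
  public class RoundRobinPartitioner<T> extends FlinkKafkaPartitioner<T> {
      private int next = -1;

      @Override
      public int partition(T record, byte[] key, byte[] value,
                           String targetTopic, int[] partitions) {
          next = (next + 1) % partitions.length;
          return partitions[next];
      }
  }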
Hi Piotrek,
Thanks for your information. =)
Best Regards,
Tony Wei
2018-06-19 15:15 GMT+08:00 Piotr Nowojski :
> Hi,
>
> Besides the FLIP document describing the network improvements, there is not
> much more, and it is actually pretty up to date.
>
> I will link this document on the wiki with the FLIP proposals
Hi,
Besides the FLIP document describing the network improvements, there is not
much more, and it is actually pretty up to date.
I will link this document on the wiki with the FLIP proposals.
Piotrek
> On 19 Jun 2018, at 06:22, Tony Wei wrote:
>
> Hi,
>
> I read Flink 1.5.0 release announcements[1] recently