Hello Chesnay,
Thank you for your help. After receiving your message I recompiled my
version of Flink completely, and both the NullPointerException listed in
the TODO and the ClassCastException with the join operation went away.
Previously, I had been only recompiling the modules of Flink that had
Hi all,
A Scala program can't directly use createLocalEnvironment with a custom
Configuration object. For example, if I want to start the web server in local
mode with the Flink UI, I would do:
```
// set up execution environment
val conf = new Configuration
conf.setBoolean(ConfigConstants.LOCAL_START_WEBSERVER, true)
```
Thanks. It's worth mentioning in the release notes of 1.1.2 that the file
source is broken. We spent substantial time trying to figure out the root
cause.
On Sep 27, 2016 9:40 PM, "Stephan Ewen" wrote:
> Sorry for the inconvenience. This is a known issue and being fixed for
> Flink 1.1.3 - t
Stephan Ewen created FLINK-4702:
---
Summary: Kafka consumer must commit offsets asynchronously
Key: FLINK-4702
URL: https://issues.apache.org/jira/browse/FLINK-4702
Project: Flink
Issue Type: Bug
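The issue title points at a general pattern worth sketching: if the consumer
commits offsets synchronously inside its processing loop, every commit
round-trip stalls record processing. Below is a minimal, Flink-independent
sketch of moving the commit off the hot path; the class and method names are
illustrative, not Flink's or Kafka's actual API.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;
import java.util.function.LongConsumer;

// Illustrative sketch, not Flink's actual code: the consumer loop hands
// offsets to a single-threaded executor, so the slow commit round-trip
// never blocks record processing.
public class AsyncOffsetCommitter implements AutoCloseable {
    private final ExecutorService committer = Executors.newSingleThreadExecutor();
    private final AtomicLong lastCommitted = new AtomicLong(-1L);

    /** commitFn stands in for the real broker call (e.g. a Kafka commit). */
    public void commitAsync(long offset, LongConsumer commitFn) {
        committer.submit(() -> {
            commitFn.accept(offset);   // potentially slow network I/O
            lastCommitted.set(offset);
        });
    }

    public long lastCommitted() {
        return lastCommitted.get();
    }

    /** Drain pending commits, e.g. before shutdown. */
    @Override
    public void close() {
        committer.shutdown();
        try {
            committer.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```

The single-threaded executor also keeps commits in order, which matters if a
later commit must not be overwritten by an earlier, slower one.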
Sorry for the inconvenience. This is a known issue and is being fixed for
Flink 1.1.3 - the problem is that the streaming file sources were reworked
to continuously monitor the file system, but the watermarks are not handled
correctly.
https://issues.apache.org/jira/browse/FLINK-4329
So far, 2/3 par
Hi Chen,
Please upload your Flink scala library dependencies.
Regards
Sunny.
On Tue, Sep 27, 2016 at 5:56 PM, Chen Bekor wrote:
> Hi,
>
> Since upgrading to flink 1.1.2 from 1.0.2 - I'm experiencing a regression
> in my code. In order to isolate the issue I have written a small flink job
> that
Ted Yu created FLINK-4701:
-
Summary: Unprotected access to cancelables in StreamTask
Key: FLINK-4701
URL: https://issues.apache.org/jira/browse/FLINK-4701
Project: Flink
Issue Type: Bug
R
Hi,
Since upgrading to Flink 1.1.2 from 1.0.2 I'm experiencing a regression in
my code. In order to isolate the issue I have written a small Flink job
that demonstrates it.
The job does some time-based window operations with an input CSV file (in
the example below - count the number of events o
Kostas Kloudas created FLINK-4700:
-
Summary: Harden the TimeProvider test
Key: FLINK-4700
URL: https://issues.apache.org/jira/browse/FLINK-4700
Project: Flink
Issue Type: Bug
Compon
Timo Walther created FLINK-4699:
---
Summary: Convert Kafka TableSource/TableSink tests to unit tests
Key: FLINK-4699
URL: https://issues.apache.org/jira/browse/FLINK-4699
Project: Flink
Issue Typ
Stephan Ewen created FLINK-4698:
---
Summary: Visualize additional checkpoint information
Key: FLINK-4698
URL: https://issues.apache.org/jira/browse/FLINK-4698
Project: Flink
Issue Type: Sub-task
Stephan Ewen created FLINK-4697:
---
Summary: Gather more detailed checkpoint stats in
CheckpointStatsTracker
Key: FLINK-4697
URL: https://issues.apache.org/jira/browse/FLINK-4697
Project: Flink
Hi all,
As the title of this email suggests, I am proposing to remove the methods
deleteProcessingTimeTimer(long time) and deleteEventTimeTimer(long time)
from the WindowOperator.Context. With this change, registered timers that
have nothing to do (e.g. because their state has already been clea
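The proposal can be illustrated with a small, self-contained sketch (the
names below are invented for illustration; this is not the WindowOperator
itself): instead of deleting timers eagerly, let them fire and do nothing
when their state is gone.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

// Illustrative sketch, not Flink's WindowOperator: timers are never
// explicitly deleted; a timer whose window state was already cleared
// simply fires as a no-op.
public class LazyTimerService {
    private final TreeMap<Long, String> timers = new TreeMap<>(); // fire time -> window key
    private final Map<String, Integer> windowState = new HashMap<>();
    public int firingsWithState = 0;

    public void registerTimer(long time, String windowKey) { timers.put(time, windowKey); }
    public void putState(String windowKey, int value) { windowState.put(windowKey, value); }
    public void clearState(String windowKey) { windowState.remove(windowKey); }

    /** Fire all timers at or before the new watermark. */
    public void advanceWatermark(long watermark) {
        while (!timers.isEmpty() && timers.firstKey() <= watermark) {
            String windowKey = timers.pollFirstEntry().getValue();
            if (windowState.containsKey(windowKey)) {
                firingsWithState++; // real window firing would happen here
            }
            // else: the "delete" is implicit - cleared state means nothing to do
        }
    }
}
```

The trade-off is that stale timers still occupy memory until they fire, in
exchange for a simpler API with no delete methods.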
It's nice. Will the present Flink source connectors be pushed to
bahir-flink? I can add a netty source to bahir-flink.
The Maven repository has no bahir-flink artifacts:
https://mvnrepository.com/artifact/org.apache.bahir
-----Original Message-----
From: Greg Hogan [mailto:c...@greghogan.com]
Sent: 2016-09-27 20:58
To: dev@f
Stephan Ewen created FLINK-4696:
---
Summary: Limit the number of Akka Dispatcher Threads in
LocalMiniCluster
Key: FLINK-4696
URL: https://issues.apache.org/jira/browse/FLINK-4696
Project: Flink
Apache Bahir's website only suggests support for additional frameworks, but
there is a Flink repository at
https://github.com/apache/bahir-flink
On Tue, Sep 27, 2016 at 8:38 AM, shijinkui wrote:
> Hey, Stephan Ewen
>
> 1. Bahir's target is Spark. The contributors are rxin, srowen, tdas,
>
Till Rohrmann created FLINK-4695:
Summary: Separate configuration parsing from MetricRegistry
Key: FLINK-4695
URL: https://issues.apache.org/jira/browse/FLINK-4695
Project: Flink
Issue Type:
Hey, Stephan Ewen
1. Bahir's target is Spark. The contributors are rxin, srowen, tdas, mateiz
and so on.
If we want Bahir to be used by Flink, we can suggest that Bahir provide a
streaming connector interface, such as store(), start(), stop(), restart(),
receiving(Any)...
Then the same strea
Hi Team,
I am wondering whether it is possible to add a JDBC connection as a source
or target in Flink using Scala.
Could someone kindly help with this? The DB write/sink code is not working.
If you have any sample code, please share it here.
*Thanks*
*Sunny*
Hi Aljoscha,
My 2 cents on this would be that it is worth maintaining access to the
watermarks. I think having the option to customize this is a strong point of
Flink.
Regarding the solution you proposed based on 2-input timers: "would fire if
the watermark from both inputs advances suffic
Hi Folks,
I'm in the process of implementing
https://issues.apache.org/jira/browse/FLINK-3674 and now
I'm having a bit of a problem with deciding how watermarks should be
treated for operators that have more than one input.
The problem is deciding when to fire event-time timers. For one-input
oper
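For what it's worth, the rule Flink's runtime uses for multi-input operators
is, to my understanding, to take the minimum of the input watermarks: the
operator's event-time clock only advances once every input has advanced, so
timers never fire while some input could still deliver earlier data. A
self-contained sketch of that rule (class and method names are illustrative):

```java
// Sketch of the "minimum of inputs" watermark rule for a two-input
// operator: the combined watermark is the minimum of the per-input
// watermarks, and each per-input watermark only moves forward.
public class TwoInputWatermarkTracker {
    private long input1 = Long.MIN_VALUE;
    private long input2 = Long.MIN_VALUE;

    /** Returns the operator's combined watermark after the update. */
    public long advanceInput1(long watermark) {
        input1 = Math.max(input1, watermark);
        return combined();
    }

    public long advanceInput2(long watermark) {
        input2 = Math.max(input2, watermark);
        return combined();
    }

    public long combined() {
        return Math.min(input1, input2);
    }
}
```

Event-time timers would then fire whenever combined() passes their timestamp,
exactly as in the one-input case.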
Thanks Stephan for the prompt response!
Glad to know it's targeted for Flink 2.0. Is there a JIRA tracking this?
I couldn't find one. :-)
Thanks!
Liwei
On Tue, Sep 27, 2016 at 4:47 PM, Stephan Ewen wrote:
> Hi!
>
> Yes, there are definitely plans and desires to do that, definitely. Ma
Hi,
the mapping should not be updated in the Flink sink. According to the
documentation, the mapping is a setting on an index that should not be
changed after the index has been created and documents have been added to
that index:
https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping.htm
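One way to follow that advice is to define the mapping once, when the index
is created, and let the Flink sink only write documents. A sketch of such an
index-creation body in the 2.x-era mapping syntax (the type and field names
here are made up for illustration):

```json
{
  "mappings": {
    "my-type": {
      "properties": {
        "event_time": { "type": "date" },
        "count":      { "type": "long" }
      }
    }
  }
}
```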
I think that could be an interesting source. Two quick questions to move
forward:
- To keep the Flink code base from becoming too big (hard to maintain and
test) we started working with Apache Bahir as a project dedicated to
streaming connectors. Would that be a good target for the connector?
Hi!
Yes, there are definitely plans and desires to do that. It may break some
API / dependency structure, so it is probably a candidate for Flink 2.0.
Greetings,
Stephan
On Tue, Sep 27, 2016 at 10:45 AM, Liwei Lin wrote:
> Hi folks,
>
> There are comments like this in
> `StreamExecuti
Till Rohrmann created FLINK-4694:
Summary: Add wait for termination function to RpcEndpoints
Key: FLINK-4694
URL: https://issues.apache.org/jira/browse/FLINK-4694
Project: Flink
Issue Type: S
Hi folks,
There are comments like this in
`StreamExecutionEnvironment.getExecutionEnvironment()`:
// because the streaming project depends on "flink-clients" (and not the
> other way around)
> // we currently need to intercept the data set environment and create a
> dependent stream env.
> // thi
Timo Walther created FLINK-4693:
---
Summary: Add session group-windows for batch tables
Key: FLINK-4693
URL: https://issues.apache.org/jira/browse/FLINK-4693
Project: Flink
Issue Type: Su
Timo Walther created FLINK-4692:
---
Summary: Add tumbling and sliding group-windows for batch tables
Key: FLINK-4692
URL: https://issues.apache.org/jira/browse/FLINK-4692
Project: Flink
Issue Typ
Timo Walther created FLINK-4691:
---
Summary: Add group-windows for streaming tables
Key: FLINK-4691
URL: https://issues.apache.org/jira/browse/FLINK-4691
Project: Flink
Issue Type: Sub-task