Boris Osipov created FLINK-5032:
---
Summary: CsvOutputFormatTest fails on Windows OS
Key: FLINK-5032
URL: https://issues.apache.org/jira/browse/FLINK-5032
Project: Flink
Issue Type: Bug
Fabian Hueske created FLINK-5031:
Summary: Consecutive DataStream.split() ignored
Key: FLINK-5031
URL: https://issues.apache.org/jira/browse/FLINK-5031
Project: Flink
Issue Type: Bug
Opened a PR for this issue:
https://github.com/apache/flink/pull/2766
To explain, the PR refactors the NettyClient/NettyServer to be reusable for the
ever-growing set of endpoints within Flink.
The procedure for defining an endpoint would now be:
1. Create a configuration subclass of `NettyCon
Greetings,
I have Flink, Zookeeper and Kafka up and running locally. I tested Flink
with the Hamlet.txt example and Flink performed fine. However, when attempting
to have Flink write the cleansed TaxiRide data stream to Kafka, I am seeing a
connection error to the JobManager in the attached log.
Any ideas would
Eron Wright created FLINK-5030:
---
Summary: Support hostname verification
Key: FLINK-5030
URL: https://issues.apache.org/jira/browse/FLINK-5030
Project: Flink
Issue Type: Sub-task
Re
Eron Wright created FLINK-5029:
---
Summary: Implement KvState SSL
Key: FLINK-5029
URL: https://issues.apache.org/jira/browse/FLINK-5029
Project: Flink
Issue Type: Sub-task
Reporter:
Hi Jaromir,
You can make use of a custom trigger and set the allowed lateness to the max
value.
I have kept the custom trigger code (EventTimeTrigger) the same as in Flink
1.0.3; doing this, the late elements will not be discarded and they will be
assigned to single windows, and you can then decide what you want to d
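The behavior described above can be modeled in plain Java without the Flink dependency. This is a simplified sketch, not the real Flink Trigger API: the class and method names are illustrative assumptions, and it only captures the two decisions involved, namely accepting elements until the window fully expires and firing once the watermark passes the window end, as EventTimeTrigger does.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified model of an EventTimeTrigger-style window with a very large
// allowed lateness, so late elements are kept instead of dropped.
// All names here are illustrative, not actual Flink API.
public class LateTolerantWindow {
    private final long windowEnd;       // exclusive end of the event-time window
    private final long expiry;          // watermark at which the window is dropped
    private final List<Long> elements = new ArrayList<>();

    public LateTolerantWindow(long windowEnd, long allowedLateness) {
        this.windowEnd = windowEnd;
        // saturate to avoid overflow when allowedLateness is Long.MAX_VALUE
        this.expiry = (allowedLateness > Long.MAX_VALUE - windowEnd)
                ? Long.MAX_VALUE : windowEnd + allowedLateness;
    }

    /** Returns true if the element was accepted (on time or within lateness). */
    public boolean onElement(long timestamp, long currentWatermark) {
        if (currentWatermark >= expiry) {
            return false;               // window fully expired: element is dropped
        }
        elements.add(timestamp);        // on-time and late elements alike are kept
        return true;
    }

    /** Fire once the watermark passes the window's max timestamp (end - 1). */
    public boolean shouldFire(long currentWatermark) {
        return currentWatermark >= windowEnd - 1;
    }

    public int size() { return elements.size(); }
}
```

With allowed lateness set to Long.MAX_VALUE, the expiry check never triggers, which is the "late elements will not be discarded" effect described above.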
You're right if you want to guarantee a deterministic computation for an
arbitrary allowed lateness. In the general case, you would never be able to
calculate the final result of a window in a finite time, because there
might always be another element which arrives later. However, for most
practica
I didn't measure size; just with vs. without Redis made a day-and-night
difference in performance, so I replaced it with Java ConcurrentHashMap
objects. Still in progress as far as benchmarking goes. Have issues with
tuning Flink for very high loads...
You can see some of my communications with Aljoscha...Chee
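The swap described above (a remote Redis store for an in-JVM ConcurrentHashMap) can be sketched behind a tiny key-value interface. The interface and class names are hypothetical; the point is only that the in-JVM version replaces a network round trip per access with a local map lookup:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Hypothetical key-value state interface: a Redis-backed implementation
// would pay a network round trip per call; this in-JVM version is a plain
// ConcurrentHashMap lookup.
interface KvState<K, V> {
    V get(K key);
    void put(K key, V value);
}

final class HashMapKvState<K, V> implements KvState<K, V> {
    private final ConcurrentMap<K, V> map = new ConcurrentHashMap<>();
    @Override public V get(K key) { return map.get(key); }
    @Override public void put(K key, V value) { map.put(key, value); }
}
```

The trade-off is the usual one: the heap-backed map is fast but bounded by JVM memory and lost on failure unless checkpointed elsewhere.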
Hi Till, thank you for your answer.
I am afraid defining an allowed lateness won't help. It will just shift the
problem by a constant time. If we agree that an element can arrive an
arbitrary time after the watermark (depending on the network latency), it may
be assigned to the window or may not be if it comes be
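The argument above can be made concrete with a toy check: for any fixed allowed lateness L, an element arriving after the watermark has passed the window end by more than L is still dropped; a larger L only moves the cutoff, it never removes it. This is an illustrative sketch, not Flink code:

```java
// Toy illustration: with window end E and allowed lateness L, an element is
// accepted only while the watermark is still below E + L. Increasing L
// shifts this cutoff by a constant; it does not eliminate it.
public class LatenessCutoff {
    /** true if an element arriving at this watermark is still accepted. */
    public static boolean accepted(long windowEnd, long lateness, long watermark) {
        return watermark < windowEnd + lateness;
    }
}
```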
Stephan Ewen created FLINK-5028:
---
Summary: Stream Tasks must not go through clean shutdown logic on
cancellation
Key: FLINK-5028
URL: https://issues.apache.org/jira/browse/FLINK-5028
Project: Flink
Till Rohrmann created FLINK-5027:
Summary: FileSource finishes successfully with a wrong path
Key: FLINK-5027
URL: https://issues.apache.org/jira/browse/FLINK-5027
Project: Flink
Issue Type:
Aljoscha Krettek created FLINK-5026:
---
Summary: Rename TimelyFlatMap to Process
Key: FLINK-5026
URL: https://issues.apache.org/jira/browse/FLINK-5026
Project: Flink
Issue Type: Improvement
Thanks, how big was your state (GBs)?
Can you share your benchmark/s?
Best,
Ovidiu
-----Original Message-----
From: amir bahmanyari [mailto:amirto...@yahoo.com.INVALID]
Sent: Tuesday, October 25, 2016 7:24 PM
To: dev@flink.apache.org
Subject: Re: [FLINK-3035] Redis as State Backend
FYI. I was us
Thank you, I will check this fix in my environment.
Best,
Ovidiu
-----Original Message-----
From: Aljoscha Krettek [mailto:aljos...@apache.org]
Sent: Friday, October 21, 2016 5:47 PM
To: dev@flink.apache.org
Subject: Re: TopSpeedWindowing - in error: Could not forward element to next
operator
Great, thanks!
Best,
Ovidiu
-----Original Message-----
From: Aljoscha Krettek [mailto:aljos...@apache.org]
Sent: Monday, October 24, 2016 3:11 PM
To: dev@flink.apache.org
Subject: Re: [FLINK-3035] Redis as State Backend
Hi,
regarding RocksDB, yes this is possible because RocksDB is essentially
Hi Jaromir,
deterministic processing with late elements is indeed more difficult than
without them. What you have to do is send updates to your downstream
operators whenever you see late elements. This can either be an incremental
update or a retraction followed by the corrected value. It basicall
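The retraction variant described above can be sketched in plain Java. This is a minimal model, not Flink API, and all names are illustrative: when a late element changes a window's result, the operator first emits a retraction of the old value so downstream operators can undo it, then emits the corrected value.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the retraction pattern: a late element first retracts the
// previously emitted count, then emits the corrected count downstream.
public class RetractingCounter {
    private long count = 0;
    private final List<String> downstream = new ArrayList<>();

    public void onElement(boolean late) {
        if (late && count > 0) {
            downstream.add("retract " + count);   // undo the stale result
        }
        count++;
        downstream.add("emit " + count);          // corrected (or first) result
    }

    public List<String> output() { return downstream; }
}
```

An incremental update would instead emit only the delta; retraction is simpler for downstream consumers that can only replace whole values.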
Niels Basjes created FLINK-5025:
---
Summary: Job fails because of Optimizer bug
Key: FLINK-5025
URL: https://issues.apache.org/jira/browse/FLINK-5025
Project: Flink
Issue Type: Bug
Affects Ve
Hi Artur,
I'm not sure if Flink is the best tool to write data from Android Apps to
Kafka.
Flink would be a good choice to process such data by reading it from Kafka.
I would reach out to the Kafka user list and seek advice there.
Best, Fabian
Btw. The Apache dev@ mailing lists are meant to