I had a small chat with Till about how to help manage contributions to the
Flink ML libraries, which use Flink ML as a dependency.
If this approach is the way to go for Flink connectors,
could we do the same for the Flink ML libraries?
- Henry
On Fri, Dec 11, 2015 at 1:33 AM, Maximilian Michels
Greg Hogan created FLINK-3164:
-
Summary: Spread out scheduling strategy
Key: FLINK-3164
URL: https://issues.apache.org/jira/browse/FLINK-3164
Project: Flink
Issue Type: Improvement
Comp
Greg Hogan created FLINK-3163:
-
Summary: Configure Flink for NUMA systems
Key: FLINK-3163
URL: https://issues.apache.org/jira/browse/FLINK-3163
Project: Flink
Issue Type: Improvement
Co
Greg Hogan created FLINK-3162:
-
Summary: Configure number of TaskManager slots as ratio of
available processors
Key: FLINK-3162
URL: https://issues.apache.org/jira/browse/FLINK-3162
Project: Flink
Ah I see. This explains the issues I had with submitting streaming jobs
that package JDBC drivers. Is there a section in the guide/docs about
classloader considerations with Flink?
On Thu, Dec 10, 2015 at 11:53 PM, Stephan Ewen wrote:
> Flink's classloading is different from Hadoop's.
>
> In Hado
Greg Hogan created FLINK-3161:
-
Summary: Externalize cluster start-up and tear-down when available
Key: FLINK-3161
URL: https://issues.apache.org/jira/browse/FLINK-3161
Project: Flink
Issue Type:
Hi Stephan,
I’m using DataStream.writeAsText(String path, WriteMode writemode) for my
sink. The data is written to disk and there’s plenty of space available.
I looked deeper into the logs and found out that the jobs on 174 and 175
are not actually stuck, but they are moving extremely slowly. This
Greg Hogan created FLINK-3160:
-
Summary: Aggregate operator statistics by TaskManager
Key: FLINK-3160
URL: https://issues.apache.org/jira/browse/FLINK-3160
Project: Flink
Issue Type: Improvement
Hi Ali!
I see, so the tasks 192.168.200.174 and 192.168.200.175 apparently do not
make progress and do not even recognize the end-of-stream point.
I expect that the streams on 192.168.200.174 and 192.168.200.175 are
back-pressured to a stand-still. Since no network is involved, the reason
for the ba
Hi Stephan,
I got a request to share the image with someone and I assume it was you.
You should be able to see it now. This seems to be the main issue I have
at this time. I've tried running the job on the cluster with a parallelism
of 16, 24, 36, and even went up to 48. I see all the parallel pip
Awesome! Thanks a lot!
On 12/11/2015 11:18 AM, Slim Baltagi wrote:
> Hi Matthias
>
> I already shared your blog at LinkedIn forums covering 255,758 members!
>
> Big Data and Analytics 160,316
> Hadoop Users 74,333
> Big Data, Low Latency 19,949
> Apache Storm
Great! :)
On Fri, Dec 11, 2015 at 11:18 AM, Slim Baltagi wrote:
> Hi Matthias
>
> I already shared your blog at LinkedIn forums covering 255,758 members!
>
> Big Data and Analytics 160,316
> Hadoop Users 74,333
> Big Data, Low Latency 19,949
> Apache Storm
Hi Matthias
I already shared your blog at LinkedIn forums covering 255,758 members!
Big Data and Analytics 160,316
Hadoop Users 74,333
Big Data, Low Latency 19,949
Apache Storm 955
Apache Flink 205
Slim
On Dec 11, 2015, at 4:1
Just published it. Spread the word :)
Thanks for all your valuable feedback!
On 12/10/2015 01:17 PM, Matthias J. Sax wrote:
> Thanks for all your feedback! I updated the PR.
>
> I would like to publish the post today. Please let me know if you have
> any more comments on the draft.
>
> -Matthia
We should have release branches which are in sync with the release
branches in the main repository. Connectors should be compatible
across minor releases. The versioning could be of the form
"flinkversion-connectorversion", e.g. 0.10-connector1.
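As a sketch of how the proposed "flinkversion-connectorversion" scheme could
look in a Maven dependency (the groupId and artifactId below are illustrative
assumptions, not taken from this thread):

```xml
<!-- Hypothetical dependency using the "flinkversion-connectorversion"
     scheme proposed above; coordinates are illustrative only. -->
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-connector-kafka</artifactId>
  <version>0.10-connector1</version>
</dependency>
```

The idea being that the prefix pins the compatible Flink release branch,
while the suffix lets the connector evolve independently within it.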
> The pluggable architecture is great! (why don't we
+1 from my side as well. Good idea.
On Thu, Dec 10, 2015 at 11:00 PM, jun aoki wrote:
> The pluggable architecture is great! (why don't we call it Flink plugins?
> my 2 cents)
> It will be nice if we come up with an idea of what the directory structure
> should look like before we start dumping connect