[jira] [Assigned] (FLINK-4407) Implement the trigger DSL

2016-08-17 Thread Kostas Kloudas (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kostas Kloudas reassigned FLINK-4407:
-------------------------------------

Assignee: Kostas Kloudas

> Implement the trigger DSL
> -------------------------
>
> Key: FLINK-4407
> URL: https://issues.apache.org/jira/browse/FLINK-4407
> Project: Flink
>  Issue Type: Bug
>Reporter: Kostas Kloudas
>Assignee: Kostas Kloudas
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-4407) Implement the trigger DSL

2016-08-17 Thread Kostas Kloudas (JIRA)
Kostas Kloudas created FLINK-4407:
----------------------------------

 Summary: Implement the trigger DSL
 Key: FLINK-4407
 URL: https://issues.apache.org/jira/browse/FLINK-4407
 Project: Flink
  Issue Type: Bug
Reporter: Kostas Kloudas






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (FLINK-4407) Implement the trigger DSL

2016-08-17 Thread Kostas Kloudas (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kostas Kloudas updated FLINK-4407:
----------------------------------
Description: 
This issue refers to the implementation of the trigger DSL.
The specification of the DSL is under discussion here:

https://cwiki.apache.org/confluence/display/FLINK/FLIP-9%3A+Trigger+DSL


> Implement the trigger DSL
> -------------------------
>
> Key: FLINK-4407
> URL: https://issues.apache.org/jira/browse/FLINK-4407
> Project: Flink
>  Issue Type: Bug
>Reporter: Kostas Kloudas
>Assignee: Kostas Kloudas
>
> This issue refers to the implementation of the trigger DSL.
> The specification of the DSL is under discussion here:
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-9%3A+Trigger+DSL



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink issue #2342: FLINK-4253 - Rename "recovery.mode" config key to "high-a...

2016-08-17 Thread ramkrish86
Github user ramkrish86 commented on the issue:

https://github.com/apache/flink/pull/2342
  
@uce 
Updated the PR with suitable doc updates and also renamed the enum
`RecoveryMode` to `HighAvailabilityMode`.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-4253) Rename "recovery.mode" config key to "high-availability"

2016-08-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15424007#comment-15424007
 ] 

ASF GitHub Bot commented on FLINK-4253:
---------------------------------------

Github user ramkrish86 commented on the issue:

https://github.com/apache/flink/pull/2342
  
@uce 
Updated the PR with suitable doc updates and also renamed the enum
`RecoveryMode` to `HighAvailabilityMode`.


> Rename "recovery.mode" config key to "high-availability"
> --------------------------------------------------------
>
> Key: FLINK-4253
> URL: https://issues.apache.org/jira/browse/FLINK-4253
> Project: Flink
>  Issue Type: Improvement
>Reporter: Ufuk Celebi
>Assignee: ramkrishna.s.vasudevan
>
> Currently, HA is configured via the following configuration keys:
> {code}
> recovery.mode: STANDALONE // No high availability (HA)
> recovery.mode: ZOOKEEPER // HA
> {code}
> This could be more straightforward by simply renaming the key to 
> {{high-availability}}. Furthermore, the term {{STANDALONE}} is overloaded: we 
> already have a standalone cluster mode.
> {code}
> high-availability: NONE // No HA
> high-availability: ZOOKEEPER // HA via ZooKeeper
> {code}
> The {{recovery.mode}} configuration keys would have to be deprecated before 
> completely removing them.
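
For illustration only, a minimal sketch of how code could resolve the renamed key while still honoring the deprecated one during the transition. The key names come from the description above; the resolver class itself is hypothetical:

{code:java}
import org.apache.flink.configuration.Configuration;

public final class HaModeResolver {

    /**
     * Resolve the HA mode, preferring the new "high-availability" key and
     * falling back to the deprecated "recovery.mode" key. "STANDALONE" maps
     * to "NONE", as proposed in the description above.
     */
    public static String resolveHaMode(Configuration config) {
        String mode = config.getString("high-availability", null);
        if (mode != null) {
            return mode;
        }
        String deprecated = config.getString("recovery.mode", "STANDALONE");
        return "STANDALONE".equalsIgnoreCase(deprecated) ? "NONE" : deprecated;
    }
}
{code}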



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (FLINK-4407) Implement the trigger DSL

2016-08-17 Thread Kostas Kloudas (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kostas Kloudas updated FLINK-4407:
----------------------------------
Description: 
This issue refers to the implementation of the trigger DSL.
The specification of the DSL has an open FLIP here:

https://cwiki.apache.org/confluence/display/FLINK/FLIP-9%3A+Trigger+DSL

It is currently under discussion on the dev@ mailing list here:

http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-FLIP-9-Trigger-DSL-td13065.html

  was:
This issue refers to the implementation of the trigger DSL.
The specification of the DSL is under discussion here:

https://cwiki.apache.org/confluence/display/FLINK/FLIP-9%3A+Trigger+DSL



> Implement the trigger DSL
> -------------------------
>
> Key: FLINK-4407
> URL: https://issues.apache.org/jira/browse/FLINK-4407
> Project: Flink
>  Issue Type: Bug
>Reporter: Kostas Kloudas
>Assignee: Kostas Kloudas
>
> This issue refers to the implementation of the trigger DSL.
> The specification of the DSL has an open FLIP here:
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-9%3A+Trigger+DSL
> It is currently under discussion on the dev@ mailing list here:
> http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-FLIP-9-Trigger-DSL-td13065.html
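
As a rough illustration of the kind of composition such a DSL would make first-class, here is how trigger nesting is expressed today with the triggers that already ship with Flink (real classes; the DSL surface itself is whatever FLIP-9 ends up specifying):

{code:java}
import org.apache.flink.streaming.api.windowing.triggers.CountTrigger;
import org.apache.flink.streaming.api.windowing.triggers.PurgingTrigger;
import org.apache.flink.streaming.api.windowing.triggers.Trigger;
import org.apache.flink.streaming.api.windowing.windows.TimeWindow;

public class TriggerCompositionExample {
    public static void main(String[] args) {
        // Fire every 100 elements and purge the window contents on firing.
        // Today this nesting is done by hand; the DSL aims to generalize
        // such combinations (early/late firings, conjunctions, etc.).
        Trigger<Object, TimeWindow> composed =
            PurgingTrigger.of(CountTrigger.<TimeWindow>of(100));
        System.out.println(composed);
    }
}
{code}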



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (FLINK-4355) Implement TaskManager side of registration at ResourceManager

2016-08-17 Thread Zhijiang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhijiang Wang reassigned FLINK-4355:
------------------------------------

Assignee: Stephan Ewen  (was: Zhijiang Wang)

> Implement TaskManager side of registration at ResourceManager
> -------------------------------------------------------------
>
> Key: FLINK-4355
> URL: https://issues.apache.org/jira/browse/FLINK-4355
> Project: Flink
>  Issue Type: Sub-task
>  Components: Cluster Management
>Reporter: Zhijiang Wang
>Assignee: Stephan Ewen
>
> If the {{TaskManager}} is unregistered, it should try to register with the 
> {{ResourceManager}} leader. The registration messages are fenced via the 
> {{RmLeaderID}}.
> The ResourceManager may acknowledge the registration (or respond that the 
> TaskManager is AlreadyRegistered), or it may refuse the registration.
> Upon registration refusal, the TaskManager may have to kill itself.
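
For orientation, a minimal sketch of the message types this protocol implies. The names follow the description above but are otherwise assumptions, not Flink's actual classes:

{code:java}
import java.io.Serializable;
import java.util.UUID;

/** Hypothetical registration messages, fenced by the RM leader id. */
public final class TaskManagerRegistrationMessages {

    public static class RegisterTaskManager implements Serializable {
        public final UUID rmLeaderId;          // fencing token from the description
        public final String taskManagerAddress;
        public RegisterTaskManager(UUID rmLeaderId, String taskManagerAddress) {
            this.rmLeaderId = rmLeaderId;
            this.taskManagerAddress = taskManagerAddress;
        }
    }

    /** Positive responses: the TM starts (or keeps) reporting to this RM. */
    public static class RegistrationAcknowledged implements Serializable {}
    public static class AlreadyRegistered implements Serializable {}

    /** On refusal the TaskManager may have to shut itself down. */
    public static class RegistrationRefused implements Serializable {
        public final String reason;
        public RegistrationRefused(String reason) { this.reason = reason; }
    }
}
{code}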



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request #2377: [Flink-4400][cluster management]Leadership Electio...

2016-08-17 Thread shixiaogang
GitHub user shixiaogang opened a pull request:

https://github.com/apache/flink/pull/2377

[Flink-4400][cluster management]Leadership Election among JobManagers

- Implement the LeaderContender interface in JobMaster (a sketch follows below).
- Create and start the leader election service during JobMaster's bootstrap.
- Reformat the JobMaster code in the style provided by Kete.Young.
- Add an argument of type FlinkConfiguration to JobMaster's constructor.
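
For orientation, a minimal sketch of what implementing the first two bullets entails. The LeaderContender and LeaderElectionService method names match Flink's current interfaces; the surrounding JobMaster details are assumptions:

{code:java}
import java.util.UUID;

import org.apache.flink.runtime.leaderelection.LeaderContender;
import org.apache.flink.runtime.leaderelection.LeaderElectionService;

/** Sketch of a JobMaster-like class taking part in leader election. */
public class JobMasterLeaderSketch implements LeaderContender {

    private final LeaderElectionService leaderElectionService;
    private final String address;

    public JobMasterLeaderSketch(LeaderElectionService les, String address) {
        this.leaderElectionService = les;
        this.address = address;
    }

    public void start() throws Exception {
        // register as a contender during bootstrap
        leaderElectionService.start(this);
    }

    @Override
    public void grantLeadership(UUID leaderSessionID) {
        // confirm the session id, then begin executing the job
        leaderElectionService.confirmLeaderSessionID(leaderSessionID);
    }

    @Override
    public void revokeLeadership() {
        // suspend execution until leadership is granted again
    }

    @Override
    public String getAddress() {
        return address;
    }

    @Override
    public void handleError(Exception exception) {
        // fatal error in the election service
        throw new RuntimeException(exception);
    }
}
{code}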



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/alibaba/flink jira-4400

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/2377.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2377


commit f9d5214ede8666824443fb68933893e9f98f8297
Author: xiaogang.sxg 
Date:   2016-08-17T05:46:00Z

Implement leader contention prototypes in JobMaster

commit ed5cf7556991f3ed9a8a09d8b34c4fe76c3f8a96
Author: xiaogang.sxg 
Date:   2016-08-17T07:35:23Z

reformat the code and remove the test code




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-4363) Implement TaskManager basic startup of all components in java

2016-08-17 Thread Zhijiang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15424056#comment-15424056
 ] 

Zhijiang Wang commented on FLINK-4363:
--------------------------------------

For YARN and standalone modes, the entry methods for the TaskManager are 
'selectNetworkInterfaceAndRunTaskManager' and 
'startTaskManagerComponentsAndActor', respectively.
Should the new TaskExecutor follow the same methods as the TaskManager, or 
should it define a new, uniform method for both modes?
Another main difference is that the actor-related initialization is replaced by 
the RpcService.
Should this subtask be started right now, or should it wait for the other 
related subtasks?

> Implement TaskManager basic startup of all components in java
> -------------------------------------------------------------
>
> Key: FLINK-4363
> URL: https://issues.apache.org/jira/browse/FLINK-4363
> Project: Flink
>  Issue Type: Sub-task
>  Components: Cluster Management
>Reporter: Zhijiang Wang
>
> Similar to the current {{TaskManager}}, but implement the initialization and 
> startup of all components in Java instead of Scala.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-3298) Streaming connector for ActiveMQ

2016-08-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15424065#comment-15424065
 ] 

ASF GitHub Bot commented on FLINK-3298:
---------------------------------------

Github user rmetzger commented on the issue:

https://github.com/apache/flink/pull/2314
  
>> Were there any dependency issues / conflicts with the user job jar?
>
> Sorry, what do you mean by the "job jar"?

When adding flink-connector-activemq as a dependency to a Maven 
project and then building a jar (= job jar) for it, the ActiveMQ connector and 
all its dependencies are contained in that jar.
Sometimes this leads to conflicts with Flink's own dependencies. Therefore, I 
like testing connectors that way before merging them.

> Streaming connector for ActiveMQ
> --------------------------------
>
> Key: FLINK-3298
> URL: https://issues.apache.org/jira/browse/FLINK-3298
> Project: Flink
>  Issue Type: New Feature
>  Components: Streaming Connectors
>Reporter: Mohit Sethi
>Assignee: Ivan Mushketyk
>Priority: Minor
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink issue #2314: [FLINK-3298] Implement ActiveMQ streaming connector

2016-08-17 Thread rmetzger
Github user rmetzger commented on the issue:

https://github.com/apache/flink/pull/2314
  
>> Were there any dependency issues / conflicts with the user job jar?
>
> Sorry, what do you mean by the "job jar"?

When adding flink-connector-activemq as a dependency to a Maven 
project and then building a jar (= job jar) for it, the ActiveMQ connector and 
all its dependencies are contained in that jar.
Sometimes this leads to conflicts with Flink's own dependencies. Therefore, I 
like testing connectors that way before merging them.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-1984) Integrate Flink with Apache Mesos

2016-08-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15424083#comment-15424083
 ] 

ASF GitHub Bot commented on FLINK-1984:
---------------------------------------

Github user tillrohrmann commented on a diff in the pull request:

https://github.com/apache/flink/pull/2315#discussion_r75080549
  
--- Diff: 
flink-mesos/src/main/java/org/apache/flink/mesos/runtime/clusterframework/MesosFlinkResourceManager.java
 ---
@@ -0,0 +1,755 @@
[quoted new-file context (Apache license header, imports, and the opening fields of MesosFlinkResourceManager) omitted here; the same context is quoted in full in the next message]

[GitHub] flink pull request #2315: [FLINK-1984] Integrate Flink with Apache Mesos (T1...

2016-08-17 Thread tillrohrmann
Github user tillrohrmann commented on a diff in the pull request:

https://github.com/apache/flink/pull/2315#discussion_r75080549
  
--- Diff: 
flink-mesos/src/main/java/org/apache/flink/mesos/runtime/clusterframework/MesosFlinkResourceManager.java
 ---
@@ -0,0 +1,755 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.mesos.runtime.clusterframework;
+
+import akka.actor.ActorRef;
+import akka.actor.Props;
+import com.netflix.fenzo.TaskRequest;
+import com.netflix.fenzo.TaskScheduler;
+import com.netflix.fenzo.VirtualMachineLease;
+import com.netflix.fenzo.functions.Action1;
+import org.apache.flink.api.java.tuple.Tuple2;
+import org.apache.flink.configuration.ConfigConstants;
+import org.apache.flink.configuration.Configuration;
+import 
org.apache.flink.mesos.runtime.clusterframework.store.MesosWorkerStore;
+import org.apache.flink.mesos.scheduler.ConnectionMonitor;
+import org.apache.flink.mesos.scheduler.LaunchableTask;
+import org.apache.flink.mesos.scheduler.LaunchCoordinator;
+import org.apache.flink.mesos.scheduler.ReconciliationCoordinator;
+import org.apache.flink.mesos.scheduler.SchedulerProxy;
+import org.apache.flink.mesos.scheduler.TaskMonitor;
+import org.apache.flink.mesos.scheduler.TaskSchedulerBuilder;
+import org.apache.flink.mesos.scheduler.Tasks;
+import org.apache.flink.mesos.scheduler.messages.AcceptOffers;
+import org.apache.flink.mesos.scheduler.messages.Disconnected;
+import org.apache.flink.mesos.scheduler.messages.Error;
+import org.apache.flink.mesos.scheduler.messages.OfferRescinded;
+import org.apache.flink.mesos.scheduler.messages.ReRegistered;
+import org.apache.flink.mesos.scheduler.messages.Registered;
+import org.apache.flink.mesos.scheduler.messages.ResourceOffers;
+import org.apache.flink.mesos.scheduler.messages.StatusUpdate;
+import org.apache.flink.mesos.util.MesosConfiguration;
+import org.apache.flink.runtime.clusterframework.ApplicationStatus;
+import org.apache.flink.runtime.clusterframework.FlinkResourceManager;
+import 
org.apache.flink.runtime.clusterframework.messages.FatalErrorOccurred;
+import org.apache.flink.runtime.clusterframework.messages.StopCluster;
+import org.apache.flink.runtime.clusterframework.types.ResourceID;
+import org.apache.flink.runtime.leaderretrieval.LeaderRetrievalService;
+import org.apache.mesos.Protos;
+import org.apache.mesos.Protos.FrameworkInfo;
+import org.apache.mesos.SchedulerDriver;
+import org.slf4j.Logger;
+import scala.Option;
+
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import static java.util.Objects.requireNonNull;
+
+/**
+ * Flink Resource Manager for Apache Mesos.
+ */
+public class MesosFlinkResourceManager extends FlinkResourceManager<RegisteredMesosWorkerNode> {
+
+   /** The Mesos configuration (master and framework info) */
+   private final MesosConfiguration mesosConfig;
+
+   /** The TaskManager container parameters (like container memory size) */
+   private final MesosTaskManagerParameters taskManagerParameters;
+
+   /** Context information used to start a TaskManager Java process */
+   private final Protos.TaskInfo.Builder taskManagerLaunchContext;
+
+   /** Number of failed Mesos tasks before stopping the application. -1 
means infinite. */
+   private final int maxFailedTasks;
+
+   /** Callback handler for the asynchronous Mesos scheduler */
+   private SchedulerProxy schedulerCallbackHandler;
+
+   /** Mesos scheduler driver */
+   private SchedulerDriver schedulerDriver;
+
+   private ActorRef connectionMonitor;
+
+   private ActorRef taskRouter;
+
+   private ActorRef launchCoordinator;
+
+   private ActorRef reconciliationCoordinator;
+
+   private MesosWorkerStore workerStore;
+
+   final Map<ResourceID, MesosWorkerStore.Worker> workersInNew;
+   final Map workersInLau

[GitHub] flink pull request #2315: [FLINK-1984] Integrate Flink with Apache Mesos (T1...

2016-08-17 Thread tillrohrmann
Github user tillrohrmann commented on a diff in the pull request:

https://github.com/apache/flink/pull/2315#discussion_r75080685
  
--- Diff: 
flink-mesos/src/main/java/org/apache/flink/mesos/runtime/clusterframework/MesosFlinkResourceManager.java
 ---
@@ -0,0 +1,755 @@
[quoted new-file context identical to the previous comment omitted]

[jira] [Commented] (FLINK-1984) Integrate Flink with Apache Mesos

2016-08-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15424085#comment-15424085
 ] 

ASF GitHub Bot commented on FLINK-1984:
---------------------------------------

Github user tillrohrmann commented on a diff in the pull request:

https://github.com/apache/flink/pull/2315#discussion_r75080685
  
--- Diff: 
flink-mesos/src/main/java/org/apache/flink/mesos/runtime/clusterframework/MesosFlinkResourceManager.java
 ---
@@ -0,0 +1,755 @@
[quoted new-file context identical to the previous comments omitted]

[GitHub] flink pull request #2315: [FLINK-1984] Integrate Flink with Apache Mesos (T1...

2016-08-17 Thread tillrohrmann
Github user tillrohrmann commented on a diff in the pull request:

https://github.com/apache/flink/pull/2315#discussion_r75080823
  
--- Diff: flink-dist/pom.xml ---
@@ -113,8 +113,13 @@ under the License.
 			<artifactId>flink-metrics-jmx</artifactId>
 			<version>${project.version}</version>
 		</dependency>
+
+		<dependency>
+			<groupId>org.apache.flink</groupId>
+			<artifactId>flink-mesos_2.10</artifactId>
+			<version>${project.version}</version>
+		</dependency>
--- End diff --

We do the same for YARN. See the `include-yarn` profile.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-1984) Integrate Flink with Apache Mesos

2016-08-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15424087#comment-15424087
 ] 

ASF GitHub Bot commented on FLINK-1984:
---------------------------------------

Github user tillrohrmann commented on a diff in the pull request:

https://github.com/apache/flink/pull/2315#discussion_r75080823
  
--- Diff: flink-dist/pom.xml ---
@@ -113,8 +113,13 @@ under the License.
 			<artifactId>flink-metrics-jmx</artifactId>
 			<version>${project.version}</version>
 		</dependency>
+
+		<dependency>
+			<groupId>org.apache.flink</groupId>
+			<artifactId>flink-mesos_2.10</artifactId>
+			<version>${project.version}</version>
+		</dependency>
--- End diff --

We do the same for YARN. See the `include-yarn` profile.


> Integrate Flink with Apache Mesos
> ---------------------------------
>
> Key: FLINK-1984
> URL: https://issues.apache.org/jira/browse/FLINK-1984
> Project: Flink
>  Issue Type: New Feature
>  Components: Cluster Management
>Reporter: Robert Metzger
>Assignee: Eron Wright 
>Priority: Minor
> Attachments: 251.patch
>
>
> There are some users asking for an integration of Flink into Mesos.
> -There also is a pending pull request for adding Mesos support for Flink-: 
> https://github.com/apache/flink/pull/251
> Update (May '16):  a new effort is now underway, building on the recent 
> ResourceManager work.
> Design document:  ([google 
> doc|https://docs.google.com/document/d/1WItafBmGbjlaBbP8Of5PAFOH9GUJQxf5S4hjEuPchuU/edit?usp=sharing])



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-4408) JobSubmission

2016-08-17 Thread Xiaogang Shi (JIRA)
Xiaogang Shi created FLINK-4408:
--------------------------------

 Summary: JobSubmission
 Key: FLINK-4408
 URL: https://issues.apache.org/jira/browse/FLINK-4408
 Project: Flink
  Issue Type: Sub-task
  Components: Cluster Management
Reporter: Xiaogang Shi
Assignee: Xiaogang Shi


Once granted the leadership, the JM will start to execute the job.

Most of the code remains the same, except that:

(1) In the old implementation, where the JM manages the execution of multiple 
jobs, the JM has to load all submitted JobGraphs from the SubmittedJobGraphStore 
and recover them. Now that the components creating the JM are responsible for 
the recovery of JobGraphs, the JM is created with the submitted/recovered 
JobGraph and no longer needs to load it.

(2) The JM should not rely on Akka to listen for updates of the JobStatus and 
Executions.
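
A toy sketch of the ownership change described in (1): whoever creates the JM recovers the JobGraph and hands it over. All names below are illustrative assumptions, not the actual implementation:

{code:java}
import org.apache.flink.runtime.jobgraph.JobGraph;

/** Sketch only: the JM receives an already-recovered JobGraph. */
class JobMasterSketch {

    private final JobGraph jobGraph;

    JobMasterSketch(JobGraph jobGraph) {
        // no SubmittedJobGraphStore lookup in here anymore --
        // whoever created this JM already recovered the graph
        this.jobGraph = jobGraph;
    }

    /** Called once leadership is granted; start executing the given job. */
    void onGrantLeadership() {
        startExecution(jobGraph);
    }

    private void startExecution(JobGraph graph) {
        // build the ExecutionGraph and schedule it (omitted)
    }
}
{code}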



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (FLINK-4408) Submit Job and setup ExecutionGraph

2016-08-17 Thread Xiaogang Shi (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaogang Shi updated FLINK-4408:
--------------------------------
Summary: Submit Job and setup ExecutionGraph  (was: JobSubmission)

> Submit Job and setup ExecutionGraph
> -----------------------------------
>
> Key: FLINK-4408
> URL: https://issues.apache.org/jira/browse/FLINK-4408
> Project: Flink
>  Issue Type: Sub-task
>  Components: Cluster Management
>Reporter: Xiaogang Shi
>Assignee: Xiaogang Shi
>
> Once granted the leadership, the JM will start to execute the job.
> Most of the code remains the same, except that:
> (1) In the old implementation, where the JM manages the execution of multiple 
> jobs, the JM has to load all submitted JobGraphs from the SubmittedJobGraphStore 
> and recover them. Now that the components creating the JM are responsible for 
> the recovery of JobGraphs, the JM is created with the submitted/recovered 
> JobGraph and no longer needs to load it.
> (2) The JM should not rely on Akka to listen for updates of the JobStatus and 
> Executions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request #2314: [FLINK-3298] Implement ActiveMQ streaming connecto...

2016-08-17 Thread rmetzger
Github user rmetzger commented on a diff in the pull request:

https://github.com/apache/flink/pull/2314#discussion_r75083728
  
--- Diff: 
flink-streaming-connectors/flink-connector-activemq/src/main/java/org/apache/flink/streaming/connectors/activemq/AMQSource.java
 ---
@@ -0,0 +1,258 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.streaming.connectors.activemq;
+
+import org.apache.activemq.ActiveMQConnectionFactory;
+import org.apache.activemq.ActiveMQSession;
+import org.apache.flink.api.common.functions.RuntimeContext;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.api.java.typeutils.ResultTypeQueryable;
+import org.apache.flink.configuration.Configuration;
+import 
org.apache.flink.streaming.api.functions.source.MessageAcknowledgingSourceBase;
+import org.apache.flink.streaming.api.operators.StreamingRuntimeContext;
+import org.apache.flink.streaming.util.serialization.DeserializationSchema;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.jms.BytesMessage;
+import javax.jms.Connection;
+import javax.jms.Destination;
+import javax.jms.ExceptionListener;
+import javax.jms.JMSException;
+import javax.jms.Message;
+import javax.jms.MessageConsumer;
+import javax.jms.Session;
+import java.io.Serializable;
+import java.util.HashMap;
+import java.util.List;
+
+/**
+ * Source for reading messages from an ActiveMQ queue.
+ *
+ * To create an instance of the AMQSource class one should initialize and
+ * configure an instance of a connection factory that will be used to create
+ * a connection. This source waits for incoming messages from ActiveMQ and
+ * converts them from an array of bytes into an instance of the output type.
+ * If an incoming message is not a message with an array of bytes, the message
+ * is ignored and a warning message is logged.
+ *
+ * @param <OUT> type of output messages
+ */
+public class AMQSource<OUT> extends MessageAcknowledgingSourceBase<OUT, String>
+   implements ResultTypeQueryable<OUT> {
+
+   private static final Logger LOG = 
LoggerFactory.getLogger(AMQSource.class);
+
+   private final ActiveMQConnectionFactory connectionFactory;
+   private final String queueName;
+   private final DeserializationSchema<OUT> deserializationSchema;
+   private boolean logFailuresOnly = false;
+   private RunningChecker runningChecker;
+   private transient Connection connection;
+   private transient Session session;
+   private transient MessageConsumer consumer;
+   private boolean autoAck;
+   private HashMap<String, Message> unaknowledgedMessages = new HashMap<>();
+
+   /**
+* Create AMQSource.
+*
+* @param connectionFactory factory that will be used to create a 
connection with ActiveMQ
+* @param queueName name of an ActiveMQ queue to read from
+* @param deserializationSchema schema to deserialize incoming messages
+*/
+   public AMQSource(ActiveMQConnectionFactory connectionFactory, String queueName, DeserializationSchema<OUT> deserializationSchema) {
+      this(connectionFactory, queueName, deserializationSchema, new RunningCheckerImpl());
+   }
+
+   /**
+* Create AMQSource.
+*
+* @param connectionFactory factory that will be used to create a 
connection with ActiveMQ
+* @param queueName name of an ActiveMQ queue to read from
+* @param deserializationSchema schema to deserialize incoming messages
+* @param runningChecker running checker that is used to decide if the 
source is still running
+*/
+   AMQSource(ActiveMQConnectionFactory connectionFactory, String queueName, DeserializationSchema<OUT> deserializationSchema, RunningChecker runningChecker) {
+      super(String.class);
+      this.connectionFactory = connectionFactory;
+      this.queueName = queueName;
+      this.deserializationSchema = deserializationSchema;
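
For context, a hypothetical job wiring against the AMQSource constructor quoted above (the quoted diff is truncated by the archive). The broker URL and queue name are made up, and checkpointing is assumed to be enabled, since message acknowledgement piggybacks on checkpoints:

{code:java}
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.util.serialization.SimpleStringSchema;

public class AMQSourceExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
            StreamExecutionEnvironment.getExecutionEnvironment();
        // acknowledgement of consumed messages piggybacks on checkpoints
        env.enableCheckpointing(5000);

        ActiveMQConnectionFactory factory =
            new ActiveMQConnectionFactory("tcp://localhost:61616");

        env.addSource(new AMQSource<>(factory, "flink-queue", new SimpleStringSchema()))
            .print();

        env.execute("ActiveMQ source example");
    }
}
{code}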

[jira] [Commented] (FLINK-3298) Streaming connector for ActiveMQ

2016-08-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15424131#comment-15424131
 ] 

ASF GitHub Bot commented on FLINK-3298:
---------------------------------------

Github user rmetzger commented on a diff in the pull request:

https://github.com/apache/flink/pull/2314#discussion_r75083728
  
--- Diff: 
flink-streaming-connectors/flink-connector-activemq/src/main/java/org/apache/flink/streaming/connectors/activemq/AMQSource.java
 ---
@@ -0,0 +1,258 @@
[quoted new-file context identical to the previous message omitted]

[GitHub] flink issue #2314: [FLINK-3298] Implement ActiveMQ streaming connector

2016-08-17 Thread rmetzger
Github user rmetzger commented on the issue:

https://github.com/apache/flink/pull/2314
  
@mushketyk: The Bahir community is currently setting up a repository for 
Flink. I think it'll be available in the next few days.
The connector is almost ready to be merged into Bahir.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-3298) Streaming connector for ActiveMQ

2016-08-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15424140#comment-15424140
 ] 

ASF GitHub Bot commented on FLINK-3298:
---------------------------------------

Github user rmetzger commented on the issue:

https://github.com/apache/flink/pull/2314
  
@mushketyk: The Bahir community is currently setting up a repository for 
Flink. I think it'll be available in the next few days.
The connector is almost ready to be merged into Bahir.


> Streaming connector for ActiveMQ
> --------------------------------
>
> Key: FLINK-3298
> URL: https://issues.apache.org/jira/browse/FLINK-3298
> Project: Flink
>  Issue Type: New Feature
>  Components: Streaming Connectors
>Reporter: Mohit Sethi
>Assignee: Ivan Mushketyk
>Priority: Minor
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4404) Implement Data Transfer SSL

2016-08-17 Thread Ufuk Celebi (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15424142#comment-15424142
 ] 

Ufuk Celebi commented on FLINK-4404:
------------------------------------

Feel free to ping me if some input is needed on how the data transfer currently 
works.

> Implement Data Transfer SSL
> ---------------------------
>
> Key: FLINK-4404
> URL: https://issues.apache.org/jira/browse/FLINK-4404
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Suresh Krishnappa
>  Labels: security
>
> This issue is to address part T3-3 (Implement Data Transfer TLS/SSL) of the 
> [Secure Data 
> Access|https://docs.google.com/document/d/1-GQB6uVOyoaXGwtqwqLV8BHDxWiMO2WnVzBoJ8oPaAs/edit?usp=sharing]
>  design doc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4340) Remove RocksDB Semi-Async Checkpoint Mode

2016-08-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15424162#comment-15424162
 ] 

ASF GitHub Bot commented on FLINK-4340:
---------------------------------------

Github user uce commented on the issue:

https://github.com/apache/flink/pull/2345
  
+1 to merge this.

On a related note: we might want to add information to the checkpoint 
statistics part of the web frontend saying whether the checkpoints were async 
or sync. Users might get confused when they see that the checkpoint times 
increased.


> Remove RocksDB Semi-Async Checkpoint Mode
> -----------------------------------------
>
> Key: FLINK-4340
> URL: https://issues.apache.org/jira/browse/FLINK-4340
> Project: Flink
>  Issue Type: Improvement
>  Components: State Backends, Checkpointing
>Affects Versions: 1.1.0
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
>
> This seems to be causing too many problems and is also incompatible with the 
> upcoming key-group/sharding changes that will allow rescaling of keyed state.
> Once this is done we can also close FLINK-4228.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink issue #2345: [FLINK-4340] Remove RocksDB Semi-Async Checkpoint Mode

2016-08-17 Thread uce
Github user uce commented on the issue:

https://github.com/apache/flink/pull/2345
  
+1 to merge this.

On a related note: we might want to add information to the checkpoint 
statistics part of the web frontend saying whether the checkpoints were async 
or sync. Users might get confused when they see that the checkpoint times 
increased.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #2288: Feature/s3 a fix

2016-08-17 Thread uce
Github user uce commented on a diff in the pull request:

https://github.com/apache/flink/pull/2288#discussion_r75086582
  
--- Diff: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/util/HDFSCopyFromLocal.java
 ---
@@ -62,4 +64,34 @@ public void run() {
throw asyncException.f0;
}
}
+
+   /**
+    * Ensure that the target path terminates with a new directory to be created by fs. If remoteURI
+    * does not specify a new directory, append the local directory name.
+    *
+    * @param fs target file system
+    * @param localPath local directory being copied
+    * @param remoteURI target URI
+    * @return the (possibly extended) target URI
+    * @throws IOException if the existence check on the target fails
+    */
+   protected static URI checkInitialDirectory(final FileSystem fs, final File localPath, final URI remoteURI) throws IOException {
+      if (localPath.isDirectory()) {
+         Path remotePath = new Path(remoteURI);
+         if (fs.exists(remotePath)) {
+            return new Path(remotePath, localPath.getName()).toUri();
+         }
+      }
+      return remoteURI;
+   }
+
+   protected static void copyFromLocalFile(final FileSystem fs, final File 
localPath, final URI remotePath) throws Exception {
--- End diff --

Yes makes sense. :+1:


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #2288: Feature/s3 a fix

2016-08-17 Thread uce
Github user uce commented on a diff in the pull request:

https://github.com/apache/flink/pull/2288#discussion_r75086698
  
--- Diff: 
flink-streaming-java/src/test/java/org/apache/flink/streaming/util/HDFSCopyUtilitiesTest.java
 ---
@@ -70,6 +70,85 @@ public void testCopyFromLocal() throws Exception {
}
}
 
+
+   /**
+* This test verifies that nested directories are properly copied.
+*/
+   @Test
+   public void testCopyFromLocalRecursive() throws Exception {
+
+   File rootDir = tempFolder.newFolder();
+   File nestedDir = new File(rootDir,"nested");
+   nestedDir.mkdir();
+
+   Map<String, File> copyFiles = new HashMap<>();
+
+   copyFiles.put("1",new File(rootDir, "1"));
+   copyFiles.put("2",new File(rootDir, "2"));
+   copyFiles.put("3",new File(nestedDir, "3"));
+
+   for (File file : copyFiles.values()) {
+   try (DataOutputStream out = new DataOutputStream(new 
FileOutputStream(file))) {
+   out.writeUTF("Hello there, " + file.getName());
+   }
+   }
+   //add root and nested dirs to expected output
+   copyFiles.put(rootDir.getName(),rootDir);
+   copyFiles.put("nested",nestedDir);
+
+   assertEquals(5,copyFiles.size());
+
+   //Test for copy to unspecified target directory
+   File copyDirU = tempFolder.newFolder();
+   HDFSCopyFromLocal.copyFromLocal(
+   rootDir,
+   new Path("file://" + 
copyDirU.getAbsolutePath()).toUri());
+
+   //Test for copy to specified target directory
+   File copyDirQ = tempFolder.newFolder();
+   HDFSCopyFromLocal.copyFromLocal(
+   rootDir,
+   new Path("file://" + copyDirQ.getAbsolutePath() 
+ "/" + rootDir.getName()).toUri());
+
+   FilenameFilter noCrc = new FilenameFilter() {
--- End diff --

I see. I was wondering why there might be such files in the newly created 
directory.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-4021) Problem of setting autoread for netty channel when more tasks sharing the same Tcp connection

2016-08-17 Thread Ufuk Celebi (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15424179#comment-15424179
 ] 

Ufuk Celebi commented on FLINK-4021:
------------------------------------

After you address the comments (replace the return and add the test) and add 
another commit to the pull request branch, I will go ahead and merge this. :-) 
If you push to the PR branch, it will automatically be reflected in the PR on 
GitHub.

> Problem of setting autoread for netty channel when more tasks sharing the 
> same Tcp connection
> -------------------------------------------------------------------------
>
> Key: FLINK-4021
> URL: https://issues.apache.org/jira/browse/FLINK-4021
> Project: Flink
>  Issue Type: Bug
>  Components: Distributed Coordination
>Affects Versions: 1.0.2
>Reporter: Zhijiang Wang
>Assignee: Zhijiang Wang
>
> More than one task can share the same TCP connection for shuffling data.
> If a downstream task "A" has no available memory segment to read a netty 
> buffer from the network, it sets autoread to false for the channel.
> When task A fails or has available segments again, the netty handler is 
> notified to process the staged buffers first and then reset autoread to 
> true. But in some scenarios, autoread is never set back to true.
> That is, when processing the staged buffers, the handler first finds the 
> corresponding input channel for each buffer; if the task for that input 
> channel has failed, the decodeMsg method in PartitionRequestClientHandler 
> returns false, which means autoread is never reset to true.
> In summary: if task "A" sets autoread to false because it has no available 
> segments, leaving some staged buffers, and another task "B" whose buffer is 
> among the staged ones fails by accident, then task A's attempt to reset 
> autoread to true cannot complete because of task B's failure.
> I have fixed this problem in our application by adding a boolean parameter 
> to the decodeBufferOrEvent method to distinguish whether it is invoked by 
> the netty IO thread's channel read or by the staged-message handler task in 
> PartitionRequestClientHandler.
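
To make the failure mode concrete, below is a tiny self-contained model of the proposed fix. The class, its names, and the staging logic are illustrative simplifications, not Flink's actual netty handler code:

{code:java}
import java.util.ArrayDeque;
import java.util.Queue;

/** Minimal model of the auto-read bug described above (all names hypothetical). */
class AutoReadModel {

    private final Queue<String> stagedBuffers = new ArrayDeque<>();
    private boolean autoRead = true;

    /** Task "A" ran out of memory segments: back off and stage the buffer. */
    void onNoMemorySegments(String buffer) {
        autoRead = false;
        stagedBuffers.add(buffer);
    }

    /** Drain staged buffers; re-enable auto-read only when the queue is empty. */
    void drainStagedBuffers() {
        while (!stagedBuffers.isEmpty()) {
            String buffer = stagedBuffers.peek();
            boolean delivered = decodeBufferOrEvent(buffer, true);
            if (!delivered) {
                return; // without the fix: bail out, autoRead stays false forever
            }
            stagedBuffers.poll();
        }
        autoRead = true;
    }

    /** The proposed fix: a flag telling whether we run from the staged-message path. */
    private boolean decodeBufferOrEvent(String buffer, boolean isStagedBuffer) {
        boolean channelAlive = !buffer.startsWith("B"); // pretend task "B" failed
        if (!channelAlive) {
            // For staged buffers, report success so that draining continues
            // and autoRead is eventually restored, instead of aborting.
            return isStagedBuffer;
        }
        return true;
    }
}
{code}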



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4021) Problem of setting autoread for netty channel when more tasks sharing the same Tcp connection

2016-08-17 Thread Zhijiang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15424183#comment-15424183
 ] 

Zhijiang Wang commented on FLINK-4021:
--------------------------------------

Got it. I will modify the return and add the test on the PR branch this week.

> Problem of setting autoread for netty channel when more tasks sharing the 
> same Tcp connection
> -------------------------------------------------------------------------
>
> Key: FLINK-4021
> URL: https://issues.apache.org/jira/browse/FLINK-4021
> Project: Flink
>  Issue Type: Bug
>  Components: Distributed Coordination
>Affects Versions: 1.0.2
>Reporter: Zhijiang Wang
>Assignee: Zhijiang Wang
>
> More than one task can share the same TCP connection for shuffling data.
> If a downstream task "A" has no available memory segment to read a netty 
> buffer from the network, it sets autoread to false for the channel.
> When task A fails or has available segments again, the netty handler is 
> notified to process the staged buffers first and then reset autoread to 
> true. But in some scenarios, autoread is never set back to true.
> That is, when processing the staged buffers, the handler first finds the 
> corresponding input channel for each buffer; if the task for that input 
> channel has failed, the decodeMsg method in PartitionRequestClientHandler 
> returns false, which means autoread is never reset to true.
> In summary: if task "A" sets autoread to false because it has no available 
> segments, leaving some staged buffers, and another task "B" whose buffer is 
> among the staged ones fails by accident, then task A's attempt to reset 
> autoread to true cannot complete because of task B's failure.
> I have fixed this problem in our application by adding a boolean parameter 
> to the decodeBufferOrEvent method to distinguish whether it is invoked by 
> the netty IO thread's channel read or by the staged-message handler task in 
> PartitionRequestClientHandler.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (FLINK-4104) Restructure Gelly docs

2016-08-17 Thread Ufuk Celebi (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ufuk Celebi closed FLINK-4104.
------------------------------
   Resolution: Fixed
Fix Version/s: 1.2.0

Fixed in b19648e (master).

> Restructure Gelly docs
> ----------------------
>
> Key: FLINK-4104
> URL: https://issues.apache.org/jira/browse/FLINK-4104
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Affects Versions: 1.1.0
>Reporter: Greg Hogan
>Assignee: Greg Hogan
>Priority: Minor
> Fix For: 1.2.0
>
>
> The Gelly documentation has grown sufficiently long to suggest dividing into 
> sub-pages. Leave "Using Gelly" on the main page and link to the following 
> topics as sub-pages:
> * Graph API
> * Iterative Graph Processing
> * Library Methods
> * Graph Algorithms
> * Graph Generators



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request #2258: [FLINK-4104] [docs] Restructure Gelly docs

2016-08-17 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/2258


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-4104) Restructure Gelly docs

2016-08-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15424193#comment-15424193
 ] 

ASF GitHub Bot commented on FLINK-4104:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/2258


> Restructure Gelly docs
> --
>
> Key: FLINK-4104
> URL: https://issues.apache.org/jira/browse/FLINK-4104
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Affects Versions: 1.1.0
>Reporter: Greg Hogan
>Assignee: Greg Hogan
>Priority: Minor
> Fix For: 1.2.0
>
>
> The Gelly documentation has grown sufficiently long to suggest dividing into 
> sub-pages. Leave "Using Gelly" on the main page and link to the following 
> topics as sub-pages:
> * Graph API
> * Iterative Graph Processing
> * Library Methods
> * Graph Algorithms
> * Graph Generators



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-4409) class conflict between jsr305-1.3.9.jar and flink-shaded-hadoop2-1.1.1.jar

2016-08-17 Thread Renkai Ge (JIRA)
Renkai Ge created FLINK-4409:


 Summary: class conflict between jsr305-1.3.9.jar and 
flink-shaded-hadoop2-1.1.1.jar
 Key: FLINK-4409
 URL: https://issues.apache.org/jira/browse/FLINK-4409
 Project: Flink
  Issue Type: Bug
  Components: Build System
Affects Versions: 1.1.0
Reporter: Renkai Ge
Priority: Minor


It seems all classes in jsr305-1.3.9.jar can be found in 
flink-shaded-hadoop2-1.1.1.jar, too.
I can exclude these jars for a successful assembly and run when using sbt:
{code:scala}
libraryDependencies ++= Seq(
  "com.typesafe.play" %% "play-json" % "2.3.8",
  "org.apache.flink" %% "flink-scala" % "1.1.1"
exclude("com.google.code.findbugs", "jsr305"),
  "org.apache.flink" %% "flink-connector-kafka-0.8" % "1.1.1"
exclude("com.google.code.findbugs", "jsr305"),
  "org.apache.flink" %% "flink-streaming-scala" % "1.1.1"
exclude("com.google.code.findbugs", "jsr305"),
  "org.apache.flink" %% "flink-clients" % "1.1.1"
exclude("com.google.code.findbugs", "jsr305"),
  "joda-time" % "joda-time" % "2.9.4",
  "org.scalikejdbc" %% "scalikejdbc" % "2.2.7",
  "mysql" % "mysql-connector-java" % "5.1.15",
  "io.spray" %% "spray-caching" % "1.3.3"
)
{code}
But I think it might be better to remove jsr305 dependency from Flink.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink issue #2374: [FLINK-3950] Add Meter Metric Type

2016-08-17 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/2374
  
I'm not suggesting they are useless; my point is that we should not 
copy/use things without questioning whether they fit the bill. To that end I 
simply asked questions that came to mind :)

We now have 1, 5 and 15 minute rates. Are these enough? Do we maybe need 
sub-minute rates? Do we need an hour rate as well? Essentially, why these rates 
and not others? Could the 1 minute rate alone be enough for us?

Each added rate incurs additional processing overhead. Is this 
overhead worth it? Should we force this overhead on every Meter user even if 
they only use a single rate? Should we maybe not hard-code different rates but 
make them configurable (somehow)?

Why do we need an *exponentially weighted* 15 minute rate if you can get a 
closer look at recent events with the 5 minute rate?
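
For context on where those rates come from: Codahale/Dropwizard-style meters
track each window as an exponentially weighted moving average, so every extra
window adds its own state and per-tick work. A minimal sketch of the mechanism
(illustrative only, not Flink's or Dropwizard's actual implementation):

{code:java}
// One EWMA per tracked window: alpha = 1 - exp(-tick / window), so a
// 15-minute window reacts more slowly than a 1-minute one -- exactly the
// trade-off (and the per-rate overhead) discussed above.
final class Ewma {

    private final double alpha;
    private final double tickSeconds;
    private double rate;        // events per second
    private long uncounted;     // events since the last tick
    private boolean initialized;

    Ewma(double tickSeconds, double windowSeconds) {
        this.tickSeconds = tickSeconds;
        this.alpha = 1 - Math.exp(-tickSeconds / windowSeconds);
    }

    void mark(long n) {
        uncounted += n;
    }

    /** Called on a fixed interval, e.g. every 5 seconds. */
    void tick() {
        double instantRate = uncounted / tickSeconds;
        uncounted = 0;
        if (initialized) {
            rate += alpha * (instantRate - rate);
        } else {
            rate = instantRate; // first tick seeds the average
            initialized = true;
        }
    }

    double getRatePerSecond() {
        return rate;
    }
}
{code}

A meter exposing 1, 5 and 15 minute rates would hold three such instances and
tick all of them on every interval, which is the overhead being questioned.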


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #2314: [FLINK-3298] Implement ActiveMQ streaming connecto...

2016-08-17 Thread mushketyk
Github user mushketyk commented on a diff in the pull request:

https://github.com/apache/flink/pull/2314#discussion_r75091316
  
--- Diff: 
flink-streaming-connectors/flink-connector-activemq/src/main/java/org/apache/flink/streaming/connectors/activemq/AMQSource.java
 ---
@@ -0,0 +1,258 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.streaming.connectors.activemq;
+
+import org.apache.activemq.ActiveMQConnectionFactory;
+import org.apache.activemq.ActiveMQSession;
+import org.apache.flink.api.common.functions.RuntimeContext;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.api.java.typeutils.ResultTypeQueryable;
+import org.apache.flink.configuration.Configuration;
+import 
org.apache.flink.streaming.api.functions.source.MessageAcknowledgingSourceBase;
+import org.apache.flink.streaming.api.operators.StreamingRuntimeContext;
+import org.apache.flink.streaming.util.serialization.DeserializationSchema;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.jms.BytesMessage;
+import javax.jms.Connection;
+import javax.jms.Destination;
+import javax.jms.ExceptionListener;
+import javax.jms.JMSException;
+import javax.jms.Message;
+import javax.jms.MessageConsumer;
+import javax.jms.Session;
+import java.io.Serializable;
+import java.util.HashMap;
+import java.util.List;
+
+/**
+ * Source for reading messages from an ActiveMQ queue.
+ * 
+ * To create an instance of the AMQSource class one should initialize and 
configure an
+ * instance of a connection factory that will be used to create a 
connection.
+ * This source waits for incoming messages from ActiveMQ and converts 
them from
+ * an array of bytes into an instance of the output type. If an incoming
+ * message does not carry an array of bytes, the message is ignored
+ * and a warning is logged.
+ *
+ * @param <OUT> type of output messages
+ */
+public class AMQSource<OUT> extends MessageAcknowledgingSourceBase<OUT, String>
+   implements ResultTypeQueryable<OUT> {
+
+   private static final Logger LOG = 
LoggerFactory.getLogger(AMQSource.class);
+
+   private final ActiveMQConnectionFactory connectionFactory;
+   private final String queueName;
+   private final DeserializationSchema<OUT> deserializationSchema;
+   private boolean logFailuresOnly = false;
+   private RunningChecker runningChecker;
+   private transient Connection connection;
+   private transient Session session;
+   private transient MessageConsumer consumer;
+   private boolean autoAck;
+   private HashMap<String, Message> unacknowledgedMessages = new 
HashMap<>();
+
+   /**
+* Create AMQSource.
+*
+* @param connectionFactory factory that will be used to create a 
connection with ActiveMQ
+* @param queueName name of an ActiveMQ queue to read from
+* @param deserializationSchema schema to deserialize incoming messages
+*/
+   public AMQSource(ActiveMQConnectionFactory connectionFactory, String 
queueName, DeserializationSchema<OUT> deserializationSchema) {
+   this(connectionFactory, queueName, deserializationSchema, new 
RunningCheckerImpl());
+   }
+
+   /**
+* Create AMQSource.
+*
+* @param connectionFactory factory that will be used to create a 
connection with ActiveMQ
+* @param queueName name of an ActiveMQ queue to read from
+* @param deserializationSchema schema to deserialize incoming messages
+* @param runningChecker running checker that is used to decide if the 
source is still running
+*/
+   AMQSource(ActiveMQConnectionFactory connectionFactory, String 
queueName, DeserializationSchema<OUT> deserializationSchema, RunningChecker 
runningChecker) {
+   super(String.class);
+   this.connectionFactory = connectionFactory;
+   this.queueName = queueName;
+   this.deserializationSchema = deserializationSchema

[jira] [Commented] (FLINK-3298) Streaming connector for ActiveMQ

2016-08-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15424203#comment-15424203
 ] 

ASF GitHub Bot commented on FLINK-3298:
---

Github user mushketyk commented on a diff in the pull request:

https://github.com/apache/flink/pull/2314#discussion_r75091316
  
--- Diff: 
flink-streaming-connectors/flink-connector-activemq/src/main/java/org/apache/flink/streaming/connectors/activemq/AMQSource.java
 ---
@@ -0,0 +1,258 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.streaming.connectors.activemq;
+
+import org.apache.activemq.ActiveMQConnectionFactory;
+import org.apache.activemq.ActiveMQSession;
+import org.apache.flink.api.common.functions.RuntimeContext;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.api.java.typeutils.ResultTypeQueryable;
+import org.apache.flink.configuration.Configuration;
+import 
org.apache.flink.streaming.api.functions.source.MessageAcknowledgingSourceBase;
+import org.apache.flink.streaming.api.operators.StreamingRuntimeContext;
+import org.apache.flink.streaming.util.serialization.DeserializationSchema;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.jms.BytesMessage;
+import javax.jms.Connection;
+import javax.jms.Destination;
+import javax.jms.ExceptionListener;
+import javax.jms.JMSException;
+import javax.jms.Message;
+import javax.jms.MessageConsumer;
+import javax.jms.Session;
+import java.io.Serializable;
+import java.util.HashMap;
+import java.util.List;
+
+/**
+ * Source for reading messages from an ActiveMQ queue.
+ * 
+ * To create an instance of the AMQSource class one should initialize and 
configure an
+ * instance of a connection factory that will be used to create a 
connection.
+ * This source waits for incoming messages from ActiveMQ and converts 
them from
+ * an array of bytes into an instance of the output type. If an incoming
+ * message does not carry an array of bytes, the message is ignored
+ * and a warning is logged.
+ *
+ * @param <OUT> type of output messages
+ */
+public class AMQSource<OUT> extends MessageAcknowledgingSourceBase<OUT, String>
+   implements ResultTypeQueryable<OUT> {
+
+   private static final Logger LOG = 
LoggerFactory.getLogger(AMQSource.class);
+
+   private final ActiveMQConnectionFactory connectionFactory;
+   private final String queueName;
+   private final DeserializationSchema<OUT> deserializationSchema;
+   private boolean logFailuresOnly = false;
+   private RunningChecker runningChecker;
+   private transient Connection connection;
+   private transient Session session;
+   private transient MessageConsumer consumer;
+   private boolean autoAck;
+   private HashMap<String, Message> unacknowledgedMessages = new 
HashMap<>();
+
+   /**
+* Create AMQSource.
+*
+* @param connectionFactory factory that will be used to create a 
connection with ActiveMQ
+* @param queueName name of an ActiveMQ queue to read from
+* @param deserializationSchema schema to deserialize incoming messages
+*/
+   public AMQSource(ActiveMQConnectionFactory connectionFactory, String 
queueName, DeserializationSchema<OUT> deserializationSchema) {
+   this(connectionFactory, queueName, deserializationSchema, new 
RunningCheckerImpl());
+   }
+
+   /**
+* Create AMQSource.
+*
+* @param connectionFactory factory that will be used to create a 
connection with ActiveMQ
+* @param queueName name of an ActiveMQ queue to read from
+* @param deserializationSchema schema to deserialize incoming messages
+* @param runningChecker running checker that is used to decide if the 
source is still running
+*/
+   AMQSource(ActiveMQConnectionFactory connectionFactory, String 
queueName, DeserializationSchema<OUT> deserial

[GitHub] flink issue #2314: [FLINK-3298] Implement ActiveMQ streaming connector

2016-08-17 Thread mushketyk
Github user mushketyk commented on the issue:

https://github.com/apache/flink/pull/2314
  
@rmetzger About the "job jar". Could you suggest how to check this? Is 
there anything I should do except performing "mvn clean install"?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-3298) Streaming connector for ActiveMQ

2016-08-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15424207#comment-15424207
 ] 

ASF GitHub Bot commented on FLINK-3298:
---

Github user mushketyk commented on the issue:

https://github.com/apache/flink/pull/2314
  
@rmetzger About the "job jar". Could you suggest how to check this? Is 
there anything I should do except performing "mvn clean install"?


> Streaming connector for ActiveMQ
> 
>
> Key: FLINK-3298
> URL: https://issues.apache.org/jira/browse/FLINK-3298
> Project: Flink
>  Issue Type: New Feature
>  Components: Streaming Connectors
>Reporter: Mohit Sethi
>Assignee: Ivan Mushketyk
>Priority: Minor
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-3298) Streaming connector for ActiveMQ

2016-08-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15424209#comment-15424209
 ] 

ASF GitHub Bot commented on FLINK-3298:
---

Github user mushketyk commented on the issue:

https://github.com/apache/flink/pull/2314
  
@rmetzger About moving to Bahir. Would you (or someone else) merge this 
into Apache Flink and then move this code with the other connectors, or should I 
create another PR with the AMQ connector for Bahir?


> Streaming connector for ActiveMQ
> 
>
> Key: FLINK-3298
> URL: https://issues.apache.org/jira/browse/FLINK-3298
> Project: Flink
>  Issue Type: New Feature
>  Components: Streaming Connectors
>Reporter: Mohit Sethi
>Assignee: Ivan Mushketyk
>Priority: Minor
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink issue #2314: [FLINK-3298] Implement ActiveMQ streaming connector

2016-08-17 Thread mushketyk
Github user mushketyk commented on the issue:

https://github.com/apache/flink/pull/2314
  
@rmetzger About moving to Bahir. Would you (or someone else) merge this 
into Apache Flink and then move this code with the other connectors, or should I 
create another PR with the AMQ connector for Bahir?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink issue #2345: [FLINK-4340] Remove RocksDB Semi-Async Checkpoint Mode

2016-08-17 Thread StephanEwen
Github user StephanEwen commented on the issue:

https://github.com/apache/flink/pull/2345
  
How about having two times: the "synchronous" component and the 
"asynchronous" component of the time, plus probably a total for those who 
don't want to sum them up themselves.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-4340) Remove RocksDB Semi-Async Checkpoint Mode

2016-08-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15424214#comment-15424214
 ] 

ASF GitHub Bot commented on FLINK-4340:
---

Github user StephanEwen commented on the issue:

https://github.com/apache/flink/pull/2345
  
How about having two times: the "synchronous" component and the 
"asynchronous" component of the time, plus probably a total for those who 
don't want to sum them up themselves.


> Remove RocksDB Semi-Async Checkpoint Mode
> -
>
> Key: FLINK-4340
> URL: https://issues.apache.org/jira/browse/FLINK-4340
> Project: Flink
>  Issue Type: Improvement
>  Components: State Backends, Checkpointing
>Affects Versions: 1.1.0
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
>
> This seems to be causing too many problems and is also incompatible with the 
> upcoming key-group/sharding changes that will allow rescaling of keyed state.
> Once this is done we can also close FLINK-4228.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-3298) Streaming connector for ActiveMQ

2016-08-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15424219#comment-15424219
 ] 

ASF GitHub Bot commented on FLINK-3298:
---

Github user rmetzger commented on the issue:

https://github.com/apache/flink/pull/2314
  
Let's wait for the Bahir GitHub repo to be opened and then open a PR there.
I can't merge it there myself, because I'm not a committer, but I'm sure 
they'll merge it once I've done a final review.

Regarding the "job jar" test:
- Create a new Flink job using the quickstart script or Maven archetype.
- Add the ActiveMQ connector as a dependency and write a little test job.
- Build the Flink job using Maven.
- Start a local Flink cluster and submit the jar from the Flink job to it.
- Check that the job is working correctly.


> Streaming connector for ActiveMQ
> 
>
> Key: FLINK-3298
> URL: https://issues.apache.org/jira/browse/FLINK-3298
> Project: Flink
>  Issue Type: New Feature
>  Components: Streaming Connectors
>Reporter: Mohit Sethi
>Assignee: Ivan Mushketyk
>Priority: Minor
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink issue #2314: [FLINK-3298] Implement ActiveMQ streaming connector

2016-08-17 Thread rmetzger
Github user rmetzger commented on the issue:

https://github.com/apache/flink/pull/2314
  
Let's wait for the Bahir GitHub repo to be opened and then open a PR there.
I can't merge it there myself, because I'm not a committer, but I'm sure 
they'll merge it once I've done a final review.

Regarding the "job jar" test:
- Create a new Flink job using the quickstart script or Maven archetype.
- Add the ActiveMQ connector as a dependency and write a little test job.
- Build the Flink job using Maven.
- Start a local Flink cluster and submit the jar from the Flink job to it.
- Check that the job is working correctly (a minimal sketch follows below).
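
A minimal sketch of such a test job, using the AMQSource constructor shown in
the PR diff (the broker URL and queue name are illustrative placeholders):

{code:java}
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.activemq.AMQSource;
import org.apache.flink.streaming.util.serialization.SimpleStringSchema;

public class AmqSmokeTestJob {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // Read strings from a local ActiveMQ broker; the constructor matches
        // the one in the diff above (factory, queue name, deserialization schema).
        AMQSource<String> source = new AMQSource<>(
                new ActiveMQConnectionFactory("tcp://localhost:61616"),
                "flink-test-queue",
                new SimpleStringSchema());

        env.addSource(source).print();
        env.execute("ActiveMQ connector smoke test");
    }
}
{code}

Packaged with Maven and submitted to a local cluster, this exercises the
connector end to end.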


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Assigned] (FLINK-4409) class conflict between jsr305-1.3.9.jar and flink-shaded-hadoop2-1.1.1.jar

2016-08-17 Thread Stephan Ewen (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephan Ewen reassigned FLINK-4409:
---

Assignee: Stephan Ewen

> class conflict between jsr305-1.3.9.jar and flink-shaded-hadoop2-1.1.1.jar
> --
>
> Key: FLINK-4409
> URL: https://issues.apache.org/jira/browse/FLINK-4409
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 1.1.0
>Reporter: Renkai Ge
>Assignee: Stephan Ewen
>Priority: Minor
>
> It seems all classes in jsr305-1.3.9.jar can be found in 
> flink-shaded-hadoop2-1.1.1.jar, too.
> I can exclude these jars for a successful assembly and run when using sbt:
> {code:none}
> libraryDependencies ++= Seq(
>   "com.typesafe.play" %% "play-json" % "2.3.8",
>   "org.apache.flink" %% "flink-scala" % "1.1.1"
> exclude("com.google.code.findbugs", "jsr305"),
>   "org.apache.flink" %% "flink-connector-kafka-0.8" % "1.1.1"
> exclude("com.google.code.findbugs", "jsr305"),
>   "org.apache.flink" %% "flink-streaming-scala" % "1.1.1"
> exclude("com.google.code.findbugs", "jsr305"),
>   "org.apache.flink" %% "flink-clients" % "1.1.1"
> exclude("com.google.code.findbugs", "jsr305"),
>   "joda-time" % "joda-time" % "2.9.4",
>   "org.scalikejdbc" %% "scalikejdbc" % "2.2.7",
>   "mysql" % "mysql-connector-java" % "5.1.15",
>   "io.spray" %% "spray-caching" % "1.3.3"
> )
> {code}
> But I think it might be better to remove jsr305 dependency from Flink.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4409) class conflict between jsr305-1.3.9.jar and flink-shaded-hadoop2-1.1.1.jar

2016-08-17 Thread Stephan Ewen (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15424220#comment-15424220
 ] 

Stephan Ewen commented on FLINK-4409:
-

We want to make Flink independent of Hadoop, so I would exclude them from the 
Hadoop2 dependency.

> class conflict between jsr305-1.3.9.jar and flink-shaded-hadoop2-1.1.1.jar
> --
>
> Key: FLINK-4409
> URL: https://issues.apache.org/jira/browse/FLINK-4409
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 1.1.0
>Reporter: Renkai Ge
>Priority: Minor
>
> It seems all classes in jsr305-1.3.9.jar can be found in 
> flink-shaded-hadoop2-1.1.1.jar, too.
> I can exclude these jars for a successful assembly and run when using sbt:
> {code:scala}
> libraryDependencies ++= Seq(
>   "com.typesafe.play" %% "play-json" % "2.3.8",
>   "org.apache.flink" %% "flink-scala" % "1.1.1"
> exclude("com.google.code.findbugs", "jsr305"),
>   "org.apache.flink" %% "flink-connector-kafka-0.8" % "1.1.1"
> exclude("com.google.code.findbugs", "jsr305"),
>   "org.apache.flink" %% "flink-streaming-scala" % "1.1.1"
> exclude("com.google.code.findbugs", "jsr305"),
>   "org.apache.flink" %% "flink-clients" % "1.1.1"
> exclude("com.google.code.findbugs", "jsr305"),
>   "joda-time" % "joda-time" % "2.9.4",
>   "org.scalikejdbc" %% "scalikejdbc" % "2.2.7",
>   "mysql" % "mysql-connector-java" % "5.1.15",
>   "io.spray" %% "spray-caching" % "1.3.3"
> )
> {code}
> But I think it might be better to remove jsr305 dependency from Flink.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (FLINK-4409) class conflict between jsr305-1.3.9.jar and flink-shaded-hadoop2-1.1.1.jar

2016-08-17 Thread Renkai Ge (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Renkai Ge updated FLINK-4409:
-
Description: 
It seems all classes in jsr305-1.3.9.jar can be found in 
flink-shaded-hadoop2-1.1.1.jar, too.
I can exclude these jars for a successful assembly and run when using sbt:
{code:none}
libraryDependencies ++= Seq(
  "com.typesafe.play" %% "play-json" % "2.3.8",
  "org.apache.flink" %% "flink-scala" % "1.1.1"
exclude("com.google.code.findbugs", "jsr305"),
  "org.apache.flink" %% "flink-connector-kafka-0.8" % "1.1.1"
exclude("com.google.code.findbugs", "jsr305"),
  "org.apache.flink" %% "flink-streaming-scala" % "1.1.1"
exclude("com.google.code.findbugs", "jsr305"),
  "org.apache.flink" %% "flink-clients" % "1.1.1"
exclude("com.google.code.findbugs", "jsr305"),
  "joda-time" % "joda-time" % "2.9.4",
  "org.scalikejdbc" %% "scalikejdbc" % "2.2.7",
  "mysql" % "mysql-connector-java" % "5.1.15",
  "io.spray" %% "spray-caching" % "1.3.3"
)
{code}
But I think it might be better to remove jsr305 dependency from Flink.

  was:
It seems all classes in jsr305-1.3.9.jar can be found in 
flink-shaded-hadoop2-1.1.1.jar, too.
I can exclude these jars for a successful assembly and run when using sbt:
{code:scala}
libraryDependencies ++= Seq(
  "com.typesafe.play" %% "play-json" % "2.3.8",
  "org.apache.flink" %% "flink-scala" % "1.1.1"
exclude("com.google.code.findbugs", "jsr305"),
  "org.apache.flink" %% "flink-connector-kafka-0.8" % "1.1.1"
exclude("com.google.code.findbugs", "jsr305"),
  "org.apache.flink" %% "flink-streaming-scala" % "1.1.1"
exclude("com.google.code.findbugs", "jsr305"),
  "org.apache.flink" %% "flink-clients" % "1.1.1"
exclude("com.google.code.findbugs", "jsr305"),
  "joda-time" % "joda-time" % "2.9.4",
  "org.scalikejdbc" %% "scalikejdbc" % "2.2.7",
  "mysql" % "mysql-connector-java" % "5.1.15",
  "io.spray" %% "spray-caching" % "1.3.3"
)
{code}
But I think it might be better to remove jsr305 dependency from Flink.


> class conflict between jsr305-1.3.9.jar and flink-shaded-hadoop2-1.1.1.jar
> --
>
> Key: FLINK-4409
> URL: https://issues.apache.org/jira/browse/FLINK-4409
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 1.1.0
>Reporter: Renkai Ge
>Priority: Minor
>
> It seems all classes in jsr305-1.3.9.jar can be found in 
> flink-shaded-hadoop2-1.1.1.jar, too.
> I can exclude these jars for a successful assembly and run when using sbt:
> {code:none}
> libraryDependencies ++= Seq(
>   "com.typesafe.play" %% "play-json" % "2.3.8",
>   "org.apache.flink" %% "flink-scala" % "1.1.1"
> exclude("com.google.code.findbugs", "jsr305"),
>   "org.apache.flink" %% "flink-connector-kafka-0.8" % "1.1.1"
> exclude("com.google.code.findbugs", "jsr305"),
>   "org.apache.flink" %% "flink-streaming-scala" % "1.1.1"
> exclude("com.google.code.findbugs", "jsr305"),
>   "org.apache.flink" %% "flink-clients" % "1.1.1"
> exclude("com.google.code.findbugs", "jsr305"),
>   "joda-time" % "joda-time" % "2.9.4",
>   "org.scalikejdbc" %% "scalikejdbc" % "2.2.7",
>   "mysql" % "mysql-connector-java" % "5.1.15",
>   "io.spray" %% "spray-caching" % "1.3.3"
> )
> {code}
> But I think it might be better to remove jsr305 dependency from Flink.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-4410) Split checkpoint times into synchronous and asynchronous part

2016-08-17 Thread Ufuk Celebi (JIRA)
Ufuk Celebi created FLINK-4410:
--

 Summary: Split checkpoint times into synchronous and asynchronous 
part
 Key: FLINK-4410
 URL: https://issues.apache.org/jira/browse/FLINK-4410
 Project: Flink
  Issue Type: Improvement
  Components: Webfrontend
Reporter: Ufuk Celebi
Priority: Minor


Checkpoint statistics contain the duration of a checkpoint. We should split 
this time into its synchronous and asynchronous parts. This will give more 
insight into the inner workings of the checkpointing mechanism and help users 
better understand what's going on.
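
A sketch of what the split could look like, echoing the suggestion in the
PR #2345 discussion above to report both components plus a total (the class
and field names here are hypothetical, not an actual Flink API):

{code:java}
// Hypothetical holder for the proposed split checkpoint timing.
final class CheckpointDuration {

    final long syncDurationMillis;   // time the task was blocked while snapshotting
    final long asyncDurationMillis;  // time spent materializing state in the background

    CheckpointDuration(long syncDurationMillis, long asyncDurationMillis) {
        this.syncDurationMillis = syncDurationMillis;
        this.asyncDurationMillis = asyncDurationMillis;
    }

    // Convenience total, so the web frontend need not sum the parts itself.
    long totalDurationMillis() {
        return syncDurationMillis + asyncDurationMillis;
    }
}
{code}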



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request #:

2016-08-17 Thread tillrohrmann
Github user tillrohrmann commented on the pull request:


https://github.com/apache/flink/commit/68addf39e0e5f9e1656818f923be362680ed93b0#commitcomment-18669730
  
Really nice abstraction @StephanEwen 👍 . I like it a lot :-)


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #2378: [FLINK-4409] [build] Exclude JSR 305 from Hadoop d...

2016-08-17 Thread StephanEwen
GitHub user StephanEwen opened a pull request:

https://github.com/apache/flink/pull/2378

[FLINK-4409] [build] Exclude JSR 305 from Hadoop dependencies

The JSR 305 classes (`javax.annotation`) are already in Flink's core 
dependencies.

I verified that after this patch, the classes are no longer in the Hadoop 
jar files.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/StephanEwen/incubator-flink jsr_hadoop

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/2378.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2378


commit 3f1e1f92f2138877a4e1e0a187638cdb30de0865
Author: Stephan Ewen 
Date:   2016-08-17T09:53:57Z

[FLINK-4409] [build] Exclude JSR 305 from Hadoop dependencies




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink issue #2378: [FLINK-4409] [build] Exclude JSR 305 from Hadoop dependen...

2016-08-17 Thread StephanEwen
Github user StephanEwen commented on the issue:

https://github.com/apache/flink/pull/2378
  
@rmetzger If you are okay with this, I'd merge it to 1.2.0 and 1.1.2


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-4363) Implement TaskManager basic startup of all components in java

2016-08-17 Thread Till Rohrmann (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15424236#comment-15424236
 ] 

Till Rohrmann commented on FLINK-4363:
--

I think you can already start with implementing the start-up of the 
TaskExecutor, following a similar approach to what we're doing 
currently. The important thing in my opinion is to pass the different 
TaskExecutor components directly to the constructor instead of initializing 
them in the constructor. This will make testing easier. But we could offer 
a factory method which does the component construction and then instantiates 
the TaskExecutor.  
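
A sketch of that constructor-injection plus factory-method pattern (the
component types below are stand-ins, not the real TaskExecutor dependencies):

{code:java}
// Illustrative only: dependencies arrive via the constructor, so tests can
// pass mocks; a static factory does the production wiring in one place.
class TaskExecutor {

    interface MemoryManager {}
    interface NetworkEnvironment {}

    private final MemoryManager memoryManager;
    private final NetworkEnvironment network;

    TaskExecutor(MemoryManager memoryManager, NetworkEnvironment network) {
        this.memoryManager = memoryManager; // injected, not constructed here
        this.network = network;
    }

    // Factory method: component construction happens here, outside the constructor.
    static TaskExecutor create() {
        return new TaskExecutor(new MemoryManager() {}, new NetworkEnvironment() {});
    }
}
{code}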

> Implement TaskManager basic startup of all components in java
> -
>
> Key: FLINK-4363
> URL: https://issues.apache.org/jira/browse/FLINK-4363
> Project: Flink
>  Issue Type: Sub-task
>  Components: Cluster Management
>Reporter: Zhijiang Wang
>
> Similar with current {{TaskManager}},but implement initialization and startup 
> all components in java instead of scala.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4409) class conflict between jsr305-1.3.9.jar and flink-shaded-hadoop2-1.1.1.jar

2016-08-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15424234#comment-15424234
 ] 

ASF GitHub Bot commented on FLINK-4409:
---

GitHub user StephanEwen opened a pull request:

https://github.com/apache/flink/pull/2378

[FLINK-4409] [build] Exclude JSR 305 from Hadoop dependencies

The JSR 305 classes (`javax.annotation`) are already in Flink's core 
dependencies.

I verified that after this patch, the classes are no longer in the Hadoop 
jar files.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/StephanEwen/incubator-flink jsr_hadoop

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/2378.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2378


commit 3f1e1f92f2138877a4e1e0a187638cdb30de0865
Author: Stephan Ewen 
Date:   2016-08-17T09:53:57Z

[FLINK-4409] [build] Exclude JSR 305 from Hadoop dependencies




> class conflict between jsr305-1.3.9.jar and flink-shaded-hadoop2-1.1.1.jar
> --
>
> Key: FLINK-4409
> URL: https://issues.apache.org/jira/browse/FLINK-4409
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 1.1.0
>Reporter: Renkai Ge
>Assignee: Stephan Ewen
>Priority: Minor
>
> It seems all classes in jsr305-1.3.9.jar can be found in 
> flink-shaded-hadoop2-1.1.1.jar, too.
> I can exclude these jars for a successful assembly and run when using sbt:
> {code:none}
> libraryDependencies ++= Seq(
>   "com.typesafe.play" %% "play-json" % "2.3.8",
>   "org.apache.flink" %% "flink-scala" % "1.1.1"
> exclude("com.google.code.findbugs", "jsr305"),
>   "org.apache.flink" %% "flink-connector-kafka-0.8" % "1.1.1"
> exclude("com.google.code.findbugs", "jsr305"),
>   "org.apache.flink" %% "flink-streaming-scala" % "1.1.1"
> exclude("com.google.code.findbugs", "jsr305"),
>   "org.apache.flink" %% "flink-clients" % "1.1.1"
> exclude("com.google.code.findbugs", "jsr305"),
>   "joda-time" % "joda-time" % "2.9.4",
>   "org.scalikejdbc" %% "scalikejdbc" % "2.2.7",
>   "mysql" % "mysql-connector-java" % "5.1.15",
>   "io.spray" %% "spray-caching" % "1.3.3"
> )
> {code}
> But I think it might be better to remove jsr305 dependency from Flink.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4409) class conflict between jsr305-1.3.9.jar and flink-shaded-hadoop2-1.1.1.jar

2016-08-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15424235#comment-15424235
 ] 

ASF GitHub Bot commented on FLINK-4409:
---

Github user StephanEwen commented on the issue:

https://github.com/apache/flink/pull/2378
  
@rmetzger If you are okay with this, I'd merge it to 1.2.0 and 1.1.2


> class conflict between jsr305-1.3.9.jar and flink-shaded-hadoop2-1.1.1.jar
> --
>
> Key: FLINK-4409
> URL: https://issues.apache.org/jira/browse/FLINK-4409
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 1.1.0
>Reporter: Renkai Ge
>Assignee: Stephan Ewen
>Priority: Minor
>
> It seems all classes in jsr305-1.3.9.jar can be found in 
> flink-shaded-hadoop2-1.1.1.jar, too.
> I can exclude these jars for a successful assembly and run when using sbt:
> {code:none}
> libraryDependencies ++= Seq(
>   "com.typesafe.play" %% "play-json" % "2.3.8",
>   "org.apache.flink" %% "flink-scala" % "1.1.1"
> exclude("com.google.code.findbugs", "jsr305"),
>   "org.apache.flink" %% "flink-connector-kafka-0.8" % "1.1.1"
> exclude("com.google.code.findbugs", "jsr305"),
>   "org.apache.flink" %% "flink-streaming-scala" % "1.1.1"
> exclude("com.google.code.findbugs", "jsr305"),
>   "org.apache.flink" %% "flink-clients" % "1.1.1"
> exclude("com.google.code.findbugs", "jsr305"),
>   "joda-time" % "joda-time" % "2.9.4",
>   "org.scalikejdbc" %% "scalikejdbc" % "2.2.7",
>   "mysql" % "mysql-connector-java" % "5.1.15",
>   "io.spray" %% "spray-caching" % "1.3.3"
> )
> {code}
> But I think it might be better to remove jsr305 dependency from Flink.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request #2360: [FLINK-4384] [rpc] Add "scheduleRunAsync()" to the...

2016-08-17 Thread StephanEwen
Github user StephanEwen closed the pull request at:

https://github.com/apache/flink/pull/2360


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink issue #2360: [FLINK-4384] [rpc] Add "scheduleRunAsync()" to the RpcEnd...

2016-08-17 Thread StephanEwen
Github user StephanEwen commented on the issue:

https://github.com/apache/flink/pull/2360
  
Manually merged into the `flip-6` branch in 
f74f44bb56e50d1714e935df2980a6c8213faf89


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-4384) Add a "scheduleRunAsync()" feature to the RpcEndpoint

2016-08-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15424237#comment-15424237
 ] 

ASF GitHub Bot commented on FLINK-4384:
---

Github user StephanEwen commented on the issue:

https://github.com/apache/flink/pull/2360
  
Manually merged into the `flip-6` branch in 
f74f44bb56e50d1714e935df2980a6c8213faf89


> Add a "scheduleRunAsync()" feature to the RpcEndpoint
> -
>
> Key: FLINK-4384
> URL: https://issues.apache.org/jira/browse/FLINK-4384
> Project: Flink
>  Issue Type: Sub-task
>  Components: Distributed Coordination
> Environment: FLIP-6 feature branch
>Reporter: Stephan Ewen
>Assignee: Stephan Ewen
> Fix For: 1.2.0
>
>
> It is a common pattern to schedule a call to be executed in the future. 
> Examples are
>   - delays in retries
>   - heartbeats
>   - checking for heartbeat timeouts
> I suggest adding a {{scheduleRunAsync()}} method to the {{RpcEndpoint}}.
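
Taking the heartbeat-timeout example, a possible usage sketch (the method
signature and all names here are assumptions for illustration, not taken
from the PR):

{code:java}
// Hypothetical endpoint scheduling a heartbeat-timeout check; the stand-in
// scheduleRunAsync at the bottom only makes the sketch self-contained.
class HeartbeatingEndpoint /* extends RpcEndpoint */ {

    private volatile long lastHeartbeat = System.currentTimeMillis();
    private final long heartbeatTimeoutMillis = 10_000; // illustrative value

    void startTimeoutCheck() {
        scheduleRunAsync(new Runnable() {
            @Override
            public void run() {
                if (System.currentTimeMillis() - lastHeartbeat > heartbeatTimeoutMillis) {
                    // react to the timeout, then possibly re-schedule the check
                }
            }
        }, heartbeatTimeoutMillis);
    }

    // Stand-in with an assumed (Runnable, delayMillis) shape.
    void scheduleRunAsync(Runnable runnable, long delayMillis) {}
}
{code}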



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4384) Add a "scheduleRunAsync()" feature to the RpcEndpoint

2016-08-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15424238#comment-15424238
 ] 

ASF GitHub Bot commented on FLINK-4384:
---

Github user StephanEwen closed the pull request at:

https://github.com/apache/flink/pull/2360


> Add a "scheduleRunAsync()" feature to the RpcEndpoint
> -
>
> Key: FLINK-4384
> URL: https://issues.apache.org/jira/browse/FLINK-4384
> Project: Flink
>  Issue Type: Sub-task
>  Components: Distributed Coordination
> Environment: FLIP-6 feature branch
>Reporter: Stephan Ewen
>Assignee: Stephan Ewen
> Fix For: 1.2.0
>
>
> It is a common pattern to schedule a call to be executed in the future. 
> Examples are
>   - delays in retries
>   - heartbeats
>   - checking for heartbeat timeouts
> I suggest adding a {{scheduleRunAsync()}} method to the {{RpcEndpoint}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink issue #2216: [FLINK-4173] Use shade-plugin in flink-metrics

2016-08-17 Thread StephanEwen
Github user StephanEwen commented on the issue:

https://github.com/apache/flink/pull/2216
  
Could you recap why we need to build fat jars here at all? Does simply 
having the transitive dependencies not work?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-4173) Replace maven-assembly-plugin by maven-shade-plugin in flink-metrics

2016-08-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15424240#comment-15424240
 ] 

ASF GitHub Bot commented on FLINK-4173:
---

Github user StephanEwen commented on the issue:

https://github.com/apache/flink/pull/2216
  
Could you recap why we need to build fat jars here at all? Does simply 
having the transitive dependencies not work?


> Replace maven-assembly-plugin by maven-shade-plugin in flink-metrics
> 
>
> Key: FLINK-4173
> URL: https://issues.apache.org/jira/browse/FLINK-4173
> Project: Flink
>  Issue Type: Bug
>  Components: Metrics
>Affects Versions: 1.1.0
>Reporter: Till Rohrmann
>Assignee: Chesnay Schepler
>Priority: Minor
>
> The modules {{flink-metrics-dropwizard}}, {{flink-metrics-ganglia}} and 
> {{flink-metrics-graphite}} use the {{maven-assembly-plugin}} to build a fat 
> jar. The resulting fat jar has the suffix {{jar-with-dependencies}}. In order 
> to make the naming consistent with the rest of the system we should create a 
> fat-jar without this suffix.
> Additionally we could replace the {{maven-assembly-plugin}} with the 
> {{maven-shade-plugin}} to make it consistent with the rest of the system.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink issue #2345: [FLINK-4340] Remove RocksDB Semi-Async Checkpoint Mode

2016-08-17 Thread aljoscha
Github user aljoscha commented on the issue:

https://github.com/apache/flink/pull/2345
  
@StephanEwen That would probably be good. On a side note, time is currently 
calculated in the `CheckpointCoordinator` as the span between triggering the 
checkpoint and receiving the acknowledgement.

This is somewhat confusing because it does not give the actual time it took 
to perform the checkpoint but the time since the whole global process was 
started at the checkpoint coordinator. Maybe the task has to send back some 
meta-data along with the actual checkpoint data.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-4340) Remove RocksDB Semi-Async Checkpoint Mode

2016-08-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15424244#comment-15424244
 ] 

ASF GitHub Bot commented on FLINK-4340:
---

Github user aljoscha commented on the issue:

https://github.com/apache/flink/pull/2345
  
@StephanEwen That would probably be good. On a side note, time is currently 
calculated in the `CheckpointCoordinator` as the span between triggering the 
checkpoint and receiving the acknowledgement.

This is somewhat confusing because it does not give the actual time it took 
to perform the checkpoint but the time since the whole global process was 
started at the checkpoint coordinator. Maybe the task has to send back some 
meta-data along with the actual checkpoint data.


> Remove RocksDB Semi-Async Checkpoint Mode
> -
>
> Key: FLINK-4340
> URL: https://issues.apache.org/jira/browse/FLINK-4340
> Project: Flink
>  Issue Type: Improvement
>  Components: State Backends, Checkpointing
>Affects Versions: 1.1.0
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
>
> This seems to be causing too many problems and is also incompatible with the 
> upcoming key-group/sharding changes that will allow rescaling of keyed state.
> Once this is done we can also close FLINK-4228.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink issue #2345: [FLINK-4340] Remove RocksDB Semi-Async Checkpoint Mode

2016-08-17 Thread StephanEwen
Github user StephanEwen commented on the issue:

https://github.com/apache/flink/pull/2345
  
We can keep calculating the total time in the checkpoint coordinator. That 
way, it includes message roundtrips.

The state handles (or whatever is the successor to them) should have the 
sync/async time components.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-4340) Remove RocksDB Semi-Async Checkpoint Mode

2016-08-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15424247#comment-15424247
 ] 

ASF GitHub Bot commented on FLINK-4340:
---

Github user StephanEwen commented on the issue:

https://github.com/apache/flink/pull/2345
  
We can keep calculating the total time in the checkpoint coordinator. That 
way, it includes message roundtrips.

The state handles (or whatever is the successor to them) should have the 
sync/async time components.


> Remove RocksDB Semi-Async Checkpoint Mode
> -
>
> Key: FLINK-4340
> URL: https://issues.apache.org/jira/browse/FLINK-4340
> Project: Flink
>  Issue Type: Improvement
>  Components: State Backends, Checkpointing
>Affects Versions: 1.1.0
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
>
> This seems to be causing too many problems and is also incompatible with the 
> upcoming key-group/sharding changes that will allow rescaling of keyed state.
> Once this is done we can also close FLINK-4228.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink issue #2302: [hotfix] [metrics] Refactor constructors and tests

2016-08-17 Thread StephanEwen
Github user StephanEwen commented on the issue:

https://github.com/apache/flink/pull/2302
  
Looks good to merge from my side...


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (FLINK-4411) [py] Chained dual input children are not properly propagated

2016-08-17 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-4411:
---

 Summary: [py] Chained dual input children are not properly 
propagated
 Key: FLINK-4411
 URL: https://issues.apache.org/jira/browse/FLINK-4411
 Project: Flink
  Issue Type: Bug
  Components: Python API
Affects Versions: 1.1.0
Reporter: Chesnay Schepler
Assignee: Chesnay Schepler
Priority: Minor
 Fix For: 1.2.0, 1.1.1






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request #2337: [FLINK-3042] [FLINK-3060] [types] Define a way to ...

2016-08-17 Thread StephanEwen
Github user StephanEwen commented on a diff in the pull request:

https://github.com/apache/flink/pull/2337#discussion_r75097072
  
--- Diff: 
flink-core/src/test/java/org/apache/flink/api/java/typeutils/TypeInfoFactoryTest.java
 ---
@@ -0,0 +1,482 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.api.java.typeutils;
+
+import java.lang.reflect.Type;
+import java.util.HashMap;
+import java.util.Map;
+import org.apache.flink.api.common.ExecutionConfig;
+import org.apache.flink.api.common.functions.InvalidTypesException;
+import org.apache.flink.api.common.functions.MapFunction;
+import static 
org.apache.flink.api.common.typeinfo.BasicTypeInfo.BOOLEAN_TYPE_INFO;
+import static 
org.apache.flink.api.common.typeinfo.BasicTypeInfo.DOUBLE_TYPE_INFO;
+import static 
org.apache.flink.api.common.typeinfo.BasicTypeInfo.FLOAT_TYPE_INFO;
+import static 
org.apache.flink.api.common.typeinfo.BasicTypeInfo.INT_TYPE_INFO;
+import static 
org.apache.flink.api.common.typeinfo.BasicTypeInfo.STRING_TYPE_INFO;
+import org.apache.flink.api.common.typeinfo.TypeInfo;
+import org.apache.flink.api.common.typeinfo.TypeInfoFactory;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.api.common.typeutils.TypeSerializer;
+import org.apache.flink.api.java.tuple.Tuple1;
+import org.apache.flink.api.java.tuple.Tuple2;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertTrue;
+import org.junit.Test;
+
+/**
+ * Tests for extracting {@link 
org.apache.flink.api.common.typeinfo.TypeInformation} from types
+ * using a {@link org.apache.flink.api.common.typeinfo.TypeInfoFactory}
+ */
+@SuppressWarnings({"unchecked", "rawtypes"})
+public class TypeInfoFactoryTest {
+
+   @Test
+   public void testSimpleType() {
+   TypeInformation ti = 
TypeExtractor.createTypeInfo(IntLike.class);
--- End diff --

I would prefer to not use raw types unless really necessary. Can't you use 
`TypeInformation<?>`?
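
To make the review point concrete, a small sketch (reusing the names from the
diff above; whether the createTypeInfo overload infers `?` directly here is an
assumption of this sketch):

{code:java}
// Raw type, as flagged -- relies on the class-level rawtypes suppression:
TypeInformation ti = TypeExtractor.createTypeInfo(IntLike.class);

// Wildcard form suggested instead -- same behavior, no raw type involved:
TypeInformation<?> wildTi = TypeExtractor.createTypeInfo(IntLike.class);
{code}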


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #2337: [FLINK-3042] [FLINK-3060] [types] Define a way to ...

2016-08-17 Thread StephanEwen
Github user StephanEwen commented on a diff in the pull request:

https://github.com/apache/flink/pull/2337#discussion_r75097127
  
--- Diff: 
flink-core/src/test/java/org/apache/flink/api/java/typeutils/TypeInfoFactoryTest.java
 ---
@@ -0,0 +1,482 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.api.java.typeutils;
+
+import java.lang.reflect.Type;
+import java.util.HashMap;
+import java.util.Map;
+import org.apache.flink.api.common.ExecutionConfig;
+import org.apache.flink.api.common.functions.InvalidTypesException;
+import org.apache.flink.api.common.functions.MapFunction;
+import static 
org.apache.flink.api.common.typeinfo.BasicTypeInfo.BOOLEAN_TYPE_INFO;
+import static 
org.apache.flink.api.common.typeinfo.BasicTypeInfo.DOUBLE_TYPE_INFO;
+import static 
org.apache.flink.api.common.typeinfo.BasicTypeInfo.FLOAT_TYPE_INFO;
+import static 
org.apache.flink.api.common.typeinfo.BasicTypeInfo.INT_TYPE_INFO;
+import static 
org.apache.flink.api.common.typeinfo.BasicTypeInfo.STRING_TYPE_INFO;
+import org.apache.flink.api.common.typeinfo.TypeInfo;
+import org.apache.flink.api.common.typeinfo.TypeInfoFactory;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.api.common.typeutils.TypeSerializer;
+import org.apache.flink.api.java.tuple.Tuple1;
+import org.apache.flink.api.java.tuple.Tuple2;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertTrue;
+import org.junit.Test;
+
+/**
+ * Tests for extracting {@link 
org.apache.flink.api.common.typeinfo.TypeInformation} from types
+ * using a {@link org.apache.flink.api.common.typeinfo.TypeInfoFactory}
+ */
+@SuppressWarnings({"unchecked", "rawtypes"})
+public class TypeInfoFactoryTest {
+
+   @Test
+   public void testSimpleType() {
+   TypeInformation ti = 
TypeExtractor.createTypeInfo(IntLike.class);
+   assertEquals(INT_TYPE_INFO, ti);
+
+   ti = TypeExtractor.getForClass(IntLike.class);
+   assertEquals(INT_TYPE_INFO, ti);
+
+   ti = TypeExtractor.getForObject(new IntLike());
+   assertEquals(INT_TYPE_INFO, ti);
+   }
+
+   @Test
+   public void testMyEitherGenericType() {
+   MapFunction f = new MyEitherMapper();
+   TypeInformation ti = TypeExtractor.getMapReturnTypes(f, 
(TypeInformation) BOOLEAN_TYPE_INFO);
+   assertTrue(ti instanceof EitherTypeInfo);
+   EitherTypeInfo eti = (EitherTypeInfo) ti;
+   assertEquals(BOOLEAN_TYPE_INFO, eti.getLeftType());
+   assertEquals(STRING_TYPE_INFO, eti.getRightType());
+   }
+
+   @Test
+   public void testMyOptionGenericType() {
+   TypeInformation inTypeInfo = new MyOptionTypeInfo(new 
TupleTypeInfo(BOOLEAN_TYPE_INFO, STRING_TYPE_INFO));
+   MapFunction f = new MyOptionMapper();
+   TypeInformation ti = TypeExtractor.getMapReturnTypes(f, 
inTypeInfo);
+   assertTrue(ti instanceof MyOptionTypeInfo);
+   MyOptionTypeInfo oti = (MyOptionTypeInfo) ti;
+   assertTrue(oti.getInnerType() instanceof TupleTypeInfo);
+   TupleTypeInfo tti = (TupleTypeInfo) oti.getInnerType();
+   assertEquals(BOOLEAN_TYPE_INFO, tti.getTypeAt(0));
+   assertEquals(BOOLEAN_TYPE_INFO, tti.getTypeAt(1));
+   }
+
+   @Test
+   public void testMyTuple2() {
+   TypeInformation inTypeInfo = new TupleTypeInfo(new 
MyTupleTypeInfo(DOUBLE_TYPE_INFO, STRING_TYPE_INFO));
+   MapFunction f = new MyTupleMapperL2();
+   TypeInformation ti = TypeExtractor.getMapReturnTypes(f, 
inTypeInfo);
+   assertTrue(ti instanceof TupleTypeInfo);
+   TupleTypeInfo tti = (TupleTypeInfo) ti;
+   assertTrue(tti.getTypeAt(0) instanceof MyTupleTypeInfo);
+   MyTupleTypeInfo mtti = (MyTupleTypeInfo) tti.getTypeAt(0);
+   

[jira] [Commented] (FLINK-3042) Define a way to let types create their own TypeInformation

2016-08-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15424257#comment-15424257
 ] 

ASF GitHub Bot commented on FLINK-3042:
---

Github user StephanEwen commented on a diff in the pull request:

https://github.com/apache/flink/pull/2337#discussion_r75097127
  
--- Diff: 
flink-core/src/test/java/org/apache/flink/api/java/typeutils/TypeInfoFactoryTest.java
 ---
@@ -0,0 +1,482 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.api.java.typeutils;
+
+import java.lang.reflect.Type;
+import java.util.HashMap;
+import java.util.Map;
+import org.apache.flink.api.common.ExecutionConfig;
+import org.apache.flink.api.common.functions.InvalidTypesException;
+import org.apache.flink.api.common.functions.MapFunction;
+import static 
org.apache.flink.api.common.typeinfo.BasicTypeInfo.BOOLEAN_TYPE_INFO;
+import static 
org.apache.flink.api.common.typeinfo.BasicTypeInfo.DOUBLE_TYPE_INFO;
+import static 
org.apache.flink.api.common.typeinfo.BasicTypeInfo.FLOAT_TYPE_INFO;
+import static 
org.apache.flink.api.common.typeinfo.BasicTypeInfo.INT_TYPE_INFO;
+import static 
org.apache.flink.api.common.typeinfo.BasicTypeInfo.STRING_TYPE_INFO;
+import org.apache.flink.api.common.typeinfo.TypeInfo;
+import org.apache.flink.api.common.typeinfo.TypeInfoFactory;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.api.common.typeutils.TypeSerializer;
+import org.apache.flink.api.java.tuple.Tuple1;
+import org.apache.flink.api.java.tuple.Tuple2;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertTrue;
+import org.junit.Test;
+
+/**
+ * Tests for extracting {@link 
org.apache.flink.api.common.typeinfo.TypeInformation} from types
+ * using a {@link org.apache.flink.api.common.typeinfo.TypeInfoFactory}
+ */
+@SuppressWarnings({"unchecked", "rawtypes"})
+public class TypeInfoFactoryTest {
+
+   @Test
+   public void testSimpleType() {
+   TypeInformation ti = 
TypeExtractor.createTypeInfo(IntLike.class);
+   assertEquals(INT_TYPE_INFO, ti);
+
+   ti = TypeExtractor.getForClass(IntLike.class);
+   assertEquals(INT_TYPE_INFO, ti);
+
+   ti = TypeExtractor.getForObject(new IntLike());
+   assertEquals(INT_TYPE_INFO, ti);
+   }
+
+   @Test
+   public void testMyEitherGenericType() {
+   MapFunction f = new MyEitherMapper();
+   TypeInformation ti = TypeExtractor.getMapReturnTypes(f, 
(TypeInformation) BOOLEAN_TYPE_INFO);
+   assertTrue(ti instanceof EitherTypeInfo);
+   EitherTypeInfo eti = (EitherTypeInfo) ti;
+   assertEquals(BOOLEAN_TYPE_INFO, eti.getLeftType());
+   assertEquals(STRING_TYPE_INFO, eti.getRightType());
+   }
+
+   @Test
+   public void testMyOptionGenericType() {
+   TypeInformation inTypeInfo = new MyOptionTypeInfo(new 
TupleTypeInfo(BOOLEAN_TYPE_INFO, STRING_TYPE_INFO));
+   MapFunction f = new MyOptionMapper();
+   TypeInformation ti = TypeExtractor.getMapReturnTypes(f, 
inTypeInfo);
+   assertTrue(ti instanceof MyOptionTypeInfo);
+   MyOptionTypeInfo oti = (MyOptionTypeInfo) ti;
+   assertTrue(oti.getInnerType() instanceof TupleTypeInfo);
+   TupleTypeInfo tti = (TupleTypeInfo) oti.getInnerType();
+   assertEquals(BOOLEAN_TYPE_INFO, tti.getTypeAt(0));
+   assertEquals(BOOLEAN_TYPE_INFO, tti.getTypeAt(1));
+   }
+
+   @Test
+   public void testMyTuple2() {
+   TypeInformation inTypeInfo = new TupleTypeInfo(new 
MyTupleTypeInfo(DOUBLE_TYPE_INFO, STRING_TYPE_INFO));
+   MapFunction f = new MyTupleMapperL2();
+   TypeInformation ti = TypeExtractor.getMapReturnTypes(f, 
inTypeInfo);
+   assertTrue(ti instanceof TupleTypeInfo);

[jira] [Commented] (FLINK-3042) Define a way to let types create their own TypeInformation

2016-08-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15424256#comment-15424256
 ] 

ASF GitHub Bot commented on FLINK-3042:
---

Github user StephanEwen commented on a diff in the pull request:

https://github.com/apache/flink/pull/2337#discussion_r75097072
  
--- Diff: 
flink-core/src/test/java/org/apache/flink/api/java/typeutils/TypeInfoFactoryTest.java
 ---
@@ -0,0 +1,482 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.api.java.typeutils;
+
+import java.lang.reflect.Type;
+import java.util.HashMap;
+import java.util.Map;
+import org.apache.flink.api.common.ExecutionConfig;
+import org.apache.flink.api.common.functions.InvalidTypesException;
+import org.apache.flink.api.common.functions.MapFunction;
+import static 
org.apache.flink.api.common.typeinfo.BasicTypeInfo.BOOLEAN_TYPE_INFO;
+import static 
org.apache.flink.api.common.typeinfo.BasicTypeInfo.DOUBLE_TYPE_INFO;
+import static 
org.apache.flink.api.common.typeinfo.BasicTypeInfo.FLOAT_TYPE_INFO;
+import static 
org.apache.flink.api.common.typeinfo.BasicTypeInfo.INT_TYPE_INFO;
+import static 
org.apache.flink.api.common.typeinfo.BasicTypeInfo.STRING_TYPE_INFO;
+import org.apache.flink.api.common.typeinfo.TypeInfo;
+import org.apache.flink.api.common.typeinfo.TypeInfoFactory;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.api.common.typeutils.TypeSerializer;
+import org.apache.flink.api.java.tuple.Tuple1;
+import org.apache.flink.api.java.tuple.Tuple2;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertTrue;
+import org.junit.Test;
+
+/**
+ * Tests for extracting {@link 
org.apache.flink.api.common.typeinfo.TypeInformation} from types
+ * using a {@link org.apache.flink.api.common.typeinfo.TypeInfoFactory}
+ */
+@SuppressWarnings({"unchecked", "rawtypes"})
+public class TypeInfoFactoryTest {
+
+   @Test
+   public void testSimpleType() {
+   TypeInformation ti = 
TypeExtractor.createTypeInfo(IntLike.class);
--- End diff --

I would prefer not to use raw types unless really necessary. Can't you use 
`TypeInformation<?>`?
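
For instance, the extraction in the test could read as follows (illustrative only; {{IntLike}} is the annotated type from the diff above):

{code:java}
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.java.typeutils.TypeExtractor;
import org.junit.Test;

import static org.apache.flink.api.common.typeinfo.BasicTypeInfo.INT_TYPE_INFO;
import static org.junit.Assert.assertEquals;

public class WildcardExample {
	@Test
	public void testSimpleType() {
		// A wildcard keeps the declaration type-safe without
		// committing to a concrete type parameter.
		TypeInformation<?> ti = TypeExtractor.createTypeInfo(IntLike.class);
		assertEquals(INT_TYPE_INFO, ti);
	}
}
{code}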


> Define a way to let types create their own TypeInformation
> --
>
> Key: FLINK-3042
> URL: https://issues.apache.org/jira/browse/FLINK-3042
> Project: Flink
>  Issue Type: Improvement
>  Components: Core
>Affects Versions: 0.10.0
>Reporter: Stephan Ewen
>Assignee: Timo Walther
> Fix For: 1.0.0
>
>
> Currently, introducing new Types that should have specific TypeInformation 
> requires
>   - Either integration with the TypeExtractor
>   - Or manually constructing the TypeInformation (potentially at every place) 
> and using type hints everywhere.
> I propose to add a way to allow classes to create their own TypeInformation 
> (like a static method "createTypeInfo()").
> To support generic nested types (like Optional / Either), the type extractor 
> would provide a Map of what generic variables map to what types (deduced from 
> the input). The class can use that to create the correct nested 
> TypeInformation (possibly by calling the TypeExtractor again, passing the Map 
> of generic bindings).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
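
To make the proposal above concrete, here is a minimal sketch of the factory-based approach. The {{@TypeInfo}} annotation and {{TypeInfoFactory}} base class appear in the imports of the test under review; the exact signature of {{createTypeInfo}} is inferred from the description and should be treated as an assumption:

{code:java}
import java.lang.reflect.Type;
import java.util.Map;

import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
import org.apache.flink.api.common.typeinfo.TypeInfo;
import org.apache.flink.api.common.typeinfo.TypeInfoFactory;
import org.apache.flink.api.common.typeinfo.TypeInformation;

// A user type that declares its own factory via the annotation.
@TypeInfo(IntLikeFactory.class)
public class IntLike {
}

// The factory receives the reflected type plus the map of generic
// variable bindings and returns the matching TypeInformation.
class IntLikeFactory extends TypeInfoFactory<IntLike> {
	@Override
	@SuppressWarnings("unchecked")
	public TypeInformation<IntLike> createTypeInfo(
			Type t, Map<String, TypeInformation<?>> genericParameters) {
		// IntLike behaves like a plain int in the tests above, so the
		// basic int type info is reused (unchecked cast for illustration).
		return (TypeInformation<IntLike>) (TypeInformation<?>) BasicTypeInfo.INT_TYPE_INFO;
	}
}
{code}

With that in place, {{TypeExtractor.createTypeInfo(IntLike.class)}} resolves to {{INT_TYPE_INFO}} without manual type hints, which is exactly what the {{testSimpleType}} case in the diff asserts.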


[jira] [Assigned] (FLINK-4363) Implement TaskManager basic startup of all components in java

2016-08-17 Thread Zhijiang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhijiang Wang reassigned FLINK-4363:


Assignee: Zhijiang Wang

> Implement TaskManager basic startup of all components in java
> -
>
> Key: FLINK-4363
> URL: https://issues.apache.org/jira/browse/FLINK-4363
> Project: Flink
>  Issue Type: Sub-task
>  Components: Cluster Management
>Reporter: Zhijiang Wang
>Assignee: Zhijiang Wang
>
> Similar with current {{TaskManager}},but implement initialization and startup 
> all components in java instead of scala.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (FLINK-4407) Implement the trigger DSL

2016-08-17 Thread Aljoscha Krettek (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek updated FLINK-4407:

Issue Type: Sub-task  (was: Bug)
Parent: FLINK-3643

> Implement the trigger DSL
> -
>
> Key: FLINK-4407
> URL: https://issues.apache.org/jira/browse/FLINK-4407
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Kostas Kloudas
>Assignee: Kostas Kloudas
>
> This issue refers to the implementation of the trigger DSL.
> The specification of the DSL has an open FLIP here:
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-9%3A+Trigger+DSL
> and is currently under discussion on the dev@ mailing list here:
> http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-FLIP-9-Trigger-DSL-td13065.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request #2337: [FLINK-3042] [FLINK-3060] [types] Define a way to ...

2016-08-17 Thread StephanEwen
Github user StephanEwen commented on a diff in the pull request:

https://github.com/apache/flink/pull/2337#discussion_r75097330
  
--- Diff: 
flink-core/src/test/java/org/apache/flink/api/java/typeutils/TypeInfoFactoryTest.java
 ---
@@ -0,0 +1,482 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.api.java.typeutils;
+
+import java.lang.reflect.Type;
+import java.util.HashMap;
+import java.util.Map;
+import org.apache.flink.api.common.ExecutionConfig;
+import org.apache.flink.api.common.functions.InvalidTypesException;
+import org.apache.flink.api.common.functions.MapFunction;
+import static 
org.apache.flink.api.common.typeinfo.BasicTypeInfo.BOOLEAN_TYPE_INFO;
--- End diff --

Minor nitpick: Most code styles (though not enforced here) put all static 
imports below the regular imports.
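
Concretely, that convention groups the static imports in their own block after the regular ones, e.g. (illustrative excerpt from the file above):

{code:java}
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.junit.Test;

import static org.apache.flink.api.common.typeinfo.BasicTypeInfo.BOOLEAN_TYPE_INFO;
import static org.junit.Assert.assertEquals;
{code}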


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-3042) Define a way to let types create their own TypeInformation

2016-08-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15424260#comment-15424260
 ] 

ASF GitHub Bot commented on FLINK-3042:
---

Github user StephanEwen commented on a diff in the pull request:

https://github.com/apache/flink/pull/2337#discussion_r75097330
  
--- Diff: 
flink-core/src/test/java/org/apache/flink/api/java/typeutils/TypeInfoFactoryTest.java
 ---
@@ -0,0 +1,482 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.api.java.typeutils;
+
+import java.lang.reflect.Type;
+import java.util.HashMap;
+import java.util.Map;
+import org.apache.flink.api.common.ExecutionConfig;
+import org.apache.flink.api.common.functions.InvalidTypesException;
+import org.apache.flink.api.common.functions.MapFunction;
+import static 
org.apache.flink.api.common.typeinfo.BasicTypeInfo.BOOLEAN_TYPE_INFO;
--- End diff --

Minor nitpick: Most code styles (though not enforced here) put all static 
imports below the regular imports.


> Define a way to let types create their own TypeInformation
> --
>
> Key: FLINK-3042
> URL: https://issues.apache.org/jira/browse/FLINK-3042
> Project: Flink
>  Issue Type: Improvement
>  Components: Core
>Affects Versions: 0.10.0
>Reporter: Stephan Ewen
>Assignee: Timo Walther
> Fix For: 1.0.0
>
>
> Currently, introducing new Types that should have specific TypeInformation 
> requires
>   - Either integration with the TypeExtractor
>   - Or manually constructing the TypeInformation (potentially at every place) 
> and using type hints everywhere.
> I propose to add a way to allow classes to create their own TypeInformation 
> (like a static method "createTypeInfo()").
> To support generic nested types (like Optional / Either), the type extractor 
> would provide a Map of what generic variables map to what types (deduced from 
> the input). The class can use that to create the correct nested 
> TypeInformation (possibly by calling the TypeExtractor again, passing the Map 
> of generic bindings).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink issue #2216: [FLINK-4173] Use shade-plugin in flink-metrics

2016-08-17 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/2216
  
It would work, but it is more work for the user and more prone to version 
mismatches. Not using a fat jar means that users have to provide the 
dependencies and put them in the `/libs` folder themselves. For Ganglia, for 
example, this would mean copying 5 jars. And there is no guarantee that the 
version against which the reporter was compiled is identical to the one 
provided at runtime.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-4173) Replace maven-assembly-plugin by maven-shade-plugin in flink-metrics

2016-08-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15424263#comment-15424263
 ] 

ASF GitHub Bot commented on FLINK-4173:
---

Github user zentol commented on the issue:

https://github.com/apache/flink/pull/2216
  
It would work, but it is more work for the user and more prone to version 
mismatches. Not using a fat jar means that users have to provide the 
dependencies and put them in the `/libs` folder themselves. For Ganglia, for 
example, this would mean copying 5 jars. And there is no guarantee that the 
version against which the reporter was compiled is identical to the one 
provided at runtime.


> Replace maven-assembly-plugin by maven-shade-plugin in flink-metrics
> 
>
> Key: FLINK-4173
> URL: https://issues.apache.org/jira/browse/FLINK-4173
> Project: Flink
>  Issue Type: Bug
>  Components: Metrics
>Affects Versions: 1.1.0
>Reporter: Till Rohrmann
>Assignee: Chesnay Schepler
>Priority: Minor
>
> The modules {{flink-metrics-dropwizard}}, {{flink-metrics-ganglia}} and 
> {{flink-metrics-graphite}} use the {{maven-assembly-plugin}} to build a fat 
> jar. The resulting fat jar has the suffix {{jar-with-dependencies}}. In order 
> to make the naming consistent with the rest of the system, we should create a 
> fat-jar without this suffix.
> Additionally we could replace the {{maven-assembly-plugin}} with the 
> {{maven-shade-plugin}} to make it consistent with the rest of the system.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink issue #2337: [FLINK-3042] [FLINK-3060] [types] Define a way to let typ...

2016-08-17 Thread StephanEwen
Github user StephanEwen commented on the issue:

https://github.com/apache/flink/pull/2337
  
Looks quite good all in all, especially since we had quite thorough tests 
in the type extractor before and they still pass.

The only thing this needs, in my opinion, is a few more Scala API tests, 
especially around "case classes" and generic case classes, to be sure the 
interoperability works well there, too.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink issue #2300: [FLINK-4245] [metrics] Expose all defined variables

2016-08-17 Thread StephanEwen
Github user StephanEwen commented on the issue:

https://github.com/apache/flink/pull/2300
  
I am taking a look at this now...


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---



[jira] [Commented] (FLINK-4245) Metric naming improvements

2016-08-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15424266#comment-15424266
 ] 

ASF GitHub Bot commented on FLINK-4245:
---

Github user StephanEwen commented on the issue:

https://github.com/apache/flink/pull/2300
  
I am taking a look at this now...


> Metric naming improvements
> --
>
> Key: FLINK-4245
> URL: https://issues.apache.org/jira/browse/FLINK-4245
> Project: Flink
>  Issue Type: Improvement
>  Components: Metrics
>Reporter: Stephan Ewen
>
> A metric currently has two parts to it:
>   - The name of that particular metric
>   - The "scope" (or namespace), defined by the group that contains the metric.
> A metric group actually always implicitly has a map of naming "tags", like:
>   - taskmanager_host : <host>
>   - taskmanager_id : <id>
>   - task_name : "map() -> filter()"
> We derive the scope from that map, following the defined scope formats.
> For JMX (and some users who use JMX), it would be natural to expose that map 
> of tags; some users currently reconstruct it by parsing the metric scope. In 
> JMX, we can expose a metric like:
>   - domain: "taskmanager.task.operator.io"
>   - name: "numRecordsIn"
>   - tags: { "hostname" -> "localhost", "operator_name" -> "map() at 
> X.java:123", ... }
> For many other reporters, the formatted scope makes a lot of sense, since 
> they think only in terms of (scope, metric-name).
> We may even have the formatted scope in JMX as well (in the domain), if we 
> want to go that route. 
> [~jgrier] and [~Zentol] - what do you think about that?
> [~mdaxini] Does that match your use of the metrics?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
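
As a rough illustration of the JMX shape sketched above (a standalone example; the domain, metric name, and tag values are taken from the description, and this is not the actual reporter implementation):

{code:java}
import java.util.Hashtable;
import javax.management.MalformedObjectNameException;
import javax.management.ObjectName;

public class JmxNamingExample {
	public static void main(String[] args) throws MalformedObjectNameException {
		// The tag map that a metric group implicitly carries.
		Hashtable<String, String> tags = new Hashtable<>();
		tags.put("hostname", "localhost");
		// Values with special characters (here ':') must be quoted for JMX.
		tags.put("operator_name", ObjectName.quote("map() at X.java:123"));

		// The domain encodes the group hierarchy; key properties carry the tags.
		ObjectName name = new ObjectName("taskmanager.task.operator.io", tags);
		System.out.println(name.getDomain());          // taskmanager.task.operator.io
		System.out.println(name.getKeyPropertyList()); // {hostname=localhost, ...}
	}
}
{code}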


[jira] [Commented] (FLINK-3042) Define a way to let types create their own TypeInformation

2016-08-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15424265#comment-15424265
 ] 

ASF GitHub Bot commented on FLINK-3042:
---

Github user StephanEwen commented on the issue:

https://github.com/apache/flink/pull/2337
  
Looks quite good all in all, especially since we had quite thorough tests 
in the type extractor before and they still pass.

The only thing this needs, in my opinion, is a few more Scala API tests, 
especially around "case classes" and generic case classes, to be sure the 
interoperability works well there, too.


> Define a way to let types create their own TypeInformation
> --
>
> Key: FLINK-3042
> URL: https://issues.apache.org/jira/browse/FLINK-3042
> Project: Flink
>  Issue Type: Improvement
>  Components: Core
>Affects Versions: 0.10.0
>Reporter: Stephan Ewen
>Assignee: Timo Walther
> Fix For: 1.0.0
>
>
> Currently, introducing new Types that should have specific TypeInformation 
> requires
>   - Either integration with the TypeExtractor
>   - Or manually constructing the TypeInformation (potentially at every place) 
> and using type hints everywhere.
> I propose to add a way to allow classes to create their own TypeInformation 
> (like a static method "createTypeInfo()").
> To support generic nested types (like Optional / Either), the type extractor 
> would provide a Map of what generic variables map to what types (deduced from 
> the input). The class can use that to create the correct nested 
> TypeInformation (possibly by calling the TypeExtractor again, passing the Map 
> of generic bindings).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink issue #2353: [FLINK-4355] [cluster management] Implement TaskManager s...

2016-08-17 Thread StephanEwen
Github user StephanEwen commented on the issue:

https://github.com/apache/flink/pull/2353
  
Manually merged into the `flip-6` feature branch in 
https://github.com/apache/flink/commit/68addf39e0e5f9e1656818f923be362680ed93b0


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #2353: [FLINK-4355] [cluster management] Implement TaskMa...

2016-08-17 Thread StephanEwen
Github user StephanEwen closed the pull request at:

https://github.com/apache/flink/pull/2353


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-4355) Implement TaskManager side of registration at ResourceManager

2016-08-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15424272#comment-15424272
 ] 

ASF GitHub Bot commented on FLINK-4355:
---

Github user StephanEwen commented on the issue:

https://github.com/apache/flink/pull/2353
  
Manually merged into the `flip-6` feature branch in 
https://github.com/apache/flink/commit/68addf39e0e5f9e1656818f923be362680ed93b0


> Implement TaskManager side of registration at ResourceManager
> -
>
> Key: FLINK-4355
> URL: https://issues.apache.org/jira/browse/FLINK-4355
> Project: Flink
>  Issue Type: Sub-task
>  Components: Cluster Management
>Reporter: Zhijiang Wang
>Assignee: Stephan Ewen
>
> If the {{TaskManager}} is unregistered, it should try to register with the 
> {{ResourceManager}} leader. The registration messages are fenced via the 
> {{RmLeaderID}}.
> The ResourceManager may acknowledge the registration (or respond that the 
> TaskManager is AlreadyRegistered) or refuse the registration.
> Upon registration refusal, the TaskManager may have to kill itself.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
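
A schematic of that handshake (the type and method names below are placeholders, not the actual flip-6 interfaces):

{code:java}
import java.util.UUID;

// Placeholder enum for the three possible ResourceManager responses.
enum RegistrationOutcome { ACKNOWLEDGED, ALREADY_REGISTERED, REFUSED }

public class TaskManagerRegistrationSketch {

	// The leader session id fences the exchange: replies carrying a
	// stale id can be discarded by the TaskManager.
	RegistrationOutcome register(UUID rmLeaderId, String taskManagerAddress) {
		// send RegisterTaskManager(rmLeaderId, taskManagerAddress) to the
		// ResourceManager leader and await its decision (stubbed here)
		return RegistrationOutcome.ACKNOWLEDGED;
	}

	void handle(RegistrationOutcome outcome) {
		switch (outcome) {
			case ACKNOWLEDGED:
			case ALREADY_REGISTERED:
				// proceed as a registered TaskManager
				break;
			case REFUSED:
				// upon refusal, the TaskManager may have to kill itself
				System.exit(1);
				break;
		}
	}
}
{code}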


[jira] [Commented] (FLINK-4355) Implement TaskManager side of registration at ResourceManager

2016-08-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15424271#comment-15424271
 ] 

ASF GitHub Bot commented on FLINK-4355:
---

Github user StephanEwen closed the pull request at:

https://github.com/apache/flink/pull/2353


> Implement TaskManager side of registration at ResourceManager
> -
>
> Key: FLINK-4355
> URL: https://issues.apache.org/jira/browse/FLINK-4355
> Project: Flink
>  Issue Type: Sub-task
>  Components: Cluster Management
>Reporter: Zhijiang Wang
>Assignee: Stephan Ewen
>
> If the {{TaskManager}} is unregistered, it should try to register with the 
> {{ResourceManager}} leader. The registration messages are fenced via the 
> {{RmLeaderID}}.
> The ResourceManager may acknowledge the registration (or respond that the 
> TaskManager is AlreadyRegistered) or refuse the registration.
> Upon registration refusal, the TaskManager may have to kill itself.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (FLINK-4411) [py] Chained dual input children are not properly propagated

2016-08-17 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-4411:

Fix Version/s: (was: 1.1.1)
   1.1.2

> [py] Chained dual input children are not properly propagated
> 
>
> Key: FLINK-4411
> URL: https://issues.apache.org/jira/browse/FLINK-4411
> Project: Flink
>  Issue Type: Bug
>  Components: Python API
>Affects Versions: 1.1.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Minor
> Fix For: 1.2.0, 1.1.2
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-4412) [py] Chaining does not properly handle broadcast variables

2016-08-17 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-4412:
---

 Summary: [py] Chaining does not properly handle broadcast variables
 Key: FLINK-4412
 URL: https://issues.apache.org/jira/browse/FLINK-4412
 Project: Flink
  Issue Type: Bug
  Components: Python API
Affects Versions: 1.1.1
Reporter: Chesnay Schepler
Assignee: Chesnay Schepler
 Fix For: 1.2.0, 1.1.2






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink issue #2378: [FLINK-4409] [build] Exclude JSR 305 from Hadoop dependen...

2016-08-17 Thread rmetzger
Github user rmetzger commented on the issue:

https://github.com/apache/flink/pull/2378
  
+1 to merge


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-4409) class conflict between jsr305-1.3.9.jar and flink-shaded-hadoop2-1.1.1.jar

2016-08-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15424349#comment-15424349
 ] 

ASF GitHub Bot commented on FLINK-4409:
---

Github user rmetzger commented on the issue:

https://github.com/apache/flink/pull/2378
  
+1 to merge


> class conflict between jsr305-1.3.9.jar and flink-shaded-hadoop2-1.1.1.jar
> --
>
> Key: FLINK-4409
> URL: https://issues.apache.org/jira/browse/FLINK-4409
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 1.1.0
>Reporter: Renkai Ge
>Assignee: Stephan Ewen
>Priority: Minor
>
> It seems all classes in jsr305-1.3.9.jar can be found in 
> flink-shaded-hadoop2-1.1.1.jar, too.
> I could exclude these jars to get a successful assembly and run when using sbt:
> {code:none}
> libraryDependencies ++= Seq(
>   "com.typesafe.play" %% "play-json" % "2.3.8",
>   "org.apache.flink" %% "flink-scala" % "1.1.1"
> exclude("com.google.code.findbugs", "jsr305"),
>   "org.apache.flink" %% "flink-connector-kafka-0.8" % "1.1.1"
> exclude("com.google.code.findbugs", "jsr305"),
>   "org.apache.flink" %% "flink-streaming-scala" % "1.1.1"
> exclude("com.google.code.findbugs", "jsr305"),
>   "org.apache.flink" %% "flink-clients" % "1.1.1"
> exclude("com.google.code.findbugs", "jsr305"),
>   "joda-time" % "joda-time" % "2.9.4",
>   "org.scalikejdbc" %% "scalikejdbc" % "2.2.7",
>   "mysql" % "mysql-connector-java" % "5.1.15",
>   "io.spray" %% "spray-caching" % "1.3.3"
> )
> {code}
> But I think it might be better to remove jsr305 dependency from Flink.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (FLINK-4411) [py] Chained dual input children are not properly propagated

2016-08-17 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-4411.
---
Resolution: Fixed

Fixed in 1050847787b416399a6c03c0568969df93ed4822 and 
aa50e226c318f6f52ffdfad3bab4bd8b926ba410.

> [py] Chained dual input children are not properly propagated
> 
>
> Key: FLINK-4411
> URL: https://issues.apache.org/jira/browse/FLINK-4411
> Project: Flink
>  Issue Type: Bug
>  Components: Python API
>Affects Versions: 1.1.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Minor
> Fix For: 1.2.0, 1.1.2
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (FLINK-4412) [py] Chaining does not properly handle broadcast variables

2016-08-17 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-4412.
---
Resolution: Fixed

Fixed in 84d28ba00f3e63a83132c1666c8dc8deec7800ba and 
445eaa986dca59b12458254c31624474b3bac632.

> [py] Chaining does not properly handle broadcast variables
> --
>
> Key: FLINK-4412
> URL: https://issues.apache.org/jira/browse/FLINK-4412
> Project: Flink
>  Issue Type: Bug
>  Components: Python API
>Affects Versions: 1.1.1
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
> Fix For: 1.2.0, 1.1.2
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink issue #2302: [hotfix] [metrics] Refactor constructors and tests

2016-08-17 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/2302
  
merging


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink issue #2349: [FLINK-4365] [metrics] Add documentation to MetricConfig

2016-08-17 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/2349
  
merging


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-4365) [metrics] MetricConfig has no documentation

2016-08-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15424379#comment-15424379
 ] 

ASF GitHub Bot commented on FLINK-4365:
---

Github user zentol commented on the issue:

https://github.com/apache/flink/pull/2349
  
merging


> [metrics] MetricConfig has no documentation
> ---
>
> Key: FLINK-4365
> URL: https://issues.apache.org/jira/browse/FLINK-4365
> Project: Flink
>  Issue Type: Improvement
>  Components: Metrics
>Affects Versions: 1.1.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Trivial
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request #2302: [hotfix] [metrics] Refactor constructors and tests

2016-08-17 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/2302


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (FLINK-4302) Add JavaDocs to MetricConfig

2016-08-17 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-4302:

Fix Version/s: 1.1.2
   1.2.0

> Add JavaDocs to MetricConfig
> 
>
> Key: FLINK-4302
> URL: https://issues.apache.org/jira/browse/FLINK-4302
> Project: Flink
>  Issue Type: Improvement
>  Components: Metrics
>Affects Versions: 1.1.0
>Reporter: Ufuk Celebi
>Assignee: Chesnay Schepler
> Fix For: 1.2.0, 1.1.2
>
>
> {{MetricConfig}} has no comments at all. If you want to implement a custom 
> reporter, you have to implement its {{open}} method, which takes a 
> {{MetricConfig}} as its argument. It would be helpful to add a class-level 
> JavaDoc stating where the config values come from, etc.
> [~Zentol] what do you think?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
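
For reference, a minimal reporter skeleton showing where {{MetricConfig}} enters the picture (the config keys {{host}} and {{port}} are made-up examples; a real reporter defines its own keys):

{code:java}
import org.apache.flink.metrics.Metric;
import org.apache.flink.metrics.MetricConfig;
import org.apache.flink.metrics.MetricGroup;
import org.apache.flink.metrics.reporter.MetricReporter;

public class MyReporter implements MetricReporter {

	private String host;
	private int port;

	@Override
	public void open(MetricConfig config) {
		// MetricConfig is populated from the reporter's configuration
		// entries in flink-conf.yaml, hence the need for a JavaDoc saying so.
		this.host = config.getString("host", "localhost");
		this.port = config.getInteger("port", 9999);
	}

	@Override
	public void close() {}

	@Override
	public void notifyOfAddedMetric(Metric metric, String metricName, MetricGroup group) {}

	@Override
	public void notifyOfRemovedMetric(Metric metric, String metricName, MetricGroup group) {}
}
{code}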


[jira] [Commented] (FLINK-4365) [metrics] MetricConfig has no documentation

2016-08-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15424384#comment-15424384
 ] 

ASF GitHub Bot commented on FLINK-4365:
---

Github user zentol closed the pull request at:

https://github.com/apache/flink/pull/2349


> [metrics] MetricConfig has no documentation
> ---
>
> Key: FLINK-4365
> URL: https://issues.apache.org/jira/browse/FLINK-4365
> Project: Flink
>  Issue Type: Improvement
>  Components: Metrics
>Affects Versions: 1.1.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Trivial
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request #2349: [FLINK-4365] [metrics] Add documentation to Metric...

2016-08-17 Thread zentol
Github user zentol closed the pull request at:

https://github.com/apache/flink/pull/2349


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Closed] (FLINK-4302) Add JavaDocs to MetricConfig

2016-08-17 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-4302.
---
Resolution: Fixed

Fixed in c5a9a3eb0061014fe59086505130596e924db7f9 and 
cbce10d100a19247476035ed53ea8196d19051f5

> Add JavaDocs to MetricConfig
> 
>
> Key: FLINK-4302
> URL: https://issues.apache.org/jira/browse/FLINK-4302
> Project: Flink
>  Issue Type: Improvement
>  Components: Metrics
>Affects Versions: 1.1.0
>Reporter: Ufuk Celebi
>Assignee: Chesnay Schepler
> Fix For: 1.2.0, 1.1.2
>
>
> {{MetricConfig}} has no comments at all. If you want to implement a custom 
> reporter, you have to implement its {{open}} method, which takes a 
> {{MetricConfig}} as its argument. It would be helpful to add a class-level 
> JavaDoc stating where the config values come from, etc.
> [~Zentol] what do you think?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request #2315: [FLINK-1984] Integrate Flink with Apache Mesos (T1...

2016-08-17 Thread StephanEwen
Github user StephanEwen commented on a diff in the pull request:

https://github.com/apache/flink/pull/2315#discussion_r75109625
  
--- Diff: flink-dist/pom.xml ---
@@ -113,8 +113,13 @@ under the License.
flink-metrics-jmx
${project.version}

+
+   
+   org.apache.flink
+   flink-mesos_2.10
+   ${project.version}
+   
--- End diff --

We only do it for yarn because Hadoop 2.3 cannot properly support yarn, 
while we still support Hadoop 2.3. Once we drop Hadoop 2.3 support, we can 
remove that profile.

I would also always build mesos.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

