Hi Ankit,
It looks like your Flink client does not pick up the proper Hadoop
configuration. In my prototype I fixed this problem by starting the
Flink client through the "hadoop jar" command line interface. However,
as Robert pointed out, the code still needs to be merged with the master
branch an
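For anyone hitting the same thing, a quick way to verify what the client actually sees; a minimal sketch assuming only the standard Hadoop client API, nothing Flink-specific:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;

    public class ConfCheck {
        public static void main(String[] args) throws Exception {
            // A fresh Configuration loads core-site.xml / hdfs-site.xml from
            // the classpath; "file:///" here means the config was not found.
            Configuration conf = new Configuration();
            System.out.println(conf.get("fs.defaultFS"));
            System.out.println(FileSystem.get(conf).getUri());
        }
    }

Starting the client via "hadoop jar" puts the Hadoop configuration directory on the classpath, which is why the workaround helps.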
@Till: The default timeouts are high enough that such a timeout should
actually not occur, right? Increasing the timeouts cannot really be the
issue.
Might it be something different? What happens if there is an error in the
code that produces the input split? Is that properly handled, or is the
re
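To make the question concrete, here is a hypothetical input format whose split creation fails; the interesting part is whether this exception is reported to the client or the submission just times out:

    import java.io.IOException;
    import org.apache.flink.api.common.io.GenericInputFormat;
    import org.apache.flink.core.io.GenericInputSplit;

    // Made-up format that simulates a failure during split creation.
    public class FailingInputFormat extends GenericInputFormat<String> {
        @Override
        public GenericInputSplit[] createInputSplits(int numSplits) throws IOException {
            throw new IOException("simulated failure while creating input splits");
        }
        @Override
        public boolean reachedEnd() { return true; }
        @Override
        public String nextRecord(String reuse) { return null; }
    }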
Shuo Xiang created FLINK-1460:
----------------------------------
Summary: Typo fixes
Key: FLINK-1460
URL: https://issues.apache.org/jira/browse/FLINK-1460
Project: Flink
Issue Type: Improvement
Reporter: Shuo Xiang
Hi Robert,
I tried adding Daniel's changes to the 0.9 version of Flink. So far I haven't
been able to get it working. Still getting the same errors.
Best,
Ankit
On Tuesday, January 27, 2015 2:57 AM, Robert Metzger
wrote:
The code from Daniel was written for the old YARN client. I
Thanks, Robert. I am working for the Ads & Data team at Yahoo, and we were
experimenting with Flink. Without Kerberos, we cannot talk to HDFS. Do you
have a timeline for when Kerberos support will be added to Flink?
Thanks,
Ankit
On Tuesday, January 27, 2015 2:49 AM, Robert Metzger
wrote:
I think that the machines have lost their connection. That is most likely
related to the heartbeat interval of the watch or the transport failure
detector. The transport failure detector should actually be set to a
heartbeat interval of 1000 s, so it should not cause any problems.
Which versi
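For reference, the failure detector intervals can be set in flink-conf.yaml; the key names below are from the 0.9-era configuration docs and the values are the documented defaults, so double-check them for your version:

    akka.transport.heartbeat.interval: 1000 s
    akka.transport.heartbeat.pause: 6000 s
    akka.watch.heartbeat.interval: 10 s
    akka.watch.heartbeat.pause: 60 s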
I see the following line:
11:14:32,603 WARN akka.remote.ReliableDeliverySupervisor
- Association with remote system [akka.tcp://
fl...@cloud-26.dima.tu-berlin.de:51449] has failed, address is now gated
for [5000] ms. Reason is: [Disassociated].
Does that mean that the machines have lost connection?
There is already an ongoing discussion and an issue open about that:
http://apache-flink-incubator-mailing-list-archive.1008284.n3.nabble.com/Gather-a-distributed-dataset-td3216.html
I am sadly currently time-pressed with other things, but if nobody else
handles this, I expect to be able to work
I might add that the error only occurs when running with the RemoteExecutor,
regardless of the number of TMs. Starting the job in IntelliJ with the
LocalExecutor with dop 1 works just fine.
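For reference, the two setups look roughly like this (host, port, and jar path are placeholders):

    import org.apache.flink.api.java.ExecutionEnvironment;

    public class EnvSetup {
        public static void main(String[] args) throws Exception {
            // Works: local execution as in IntelliJ, dop 1.
            ExecutionEnvironment local =
                    ExecutionEnvironment.createLocalEnvironment(1);

            // Fails: remote execution through the RemoteExecutor.
            ExecutionEnvironment remote =
                    ExecutionEnvironment.createRemoteEnvironment(
                            "jobmanager-host", 6123, "target/my-job.jar");
        }
    }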
Best,
Christoph
On 28 Jan 2015, at 12:17, Bruecke, Christoph
wrote:
> Hi Robert,
>
> thanks for the quick response.
fyi
The problem seems to be that samoa-api uses Kryo 2.17 while Flink uses Kryo
2.24.0. All Flink-related tests pass if I upgrade SAMOA to Kryo 2.24.0. You
could also ask on the samoa-incubating dev list whether that change is OK.
Maybe it would be good to test the same version on Storm, Samza, and S4
respectively.
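A sketch of the version override, assuming SAMOA manages dependency versions in a parent pom (the exact module to patch is an assumption; the coordinates are the ones Flink itself uses):

    <!-- Pin Kryo to the version Flink ships with. -->
    <dependencyManagement>
      <dependencies>
        <dependency>
          <groupId>com.esotericsoftware.kryo</groupId>
          <artifactId>kryo</artifactId>
          <version>2.24.0</version>
        </dependency>
      </dependencies>
    </dependencyManagement>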
Hello,
I am currently working on integrating the Flink Streaming API into SAMOA,
and I am running into an exception from the Kryo serializer:
Caused by: java.lang.ArrayIndexOutOfBoundsException
at java.lang.System.arraycopy(Native Method)
at org.apache.flink.core.memory.Memory
Hi Robert,
thanks for the quick response. Here is the jobmanager-main.log:
PS: I’m subscribed now.
11:09:16,144 INFO org.apache.flink.yarn.ApplicationMaster$
- YARN daemon runs as hadoop setting user to execute Flink
ApplicationMaster/JobManager to hadoop
11:09:16,199 INF
Hi,
it seems that you are not subscribed to our mailing list, so I had to
manually accept your mail. It would be good if you could subscribe.
Can you also send us the log output of the JobManager?
If your YARN cluster has log aggregation activated, you can retrieve the
logs of a stopped YARN session
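For example, assuming the standard YARN CLI (the application id is a placeholder):

    yarn logs -applicationId <application id>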
Hi,
I have written a job that reads a SequenceFile from HDFS using the
Hadoop-Compatibility add-on. Doing so results in a TimeoutException. I’m using
flink-0.9-SNAPSHOT with PR 342 ( https://github.com/apache/flink/pull/342 ).
Furthermore, I’m running Flink on YARN with two TMs using
flink-yarn-
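For reference, the read path looks roughly like this; the path and key/value types are placeholders, and the API names follow the 0.9-era Hadoop-Compatibility docs:

    import org.apache.flink.api.java.DataSet;
    import org.apache.flink.api.java.ExecutionEnvironment;
    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.flink.hadoopcompatibility.mapred.HadoopInputFormat;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.FileInputFormat;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.SequenceFileInputFormat;

    public class ReadSequenceFile {
        public static void main(String[] args) throws Exception {
            ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
            // Wrap the mapred SequenceFileInputFormat in Flink's adapter.
            HadoopInputFormat<LongWritable, Text> hadoopIF =
                    new HadoopInputFormat<LongWritable, Text>(
                            new SequenceFileInputFormat<LongWritable, Text>(),
                            LongWritable.class, Text.class, new JobConf());
            FileInputFormat.addInputPath(hadoopIF.getJobConf(),
                    new Path("hdfs:///path/to/sequence-file"));
            DataSet<Tuple2<LongWritable, Text>> input = env.createInput(hadoopIF);
            // ... transformations and a data sink would follow here ...
        }
    }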
John Sandiford created FLINK-1459:
----------------------------------
Summary: Collect DataSet to client
Key: FLINK-1459
URL: https://issues.apache.org/jira/browse/FLINK-1459
Project: Flink
Issue Type: Improvement
I think in Hadoop they use LimitedPrivate for the different components of
the project, for example LimitedPrivate("yarn").
Here is very good documentation on the topic:
https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/InterfaceClassification.html
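As an illustration, using Hadoop's annotation classes (the class name is made up):

    import org.apache.hadoop.classification.InterfaceAudience;
    import org.apache.hadoop.classification.InterfaceStability;

    // Visible only to the named component(s); stability is declared separately.
    @InterfaceAudience.LimitedPrivate({"yarn"})
    @InterfaceStability.Evolving
    public class SomeInternalUtility {
        // ...
    }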
On Tue, Jan 27, 2015 at 3:
Let me clarify my suggestion: Let's put mandatory tags in the second
line of the commit message. That way, they can be filtered using git
log --grep=TAG and do not take away the first line's 80 characters.
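For instance (the tags are made up):

    Add collect() method to DataSet
    [api] [client]

    Longer description of the change goes here.

Such commits could then be listed with git log --grep="\[api\]".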
On Wed, Jan 28, 2015 at 3:37 AM, Henry Saputra wrote:
> Just found out about this, thanks S