Stephan, it is exactly the same exception - UnknownHost, blah blah.
In JBoss, for example, the external IPs are also not working, only 0.0.0.0 -
this is AWS NAT.
We will proceed with VPC and then I will update you on what we get.
Thanks for your help.
On Sun, Aug 30, 2015 at 6:05 PM, Stephan Ewen wrote:
Why are the external IPs not working? Any kind of exception you can share?
On Sun, Aug 30, 2015 at 5:02 PM, Alexey Sapozhnikov wrote:
It will not help, since the internal IPs change in AWS from time to time and
you should use only the public IP, which is not recognizable by Flink.
That's why all app servers, for example JBoss or even Flume, are using
"0.0.0.0".
On Sun, Aug 30, 2015 at 5:53 PM, Stephan Ewen wrote:
What you can do as a temporary workaround is to actually enter the IP
address for "jobmanager.rpc.address" - that circumvents the DNS.
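A minimal sketch of that workaround in conf/flink-conf.yaml (the IP below is
only a placeholder for the JobManager machine's address; 6123 is the default
RPC port):

# conf/flink-conf.yaml - substitute the JobManager's actual IP address
jobmanager.rpc.address: 172.31.10.5
jobmanager.rpc.port: 6123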
Just saw that Akka 2.4 (to be released some time in the near future)
apparently introduces an option to listen on all network interfaces.
On Sun, Aug 30, 2015 at 4:44 P
Fully understand.
1. My suggestion is to drop Akka and take something else, since this issue
is really big.
2. Neither the hostname nor the endpoint is working; we are clarifying the
VPC topic now.
On Sun, Aug 30, 2015 at 5:41 PM, Stephan Ewen wrote:
Not being able to bind to 0.0.0.0 is an Akka issue. It is sometimes
annoying, but I have not found a good way around this.
The problem is that the address to bind to and the address used by others to
send messages to the node are the same. (
https://groups.google.com/forum/#!topic/akka-user/cRZmf8u_v
Hi.
First off - many thanks for your efforts and prompt help.
We will try to find out how to do it with a DNS server on the VPC.
However, the absence of "0.0.0.0" is definitely a huge bug - just think about
the current situation: if I don't have a VPC, I can't invoke the Flink
functionality remotely on Amazon.
We
Weird, the root cause seems to be "java.net.UnknownHostException:
ip-172-36-98: unknown error"
Flink does not do anything more special than
"InetAddress.getByName(hostname)".
Is it that you can either not resolve the hostname "ip-172-36-98" (maybe
add the fully qualified domain name), or is there
From this blog post, it seems that this hostname is not resolvable:
https://holtstrom.com/michael/blog/post/401/Hostname-in-Amazon-Linux.html
Can you easily activate a DNS server in the VPC?
0.0.0.0 is not supported because of some requirements of the Akka framework.
But you should be able to use
Here is the exception from the moment we tried to put the hostname of the
machine, which is ip-172-36-98, into jobmanager.rpc.address -
it looks like it doesn't recognize this address.
Why doesn't it support "0.0.0.0"?
13:43:14,805 INFO org.apache.flink.runtime.jobmanager.JobManager
-
--
Flink uses Akka internally, and Akka requires exact host/IP addresses to
bind to. Maybe that is the crash you see.
Having the exact exception would help.
On Sun, Aug 30, 2015 at 3:57 PM, Robert Metzger wrote:
How is Flink crashing when you start it on the Linux machine in Amazon?
Can you post the exception here?
On Sun, Aug 30, 2015 at 3:48 PM, Alexey Sapozhnikov wrote:
Hello Stephan.
We run this Linux machine on Amazon, which, I predict, most people will do.
We tried to put "0.0.0.0" or the public IP of the machine - Flink crashes on
start; it doesn't recognize itself.
It is very strange that it doesn't work with 0.0.0.0 - basically this is a
way in Java to make
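For reference, a minimal Java sketch of binding to the wildcard address,
i.e. listening on all network interfaces (the port is just an example):

import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.ServerSocket;

public class WildcardBind {
    public static void main(String[] args) throws Exception {
        // 0.0.0.0 is the wildcard address: the socket accepts connections
        // arriving on any network interface of the machine.
        ServerSocket server = new ServerSocket();
        server.bind(new InetSocketAddress(InetAddress.getByName("0.0.0.0"), 6123));
        System.out.println("Listening on " + server.getLocalSocketAddress());
        server.close();
    }
}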
Robert Metzger created FLINK-2598:
-
Summary: NPE when arguments are missing for a "-m yarn-cluster" job
Key: FLINK-2598
URL: https://issues.apache.org/jira/browse/FLINK-2598
Project: Flink
Is
Do you start Flink via YARN? In that case the "jobmanager.rpc.address" is
not used, because YARN assigns containers/nodes.
If you start Flink in "standalone" mode, this should be the address of the
node that runs the JobManager. It will be used as the host/IP that Flink
binds to. The same host sho
Hello all.
Firstly- thank you for your valuable advices.
We did some very fine-tuned pinpoint tests and came to the following conclusions:
1. We run Flink for Hadoop 2.7 on Ubuntu 14.
2. Once we copied our Java client program directly to the machine and ran it
directly there, it worked very well.
The program
+1
- Built against Hadoop 2.7 / Scala 2.10
- Ran manual examples in local-cluster and fake-cluster (2 task managers)
mode:
- Ran examples with built-in and external data using
https://github.com/aljoscha/FliRTT
- Logs and .out are clean
On Sun, 30 Aug 2015 at 14:20 Ufuk Celebi wrote:
> +1 (bin
The output of the YARN session should look like this:
Flink JobManager is now running on quickstart.cloudera:39956
JobManager Web Interface:
http://quickstart.cloudera:8088/proxy/application_1440768826963_0005/
Number of connected TaskManagers changed to 1. Slots available: 1
On Sun, Aug 30, 2
+1 (binding)
- Checked checksums, GPG
- Release does not contain any binaries
- Building properly (custom Hadoop versions as well)
- POMs point to same parent version
- Read README.md
- Local start/stop scripts
- Ran example jobs
- Quickstart dependencies
On Sun, Aug 30, 2015 at 12:05 PM, Stepha
+1
Performed the following checks:
- Checked LICENSE / NOTICE files
- Checked the README file
- Built against Hadoop 2.6.0
- Built against Scala 2.11
- Executed all tests
- Builds in IntelliJ
- Manual tests all work
On Fri, Aug 28, 2015 at 1:27 PM, Ufuk Celebi wrote:
> Dear community,
>
The only thing I can think of is that you are not using the right host/port
for the JobManager.
When you start the YARN session, it should print the host where the
JobManager runs. You also need to take the port from there, as in YARN, the
port is usually not 6123. Yarn starts many services on one
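A minimal sketch of what the client side might look like, assuming the
host/port printed by the YARN session in the example output earlier in the
thread; the jar path is a placeholder:

import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;

public class RemoteClient {
    public static void main(String[] args) throws Exception {
        // Host and port must match the JobManager address printed by the
        // YARN session, not the default 6123.
        ExecutionEnvironment env = ExecutionEnvironment.createRemoteEnvironment(
                "quickstart.cloudera", 39956, "/path/to/program.jar");

        DataSet<String> data = env.fromElements("a", "b", "c");
        data.print();
    }
}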
Hello.
Let me clarify the situation.
1. We are using Flink 0.9.0 for Hadoop 2.7. We connected it to HDFS 2.7.1.
2. Locally, our program is working: once we run Flink as ./start-local.sh,
we are able to connect and run the createRemoteEnvironment and execute
methods.
3. Due to our architecture and ba