Great to hear you sorted things out. Looking forward to the pull request!
On Mon, Nov 9, 2015 at 4:50 PM, Stephan Ewen wrote:
Super nice to hear :-)
On Mon, Nov 9, 2015 at 4:48 PM, Niels Basjes wrote:
Apparently I just had to wait a bit longer for the first run.
Now I'm able to package the project in about 7 minutes.
Current status: I am now able to access HBase from within Flink on a
Kerberos secured cluster.
Cleaning up the patch so I can submit it in a few days.
On Sat, Nov 7, 2015 at 10:01
The single shading step on my machine (SSD, 10 GB RAM) takes about 45
seconds. HDD may be significantly longer, but should really not be more
than 10 minutes.
Is your maven build always stuck in that stage (flink-dist), showing a long
list of dependencies (saying "including org.x.y", "including com.fo…")?
Usually, if all the dependencies are being downloaded, i.e., on the first
build, it'll likely take 30-40 minutes. Subsequent builds might take 10
minutes approx. [I have the same PC configuration.]
-- Sachin Goel
Computer Science, IIT Delhi
m. +91-9871457685
On Sun, Nov 8, 2015 at 2:05 AM, Niels Basjes wrote:
How long should this take if you have HDD and about 8GB of RAM?
Is that 10 minutes? 20?
Niels
On Sat, Nov 7, 2015 at 2:51 PM, Stephan Ewen wrote:
Hi Niels!
Usually, you simply build the binaries by invoking "mvn -DskipTests clean
package" in the root flink directory. The resulting program should be in
the "build-target" directory.
If the program gets stuck, let us know where and what the last message on
the command line is.
Please be aware…
Hi,
Excellent.
What you can help me with are the commands to build the binary distribution
from source.
I tried it last Thursday and the build seemed to get stuck at some point
(at the end of/just after building the dist module).
I haven't been able to figure out why yet.
Niels
On 5 Nov 2015 14:5
Thank you for looking into the problem, Niels. Let us know if you need
anything. We would be happy to merge a pull request once you have verified
the fix.
On Thu, Nov 5, 2015 at 1:38 PM, Niels Basjes wrote:
I created https://issues.apache.org/jira/browse/FLINK-2977
On Thu, Nov 5, 2015 at 12:25 PM, Robert Metzger wrote:
Hi Niels,
thank you for analyzing the issue so properly. I agree with you. It seems
that HDFS and HBase are using their own tokens which we need to transfer
from the client to the YARN containers. We should be able to port the fix
from Spark (which they got from Storm) into our YARN client.
I think…
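As a rough sketch of the idea described above (Python for brevity; every name here is hypothetical, and the real fix would use Hadoop's Credentials machinery inside the YARN client): the client obtains delegation tokens while it still holds a valid Kerberos ticket, ships them inside the container launch context, and each container restores them on startup before contacting HDFS or HBase.

```python
import pickle

def obtain_tokens(services):
    """Client side: pretend each secured service hands out a delegation token.

    In the real implementation this would call into HDFS/HBase while the
    client is Kerberos-authenticated."""
    return {name: f"token-for-{name}" for name in services}

def build_launch_context(tokens):
    """Serialize the tokens so they travel with the YARN container launch."""
    return {"credentials": pickle.dumps(tokens)}

def container_startup(launch_context):
    """Container side: restore the tokens before talking to the services."""
    return pickle.loads(launch_context["credentials"])

tokens = obtain_tokens(["HDFS", "HBase"])
ctx = build_launch_context(tokens)
restored = container_startup(ctx)
assert restored == tokens  # the containers see exactly what the client obtained
```

This only illustrates the data flow; the actual Spark/Storm fix additionally has to deal with token renewal and with placing the credentials where the Hadoop libraries expect them.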
Update on the status so far: I suspect I have found a problem in a secure
setup.
I have created a very simple Flink topology consisting of a streaming
source (that outputs the timestamp a few times per second) and a sink (that
puts that timestamp into a single record in HBase).
Running this on a non-
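For illustration only, the shape of that test topology can be mimicked in plain Python (no Flink or HBase APIs involved; `timestamp_source` and `single_record_sink` are made-up names, and a dict stands in for the HBase table):

```python
import time

def timestamp_source(count, per_second=4):
    """Emit the current timestamp `count` times, a few times per second."""
    for _ in range(count):
        yield time.time()
        time.sleep(1.0 / per_second)

def single_record_sink(stream, table, row_key=b"heartbeat"):
    """Overwrite one 'row' with each incoming timestamp.

    In the real job this would be an HBase Put against a fixed row key."""
    for ts in stream:
        table[row_key] = ts

fake_hbase = {}
single_record_sink(timestamp_source(count=8), fake_hbase)
print(len(fake_hbase))  # 1 -- every timestamp lands in the same single record
```

The point of such a minimal job is that any failure is almost certainly in the security/connection setup rather than in the topology itself.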
Hi Niels,
You're welcome. Some more information on how this would be configured:
In the kdc.conf, there are two variables:
max_life = 2h 0m 0s
max_renewable_life = 7d 0h 0m 0s
max_life is the maximum life of the current ticket. However, it may be
renewed up to a time span of max_renewable_life.
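To make the interaction between the two settings concrete, here is a small Python sketch (the function name is mine, not part of any Kerberos API) that counts how often a ticket with the kdc.conf values above must be renewed before the renewable window runs out:

```python
from datetime import timedelta

def renewal_schedule(max_life, max_renewable_life):
    """How many renewals keep a ticket valid for the whole renewable window,
    renewing just before each expiry. Each renewal restarts the max_life
    window, but total lifetime cannot exceed max_renewable_life."""
    renewals = 0
    elapsed = max_life
    while elapsed < max_renewable_life:
        renewals += 1
        elapsed = min(elapsed + max_life, max_renewable_life)
    return renewals

# With the values from kdc.conf above (max_life = 2h, max_renewable_life = 7d):
n = renewal_schedule(timedelta(hours=2), timedelta(days=7))
print(n)  # 83 -- a renewal roughly every two hours for seven days
```

So a long-running job needs a renewal roughly every two hours, and after seven days no further renewal is possible at all, which is exactly why ticket expiry matters for a streaming job that is never supposed to stop.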
Hi,
Thanks for your feedback.
So I guess I'll have to talk to the security guys about having special
kerberos ticket expiry times for these types of jobs.
Niels Basjes
On Fri, Oct 23, 2015 at 11:45 AM, Maximilian Michels wrote:
Hi Niels,
Thank you for your question. Flink relies entirely on the Kerberos
support of Hadoop. So your question could also be rephrased to "Does
Hadoop support long-term authentication using Kerberos?". And the
answer is: Yes!
While Hadoop uses Kerberos tickets to authenticate users with services…
Hi,
I want to write a long running (i.e. never stop it) streaming flink
application on a kerberos secured Hadoop/Yarn cluster. My application needs
to do things with files on HDFS and HBase tables on that cluster so having
the correct Kerberos tickets is very important. The stream is to be
ingested…