That sounds good to me. Since your job keeps running past the original
expiration time, it implies that Krb5LoginModule is using your newly
generated cache. It's my pleasure to help you.
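
For reference, the periodic refresh could look like this on each node
(the keytab path, principal, and schedule are placeholders, not from
your setup):

export KRB5CCNAME=/home/was/Jaas/krb5cc
kinit -kt /path/to/user.keytab user@EXAMPLE.COM   # fresh TGT with a new renew-until
klist -c "$KRB5CCNAME"                            # verify expiry and renew-until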

Best,
Yangze Guo

On Mon, Jun 1, 2020 at 10:47 PM Nick Bendtner <buggi...@gmail.com> wrote:
>
> Hi Guo,
> The auto renewal happens fine, however I want to generate a new ticket with a 
> new renew-until period so that the job can run longer than 7 days. I am 
> referring to the second paragraph of your email; I have set a custom cache by 
> setting KRB5CCNAME. Just want to make sure that Krb5LoginModule does a 
> re-login like you said. I think it does, because I generated a new ticket while 
> the Flink job was running and the job continues to auto-renew the new ticket. 
> Let me know if you can think of any pitfalls. Once again, I really want to 
> thank you for your help and your time.
>
> Best,
> Nick.
>
> On Mon, Jun 1, 2020 at 12:29 AM Yangze Guo <karma...@gmail.com> wrote:
>>
>> Hi, Nick.
>>
>> Do you mean that you manually execute "kinit -R" to renew the ticket cache?
>> If that is the case, Flink already sets "renewTGT" to true, so if
>> everything is OK, you do not need to do it yourself. However, this
>> mechanism has a known bug that is not fixed in all JDK
>> versions. Please refer to [1].
>>
>> If you mean that you generate a new ticket cache in the same place (by
>> default /tmp/krb5cc_uid), I'm not sure whether Krb5LoginModule will
>> re-login with your new ticket cache. I'll try to do a deeper investigation.
>>
>> [1] https://bugs.openjdk.java.net/browse/JDK-8058290
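>>
>> For reference, the ticket-cache entry that Flink generates is roughly
>> equivalent to a JAAS configuration like this (a sketch, not the exact
>> generated entry; the context name is a placeholder):
>>
>> SampleClient {
>>   com.sun.security.auth.module.Krb5LoginModule required
>>   useTicketCache=true
>>   renewTGT=true
>>   doNotPrompt=true;
>> };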
>>
>> Best,
>> Yangze Guo
>>
>> On Sat, May 30, 2020 at 3:07 AM Nick Bendtner <buggi...@gmail.com> wrote:
>> >
>> > Hi Guo,
>> > Thanks again for your inputs. If I periodically renew the Kerberos cache 
>> > using an external process (kinit) on all Flink nodes in standalone mode, 
>> > will the cluster still be short-lived, or will the new ticket in the cache 
>> > be used so that the cluster can live until the new expiry?
>> >
>> > Best,
>> > Nick.
>> >
>> > On Sun, May 24, 2020 at 9:15 PM Yangze Guo <karma...@gmail.com> wrote:
>> >>
>> >> Yes, you can use kinit. But AFAIK, if you deploy Flink on Kubernetes
>> >> or Mesos, Flink will not ship the ticket cache. If you deploy Flink on
>> >> Yarn, Flink will acquire delegation tokens with your ticket cache and
>> >> set tokens for the job manager and task executors. As the documentation
>> >> says, the main drawback is that the cluster is necessarily short-lived,
>> >> since the generated delegation tokens will expire (typically within a week).
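>> >>
>> >> If the cluster must outlive the delegation tokens, configuring a keytab
>> >> instead of a ticket cache avoids that limit. A sketch (the path and
>> >> principal are placeholders):
>> >>
>> >> # flink-conf.yaml
>> >> security.kerberos.login.keytab: /path/to/user.keytab
>> >> security.kerberos.login.principal: user@EXAMPLE.COM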
>> >>
>> >> Best,
>> >> Yangze Guo
>> >>
>> >> On Sat, May 23, 2020 at 1:23 AM Nick Bendtner <buggi...@gmail.com> wrote:
>> >> >
>> >> > Hi Guo,
>> >> > Even for HDFS I don't really need to set 
>> >> > "security.kerberos.login.contexts". As long as the right ticket is in 
>> >> > the ticket cache before starting the Flink cluster, it seems to work 
>> >> > fine. I think even [4] from your references does the same thing. I have 
>> >> > defined my own ticket cache specifically for the Flink cluster by 
>> >> > setting this environment variable, and before starting the cluster I 
>> >> > create a ticket using kinit.
>> >> > This is how I make Flink read this cache:
>> >> > export KRB5CCNAME=/home/was/Jaas/krb5cc. I think Flink also tries to 
>> >> > find the location of the ticket cache using this variable [1].
>> >> > Do you see any problems in setting up the Hadoop security module this way? 
>> >> > And thanks a lot for your help.
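>> >> >
>> >> > For completeness, the full sequence before starting the cluster looks 
>> >> > like this (the keytab path and principal are placeholders):
>> >> >
>> >> > export KRB5CCNAME=/home/was/Jaas/krb5cc
>> >> > kinit -kt /path/to/user.keytab user@EXAMPLE.COM   # or an interactive kinit
>> >> > klist                      # confirm the ticket landed in the custom cache
>> >> > ./bin/start-cluster.sh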
>> >> >
>> >> > [1] 
>> >> > https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/security/KerberosUtils.java
>> >> >
>> >> > Best,
>> >> > Nick
>> >> >
>> >> >
>> >> >
>> >> > On Thu, May 21, 2020 at 9:54 PM Yangze Guo <karma...@gmail.com> wrote:
>> >> >>
>> >> >> Hi, Nick,
>> >> >>
>> >> >> From my understanding, if you configure the
>> >> >> "security.kerberos.login.keytab", Flink will add the
>> >> >> AppConfigurationEntry of this keytab to all the apps defined in
>> >> >> "security.kerberos.login.contexts". If you define
>> >> >> "java.security.auth.login.config" at the same time, Flink will also
>> >> >> keep the configuration in it. For more details, see [1][2].
>> >> >>
>> >> >> If you want to use this keytab to interact with HDFS, HBase and Yarn,
>> >> >> you need to set "security.kerberos.login.contexts". See [3][4].
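>> >> >>
>> >> >> For example (the context names depend on which components you use;
>> >> >> the keytab path and principal are placeholders):
>> >> >>
>> >> >> # flink-conf.yaml
>> >> >> security.kerberos.login.keytab: /path/to/user.keytab
>> >> >> security.kerberos.login.principal: user@EXAMPLE.COM
>> >> >> security.kerberos.login.contexts: Client,KafkaClient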
>> >> >>
>> >> >> [1] 
>> >> >> https://ci.apache.org/projects/flink/flink-docs-master/ops/security-kerberos.html#jaas-security-module
>> >> >> [2] 
>> >> >> https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/security/modules/JaasModule.java
>> >> >> [3] 
>> >> >> https://ci.apache.org/projects/flink/flink-docs-master/ops/security-kerberos.html#hadoop-security-module
>> >> >> [4] 
>> >> >> https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/security/modules/HadoopModule.java
>> >> >>
>> >> >> Best,
>> >> >> Yangze Guo
>> >> >>
>> >> >> On Thu, May 21, 2020 at 11:06 PM Nick Bendtner <buggi...@gmail.com> 
>> >> >> wrote:
>> >> >> >
>> >> >> > Hi guys,
>> >> >> > Is there any difference in providing the Kerberos config to the Flink 
>> >> >> > JVM using this method in the Flink configuration?
>> >> >> >
>> >> >> > env.java.opts:  -Dconfig.resource=qa.conf 
>> >> >> > -Djava.library.path=/usr/mware/flink-1.7.2/simpleapi/lib/ 
>> >> >> > -Djava.security.auth.login.config=/usr/mware/flink-1.7.2/Jaas/kafka-jaas.conf
>> >> >> >  -Djava.security.krb5.conf=/usr/mware/flink-1.7.2/Jaas/krb5.conf
>> >> >> >
>> >> >> > Is there any difference in doing it this way vs. providing it via 
>> >> >> > security.kerberos.login.keytab?
>> >> >> >
>> >> >> > Best,
>> >> >> >
>> >> >> > Nick.
