Do you also set fs.hdfs.hadoopconf in flink-conf.yaml
(https://ci.apache.org/projects/flink/flink-docs-master/setup/config.html#common-options)?
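For reference, a minimal sketch of the relevant flink-conf.yaml entries. The paths and the checkpoint directory are placeholders, and the key names are the Flink 1.x ones from the docs linked above:

```yaml
# Directory on the Flink machines that contains core-site.xml / hdfs-site.xml
fs.hdfs.hadoopconf: /path/to/hadoop/conf

# HDFS-backed state backend (placeholder namenode address and path)
state.backend: filesystem
state.backend.fs.checkpointdir: hdfs://namenode:8020/flink/checkpoints
```

With fs.hdfs.hadoopconf set, Flink picks up fs.defaultFS and the other Hadoop client settings from that directory instead of needing them on the classpath.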

On Thu, Aug 11, 2016 at 2:47 PM, Dong-iL, Kim <kim.s...@gmail.com> wrote:
> Hi.
> In this case I am using a standalone cluster (AWS EC2) and I want to connect to
> a remote HDFS machine (AWS EMR).
> I registered the location of core-site.xml as below.
> Does it need any other properties?
>
> <configuration>
>     <property>
>         <name>fs.defaultFS</name>
>         <value>hdfs://…:8020</value>
>     </property>
>     <property>
>         <name>hadoop.security.authentication</name>
>         <value>simple</value>
>     </property>
>     <property>
>         <name>hadoop.security.key.provider.path</name>
>         <value>kms://....:9700/kms</value>
>     </property>
>     <property>
>         <name>hadoop.job.ugi</name>
>         <value>hadoop</value>
>     </property>
> </configuration>
>
> Thanks.
>
> On Aug 11, 2016, at 9:31 PM, Stephan Ewen <se...@apache.org> wrote:
>
> Hi!
>
> Do you register the Hadoop Config at the Flink Configuration?
> Also, do you use Flink standalone or on Yarn?
>
> Stephan
>
> On Tue, Aug 9, 2016 at 11:00 AM, Dong-iL, Kim <kim.s...@gmail.com> wrote:
>>
>> Hi.
>> I’m trying to use an external HDFS as the state backend.
>> My OS user name is ec2-user; the HDFS user is hadoop.
>> I get a permission denied exception, so I want to specify the HDFS user
>> name.
>> I set hadoop.job.ugi in core-site.xml and HADOOP_USER_NAME on the command
>> line, but neither works.
>> What shall I do?
>> Thanks.
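For what it's worth, a minimal sketch of the HADOOP_USER_NAME approach. This assumes simple (non-Kerberos) authentication, and the important part is that the variable must be visible to the JVMs that actually touch HDFS (the JobManager/TaskManager processes), not just the shell that submits the job:

```shell
# With simple auth, the Hadoop client reports whatever HADOOP_USER_NAME
# contains as the acting user. Export it in the environment that launches
# the Flink daemons, e.g. before running start-cluster.sh.
export HADOOP_USER_NAME=hadoop
echo "effective HDFS user: $HADOOP_USER_NAME"
```

An alternative, if changing the acting user is not an option, is to fix permissions on the HDFS side instead, e.g. `hdfs dfs -chown -R ec2-user /flink` (path hypothetical) on the EMR master.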
