[jira] [Resolved] (HDFS-6280) Provide option to

2014-04-24 Thread Aaron T. Myers (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-6280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Aaron T. Myers resolved HDFS-6280.
--

Resolution: Invalid

Accidentally hit "create" too soon. :)

> Provide option to 
> --
>
> Key: HDFS-6280
> URL: https://issues.apache.org/jira/browse/HDFS-6280
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Aaron T. Myers
>






[jira] [Created] (HDFS-6280) Provide option to

2014-04-24 Thread Aaron T. Myers (JIRA)
Aaron T. Myers created HDFS-6280:


 Summary: Provide option to 
 Key: HDFS-6280
 URL: https://issues.apache.org/jira/browse/HDFS-6280
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Aaron T. Myers








[jira] [Created] (HDFS-6281) Provide option to use the NFS Gateway without having to use the Hadoop portmapper

2014-04-24 Thread Aaron T. Myers (JIRA)
Aaron T. Myers created HDFS-6281:


 Summary: Provide option to use the NFS Gateway without having to 
use the Hadoop portmapper
 Key: HDFS-6281
 URL: https://issues.apache.org/jira/browse/HDFS-6281
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: nfs
Affects Versions: 2.4.0
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers


In order to use the NFS Gateway on operating systems with the rpcbind 
privileged registration bug, we currently require users to shut down and 
discontinue use of the system-provided portmap daemon, and instead use the 
portmap daemon provided by Hadoop. Alternatively, we can work around this bug 
by tweaking the NFS Gateway to perform its port registration from a privileged 
port, which lets users keep using the system portmap daemon.
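Not part of this issue's patch, but as a rough illustration of the workaround being proposed: a minimal Java sketch of binding the socket used for the portmap registration to a privileged local port (below 1024). It assumes the registration is sent over UDP and that the gateway process has the privilege needed to bind such ports; the class and method names are purely illustrative.

{code:java}
// Illustrative only: bind the socket used for the rpcbind/portmap registration
// to a privileged local port so an rpcbind daemon affected by the privileged
// registration bug will accept the request.
import java.net.DatagramSocket;
import java.net.InetSocketAddress;
import java.net.SocketException;

public class PrivilegedRegistrationSocket {

  /** Try privileged local ports until one can be bound; requires root or CAP_NET_BIND_SERVICE. */
  public static DatagramSocket bindPrivileged() throws SocketException {
    for (int port = 1023; port >= 512; port--) {
      try {
        return new DatagramSocket(new InetSocketAddress("0.0.0.0", port));
      } catch (SocketException e) {
        // Port in use or not permitted; try the next one down.
      }
    }
    throw new SocketException("No privileged port available for portmap registration");
  }

  public static void main(String[] args) throws Exception {
    DatagramSocket s = bindPrivileged();
    System.out.println("Registration socket bound to local port " + s.getLocalPort());
    // The registration request to the system rpcbind (port 111) would be sent from this socket.
    s.close();
  }
}
{code}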





[jira] [Created] (HDFS-6282) re-add testIncludeByRegistrationName

2014-04-24 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HDFS-6282:
--

 Summary: re-add testIncludeByRegistrationName
 Key: HDFS-6282
 URL: https://issues.apache.org/jira/browse/HDFS-6282
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: test
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HDFS-6282.001.patch

Re-add a test of using DataNode registration names in an HDFS host include file.





[jira] [Resolved] (HDFS-6254) hdfsConnect segment fault where namenode not connected

2014-04-24 Thread Chris Nauroth (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-6254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chris Nauroth resolved HDFS-6254.
-

Resolution: Not a Problem

> hdfsConnect segment fault where namenode not connected
> --
>
> Key: HDFS-6254
> URL: https://issues.apache.org/jira/browse/HDFS-6254
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: libhdfs
>Affects Versions: 2.2.0
> Environment: Linux Centos 64bit
>Reporter: huang ken
>Assignee: Chris Nauroth
>
> When the namenode is not started, the libhdfs client causes a segmentation 
> fault while connecting.





[jira] [Created] (HDFS-6283) Write end user documentation for xattrs.

2014-04-24 Thread Chris Nauroth (JIRA)
Chris Nauroth created HDFS-6283:
---

 Summary: Write end user documentation for xattrs.
 Key: HDFS-6283
 URL: https://issues.apache.org/jira/browse/HDFS-6283
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: documentation
Reporter: Chris Nauroth


Update the File System Shell documentation to cover the new getfattr and 
setfattr commands.  If warranted, consider adding a separate dedicated page 
with a fuller discussion of the xattrs model and how the feature works.
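As a hedged illustration of what the documented commands map to (not taken from the issue itself): the shell's getfattr/setfattr are expected to wrap the FileSystem-level xattr calls, roughly along the lines below. The path and attribute name are purely illustrative, and the API signatures are those being added on the xattrs feature branch.

{code:java}
// Illustrative sketch: Java-side equivalents of the new shell commands.
import java.nio.charset.StandardCharsets;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class XAttrExample {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path file = new Path("/tmp/example.txt");  // illustrative path

    // Roughly: hadoop fs -setfattr -n user.owner -v alice /tmp/example.txt
    fs.setXAttr(file, "user.owner", "alice".getBytes(StandardCharsets.UTF_8));

    // Roughly: hadoop fs -getfattr -n user.owner /tmp/example.txt
    byte[] value = fs.getXAttr(file, "user.owner");
    System.out.println("user.owner = " + new String(value, StandardCharsets.UTF_8));
  }
}
{code}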





[jira] [Created] (HDFS-6284) don't let checkDiskError() run concurrently

2014-04-24 Thread Liang Xie (JIRA)
Liang Xie created HDFS-6284:
---

 Summary: don't let checkDiskError() run concurrently
 Key: HDFS-6284
 URL: https://issues.apache.org/jira/browse/HDFS-6284
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Affects Versions: 2.4.0, 3.0.0
Reporter: Liang Xie
Assignee: Liang Xie


In the current codebase there are many datanode.checkDiskError() call sites, so 
it is quite possible for several threads to run checkDiskError() concurrently. 
IMHO it would be more reasonable to avoid that concurrent execution:
1. FsDatasetImpl.checkDataDir() is synchronized, so concurrent checks end up 
running one by one internally anyway.
2. checkDir() can take a long time once it hits a sick disk; if more than one 
thread has to run the disk check in series, the cost to upper-level callers 
such as BlockReceiver becomes unacceptable.

Patch will be attached later, any comments are welcome!
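As a rough illustration of the "only one check at a time" behaviour described above (this is not the attached patch, and the names below are illustrative), one common approach is a compare-and-set guard so that callers arriving while a check is already in flight simply return instead of queueing up:

{code:java}
import java.util.concurrent.atomic.AtomicBoolean;

public class DiskCheckGuard {
  private final AtomicBoolean checkInProgress = new AtomicBoolean(false);

  public void checkDiskError() {
    // Only the first caller wins the flag; everyone else skips the expensive check.
    if (!checkInProgress.compareAndSet(false, true)) {
      return;
    }
    try {
      doCheckDirs();
    } finally {
      checkInProgress.set(false);
    }
  }

  /** Placeholder for the real, potentially slow, per-volume checkDir() pass. */
  private void doCheckDirs() {
  }
}
{code}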





[jira] [Created] (HDFS-6285) tidy an error log inside BlockReceiver

2014-04-24 Thread Liang Xie (JIRA)
Liang Xie created HDFS-6285:
---

 Summary: tidy an error log inside BlockReceiver
 Key: HDFS-6285
 URL: https://issues.apache.org/jira/browse/HDFS-6285
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Affects Versions: 2.4.0, 3.0.0
Reporter: Liang Xie
Assignee: Liang Xie
Priority: Minor
 Attachments: HDFS-6285.txt

From this log line on our production cluster:
2014-04-22,10:39:05,476 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
IOException in BlockReceiver constructor. Cause is 

After reading the code, I found that the cause was null, which means there was 
no disk error, but the log message above looks fragmentary. Attached is a minor 
change to tidy it up.
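For illustration only (not necessarily what the attached change does): a tidier message would only mention a cause when there actually is one, for example:

{code:java}
import java.io.IOException;

public class TidyWarningExample {
  static String buildWarning(IOException cause) {
    String base = "IOException in BlockReceiver constructor";
    return cause == null
        ? base + "; no cause set (likely not a disk error)."
        : base + ". Cause: " + cause;
  }

  public static void main(String[] args) {
    System.out.println(buildWarning(null));                        // the fragmentary case
    System.out.println(buildWarning(new IOException("bad disk"))); // with a real cause
  }
}
{code}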


