[jira] [Resolved] (HDFS-2764) HA: TestBackupNode is failing

2012-01-09 Thread Aaron T. Myers (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers resolved HDFS-2764.
--

Resolution: Invalid

Curiously, TestBackupNode fails reliably on my box, on both the HA branch and 
on trunk. Given that this doesn't fail for others, it must be some local 
environment issue. Resolving this issue as invalid.

> HA: TestBackupNode is failing
> -
>
>              Key: HDFS-2764
>              URL: https://issues.apache.org/jira/browse/HDFS-2764
>          Project: Hadoop HDFS
>       Issue Type: Sub-task
>       Components: name-node
> Affects Versions: 0.24.0
>         Reporter: Aaron T. Myers
>         Assignee: Aaron T. Myers
>
> Looks like it has been failing for a few days.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




Jenkins build is back to stable : Hadoop-Hdfs-trunk #920

2012-01-09 Thread Apache Jenkins Server
See 




Jenkins build is back to stable : Hadoop-Hdfs-0.23-Build #133

2012-01-09 Thread Apache Jenkins Server
See 




Re: Merging some trunk changes to 23

2012-01-09 Thread Eli Collins
On Sat, Jan 7, 2012 at 10:41 AM, Arun C Murthy  wrote:
> (Sorry, I missed hdfs-dev@ and my mail filters didn't help either).
>
> Sounds good.
>
> As per the email I sent out to general@ a couple of days ago, 0.23.1 is very 
> close right now (there are a handful of perf bugs) after which we should be 
> good for the merge.
>

Thanks.  I will hold off on any non-bug-fix merges to branch-23 until 23.1
is released.  Btw, I see there's a branch for merging the trunk RPC
changes (branch-0.23-PB).

Thanks,
Eli


[jira] [Created] (HDFS-2772) HA: On transition to active, standby should not swallow ELIE

2012-01-09 Thread Aaron T. Myers (Created) (JIRA)
HA: On transition to active, standby should not swallow ELIE


             Key: HDFS-2772
             URL: https://issues.apache.org/jira/browse/HDFS-2772
         Project: Hadoop HDFS
      Issue Type: Sub-task
      Components: ha, name-node
Affects Versions: HA branch (HDFS-1623)
        Reporter: Aaron T. Myers
        Assignee: Aaron T. Myers


EditLogTailer#doTailEdits currently catches, logs, and swallows 
EditLogInputException. This is fine while the standby is sitting idle in the 
background, tailing logs. However, when the standby is transitioning to active, 
swallowing this exception is incorrect, since it could cause the standby to 
silently fail to load all the edits before becoming active.
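
A minimal sketch of the pattern described above -- not the actual EditLogTailer
code; the transitioningToActive flag and all class/method names other than
doTailEdits and EditLogInputException are illustrative stand-ins:

import java.io.IOException;

class EditLogTailerSketch {
  // Stand-in for the real EditLogInputException thrown while reading edits.
  static class EditLogInputExceptionSketch extends IOException {}

  private volatile boolean transitioningToActive; // hypothetical flag

  void doTailEdits() throws IOException {
    try {
      tailEditsOnce();
    } catch (EditLogInputExceptionSketch elie) {
      if (transitioningToActive) {
        // During the transition to active, failing to load edits must be
        // fatal; otherwise the new active NN could silently miss edits.
        throw elie;
      }
      // While idly tailing in the background, log and retry on the next cycle.
      System.err.println("Error tailing edits, will retry: " + elie);
    }
  }

  // Placeholder for reading and applying newly finalized edit log segments.
  void tailEditsOnce() throws IOException {}
}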





[jira] [Created] (HDFS-2773) HA: reading edit logs from an earlier version leaves blocks in under-construction state

2012-01-09 Thread Todd Lipcon (Created) (JIRA)
HA: reading edit logs from an earlier version leaves blocks in 
under-construction state
---

             Key: HDFS-2773
             URL: https://issues.apache.org/jira/browse/HDFS-2773
         Project: Hadoop HDFS
      Issue Type: Sub-task
      Components: ha, name-node
Affects Versions: HA branch (HDFS-1623)
        Reporter: Todd Lipcon
        Assignee: Todd Lipcon
        Priority: Blocker


In HDFS-2602, the code for applying OP_ADD and OP_CLOSE was changed a bit, and 
the new code has the following problem: if an OP_CLOSE includes new blocks 
(i.e., blocks not previously seen in an OP_ADD), then those blocks will remain 
in the "under construction" state rather than being marked "complete". This is 
because {{updateBlocks}} always creates {{BlockInfoUnderConstruction}} 
regardless of the opcode. This bug only affects the upgrade path, since in 
trunk we always persist blocks with OP_ADDs before we write the OP_CLOSE.
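
A rough sketch of the distinction described above, using stand-in classes
rather than the real HDFS BlockInfo/BlockInfoUnderConstruction types; the
method name newBlockForOpcode is purely illustrative:

// Stand-ins for the real block metadata classes.
class BlockInfoSketch {
  final long blockId;
  BlockInfoSketch(long blockId) { this.blockId = blockId; }
}

class BlockInfoUnderConstructionSketch extends BlockInfoSketch {
  BlockInfoUnderConstructionSketch(long blockId) { super(blockId); }
}

class UpdateBlocksSketch {
  // A block first seen in an OP_CLOSE belongs to a finalized file and should
  // be created in the "complete" state; only blocks added by an OP_ADD may
  // legitimately stay under construction. Always choosing the
  // under-construction type, as the report describes, leaves OP_CLOSE blocks
  // stuck in that state.
  BlockInfoSketch newBlockForOpcode(long blockId, boolean isOpClose) {
    if (isOpClose) {
      return new BlockInfoSketch(blockId);                // complete
    }
    return new BlockInfoUnderConstructionSketch(blockId); // under construction
  }
}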





[jira] [Resolved] (HDFS-1910) when dfs.name.dir and dfs.name.edits.dir are same fsimage will be saved twice every time

2012-01-09 Thread Konstantin Shvachko (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko resolved HDFS-1910.
---

      Resolution: Fixed
   Fix Version/s: 1.1.0
Target Version/s: 1.1.0, 0.22.1  (was: 0.22.1, 1.1.0)

I just committed this to branch 1.

> when dfs.name.dir and dfs.name.edits.dir are same fsimage will be saved twice 
> every time
> 
>
>              Key: HDFS-1910
>              URL: https://issues.apache.org/jira/browse/HDFS-1910
>          Project: Hadoop HDFS
>       Issue Type: Bug
>       Components: name-node
> Affects Versions: 0.21.0
>         Reporter: Gokul
>         Priority: Minor
>           Labels: critical-0.22.0
>          Fix For: 1.1.0, 0.22.1
>
>      Attachments: saveImageOnce-v0.22.patch, saveImageOnce-v1.1.patch
>
>
> When the image and edits directories are configured to the same path, the 
> fsimage is flushed from memory to disk twice every time saveNamespace runs. 
> This may impact the performance of the BackupNode/SecondaryNameNode, which 
> performs a saveNamespace at every checkpoint.
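
For illustration, a minimal sketch of the idea behind the fix: treat the union
of the configured image and edits directories as a set, so a directory listed
under both dfs.name.dir and dfs.name.edits.dir is saved only once. The class
and method names below are hypothetical, not the actual FSImage code:

import java.io.File;
import java.util.LinkedHashSet;
import java.util.Set;

class SaveNamespaceSketch {
  // Collapses comma-separated dfs.name.dir and dfs.name.edits.dir values
  // into a set of unique directories, so duplicates are written only once.
  static Set<File> uniqueStorageDirs(String nameDirs, String editsDirs) {
    Set<File> dirs = new LinkedHashSet<File>();
    for (String d : (nameDirs + "," + editsDirs).split(",")) {
      dirs.add(new File(d.trim()));
    }
    return dirs;
  }

  public static void main(String[] args) {
    // Both properties pointing at the same path -> a single save.
    System.out.println(uniqueStorageDirs("/data/dfs/name", "/data/dfs/name"));
  }
}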





[jira] [Resolved] (HDFS-2762) TestCheckpoint is timing out

2012-01-09 Thread Todd Lipcon (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon resolved HDFS-2762.
---

   Resolution: Fixed
Fix Version/s: HA branch (HDFS-1623)
 Hadoop Flags: Reviewed

Indeed, it looks like you're right. Committed to the branch, thanks Uma.

> TestCheckpoint is timing out
> 
>
>              Key: HDFS-2762
>              URL: https://issues.apache.org/jira/browse/HDFS-2762
>          Project: Hadoop HDFS
>       Issue Type: Sub-task
>       Components: ha, name-node
> Affects Versions: HA branch (HDFS-1623)
>         Reporter: Aaron T. Myers
>         Assignee: Uma Maheswara Rao G
>          Fix For: HA branch (HDFS-1623)
>
>      Attachments: HDFS-2762.patch
>
>
> TestCheckpoint is timing out on the HA branch, and has been for a few days.





[jira] [Created] (HDFS-2774) Use TestDFSIO to test HDFS, and Failed with the exception: All datanodes are bad. Aborting...

2012-01-09 Thread bdsyq (Created) (JIRA)
Use TestDFSIO to test HDFS, and Failed with the exception: All datanodes are 
bad. Aborting...
-

             Key: HDFS-2774
             URL: https://issues.apache.org/jira/browse/HDFS-2774
         Project: Hadoop HDFS
      Issue Type: Test
      Components: test
Affects Versions: 0.20.2
     Environment: 20 nodes, each with a 2-core CPU, 1 GB RAM, and a 20 GB hard
                  disk; 1 switch
        Reporter: bdsyq
         Fix For: 0.20.2


I use TestDFSIO to test HDFS with the command:
hadoop jar TestDFSIO -write -nrFiles 10 -fileSize 500
While it runs, these errors occur:
12/01/09 16:00:45 INFO mapred.JobClient: Task Id : 
attempt_201201091556_0001_m_06_2, Status : FAILED
java.io.IOException: All datanodes 192.168.0.17:50010 are bad. Aborting...
 at 
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2556)
 at 
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2102)
 at 
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2265)
attempt_201201091637_0002_m_05_0: log4j:WARN No appenders could be found 
for logger (org.apache.hadoop.hdfs.DFSClient).
attempt_201201091637_0002_m_05_0: log4j:WARN Please initialize the log4j 
system properly.

I don't know why this happens.





[jira] [Resolved] (HDFS-2724) NN web UI can throw NPE after startup, before standby state is entered

2012-01-09 Thread Todd Lipcon (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon resolved HDFS-2724.
---

   Resolution: Fixed
Fix Version/s: HA branch (HDFS-1623)
 Hadoop Flags: Reviewed

Committed to branch, thanks Eli.

> NN web UI can throw NPE after startup, before standby state is entered
> --
>
>              Key: HDFS-2724
>              URL: https://issues.apache.org/jira/browse/HDFS-2724
>          Project: Hadoop HDFS
>       Issue Type: Sub-task
>       Components: ha, name-node
> Affects Versions: HA branch (HDFS-1623)
>         Reporter: Aaron T. Myers
>         Assignee: Todd Lipcon
>          Fix For: HA branch (HDFS-1623)
>
>      Attachments: hdfs-2724.txt
>
>
> There's a brief period of time (a few seconds) after the NN web server has 
> been initialized, but before the NN's HA state is initialized. If 
> {{dfshealth.jsp}} is hit during this time, a {{NullPointerException}} will be 
> thrown.
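
A minimal sketch of the kind of null guard this description implies, with
purely illustrative names (the real fix lives in the NN web UI code, not in a
class like this):

class NNHealthPageSketch {
  enum HAStateSketch { ACTIVE, STANDBY }

  // Remains null between web server startup and HA state initialization.
  private volatile HAStateSketch state;

  // What dfshealth.jsp would render; reads the field once and tolerates null
  // instead of dereferencing it and throwing a NullPointerException.
  String renderStateForJsp() {
    HAStateSketch s = state;
    if (s == null) {
      return "initializing";
    }
    return s.toString();
  }
}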





HDFS FUSE compile

2012-01-09 Thread Nikolaos Hatzopoulos
Hi Guys,

cd /home/arion/hadoop-0.20.203.0/src/c++/libhdfs

I made the configure file executable:
chmod +x configure

Then I ran it:
./configure

I fixed the m32/m64 issue, but then I got:

gcc -DPACKAGE_NAME=\"libhdfs\" -DPACKAGE_TARNAME=\"libhdfs\"
-DPACKAGE_VERSION=\"0.1.0\" "-DPACKAGE_STRING=\"libhdfs 0.1.0\""
-DPACKAGE_BUGREPORT=\"omal...@apache.org\" -DPACKAGE=\"libhdfs\"
-DVERSION=\"0.1.0\" -DSTDC_HEADERS=1 -DHAVE_SYS_TYPES_H=1
-DHAVE_SYS_STAT_H=1 -DHAVE_STDLIB_H=1 -DHAVE_STRING_H=1 -DHAVE_MEMORY_H=1
-DHAVE_STRINGS_H=1 -DHAVE_INTTYPES_H=1 -DHAVE_STDINT_H=1 -DHAVE_UNISTD_H=1
-DHAVE_DLFCN_H=1 -DLT_OBJDIR=\".libs/\" "-Dsize_t=unsigned int"
-DHAVE_FCNTL_H=1 -Dconst= -Dvolatile= -I. -I. -g -O2 -DOS_LINUX -DDSO_DLFCN
-DCPU=\"amd64\" -m64 -I/usr/java/jdk1.6.0_27/include
-I/usr/java/jdk1.6.0_27/include/linux -Wall -Wstrict-prototypes -MT hdfs.lo
-MD -MP -MF .deps/hdfs.Tpo -c hdfs.c  -fPIC -DPIC -o .libs/hdfs.o


In file included from /usr/include/bits/types.h:31,
 from /usr/include/sys/types.h:31,
 from hdfs.h:22,
 from hdfs.c:19:
/usr/lib/gcc/x86_64-redhat-linux/4.1.2/include/stddef.h:214: error:
duplicate ‘unsigned’
/usr/lib/gcc/x86_64-redhat-linux/4.1.2/include/stddef.h:214: error: two or
more data types in declaration specifiers
hdfs.c: In function ‘errnoFromException’:
hdfs.c:125: warning: cast from pointer to integer of different size
hdfs.c:125: warning: cast from pointer to integer of different size
hdfs.c:125: warning: cast from pointer to integer of different size
hdfs.c:125: warning: cast from pointer to integer of different size
hdfs.c:125: warning: cast from pointer to integer of different size
hdfs.c:125: warning: cast from pointer to integer of different size
hdfs.c:131: warning: cast from pointer to integer of different size
hdfs.c:131: warning: cast from pointer to integer of different size
hdfs.c:131: warning: cast from pointer to integer of different size
hdfs.c:131: warning: cast from pointer to integer of different size
hdfs.c:131: warning: cast from pointer to integer of different size
hdfs.c:131: warning: cast from pointer to integer of different size
hdfs.c:137: warning: cast from pointer to integer of different size
hdfs.c:137: warning: cast from pointer to integer of different size
hdfs.c:137: warning: cast from pointer to integer of different size
hdfs.c:137: warning: cast from pointer to integer of different size
hdfs.c:137: warning: cast from pointer to integer of different size
hdfs.c:137: warning: cast from pointer to integer of different size
hdfs.c: In function ‘hdfsConnectAsUser’:
hdfs.c:228: warning: cast from pointer to integer of different size
hdfs.c:228: warning: cast from pointer to integer of different size
hdfs.c:228: warning: cast from pointer to integer of different size
hdfs.c:228: warning: cast from pointer to integer of different size
hdfs.c:228: warning: cast from pointer to integer of different size
hdfs.c:228: warning: cast from pointer to integer of different size
hdfs.c: In function ‘hdfsGetHosts’:
hdfs.c:1589: warning: cast from pointer to integer of different size
hdfs.c:1589: warning: cast from pointer to integer of different size
hdfs.c: In function ‘getFileInfoFromStat’:
hdfs.c:1802: warning: cast from pointer to integer of different size
hdfs.c:1802: warning: cast from pointer to integer of different size
hdfs.c:1817: warning: cast from pointer to integer of different size
hdfs.c:1817: warning: cast from pointer to integer of different size
hdfs.c:1832: warning: cast from pointer to integer of different size
hdfs.c:1832: warning: cast from pointer to integer of different size


Any suggestions?

---Nikos Hatzopoulos


HDFS FUSE SOLVED

2012-01-09 Thread Nikolaos Hatzopoulos
I had to do:

 ant compile  -Dcompile.c++=true -Dlibhdfs=true

I am not very familiar with the Java build process :)

You need to update the documentation, though.

--Nikos