Re: silent data loss during append
What version are you using?

On Thu, Apr 14, 2011 at 3:55 PM, Thanh Do wrote:
> Hi all,
>
> I have recently seen silent data loss in our system.
> Here is the case:
>
> 1. The client appends to some block.
> 2. For some reason, commitBlockSynchronization
>    returns successfully with synclist = [] (i.e. empty).
> 3. In the client code, NO exception is thrown, and
>    the client appends successfully.
> 4. However, the block replicas are then removed from
>    the datanodes, causing data loss.
>
> Has anyone seen this before?
> Is this behavior by design or a bug?
>
> Many thanks,
> Thanh
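Because no exception reaches the client, the append looks identical to a successful one. Until the underlying bug is addressed, one mitigation is to read the appended byte range back after closing the stream, so that missing replicas surface as an IOException instead of silent loss. The sketch below is a hypothetical client-side wrapper, not an HDFS facility; the class and method names are illustrative, written against the 0.20-era FileSystem API:

import java.io.IOException;
import java.util.Arrays;

import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical defensive wrapper: append, then read the appended range
// back so replica loss is detected rather than silently ignored.
public class VerifiedAppend {
  public static void appendAndVerify(FileSystem fs, Path path, byte[] data)
      throws IOException {
    long offset = fs.getFileStatus(path).getLen(); // length before the append
    FSDataOutputStream out = fs.append(path);
    try {
      out.write(data);
    } finally {
      out.close();
    }
    byte[] readBack = new byte[data.length];
    FSDataInputStream in = fs.open(path);
    try {
      in.readFully(offset, readBack); // positioned read of the appended range
    } finally {
      in.close();
    }
    if (!Arrays.equals(data, readBack)) {
      throw new IOException("appended bytes did not survive at " + path);
    }
  }
}

This trades an extra read per append for early detection; it does not prevent the loss, it only makes it visible at write time.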
open a file in read and write mode simultaneously?
Hello,

I am trying to open a file in HDFS in WRONLY | CREAT mode and in RDONLY mode simultaneously, using one handle for the first mode and a second handle to the same file for the second mode. I have not been able to do so, although I believe HDFS allows concurrent reads and writes. Please let me know how I can achieve concurrent reading and writing of the same file.

Thanks,
Aastha.

--
Aastha Mehta
Intern, NetApp, Bangalore
4th year undergraduate, BITS Pilani
E-mail: aasth...@gmail.com
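For context: HDFS allows only a single writer per file, but a second handle opened for read on the same path is permitted while the writer is open; the reader simply may not observe bytes the writer has not yet flushed, and visibility of the last, unclosed block varies by release. A minimal sketch with the Java FileSystem API follows (the path is illustrative, and the sync() flush call is the 0.20-append-era name; it became hflush() in 0.21+, so treat the exact method as an assumption for your version):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch: one writer handle and one reader handle on the same HDFS file.
public class ConcurrentReadWrite {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path p = new Path("/tmp/concurrent-demo.txt"); // illustrative path

    FSDataOutputStream out = fs.create(p); // write handle (WRONLY | CREAT)
    out.writeBytes("first line\n");
    out.sync(); // flush so readers can see the data; hflush() in 0.21+

    FSDataInputStream in = fs.open(p); // independent read handle (RDONLY)
    byte[] buf = new byte[32];
    int n = in.read(buf);
    System.out.println("reader saw " + n + " bytes");

    in.close();
    out.close();
    fs.close();
  }
}

The key point is that the two handles come from separate create()/open() calls; there is no single read-write open mode in the HDFS client API.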
Hadoop-Hdfs-trunk - Build # 638 - Still Failing
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/638/

### ## LAST 60 LINES OF THE CONSOLE ###

[...truncated 713453 lines...]
[mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target
[echo] Including clover.jar in the war file ...
[cactifywar] Analyzing war: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/hdfsproxy-2.0-test.war
[cactifywar] Building war: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/test.war

cactifywar:

test-cactus:
[echo] Free Ports: startup-44211 / http-44212 / https-44213
[echo] Please take a deep breath while Cargo gets the Tomcat for running the servlet tests...
[mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config
[mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
[mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/webapps
[mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/temp
[mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/logs
[mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/reports
[copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
[copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
[copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
[cactus] -----------------------------------------------------
[cactus] Running tests against Tomcat 5.x @ http://localhost:44212
[cactus] -----------------------------------------------------
[cactus] Deploying [/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/test.war] to [/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/webapps]...
[cactus] Tomcat 5.x starting... Server [Apache-Coyote/1.1] started
[cactus] WARNING: multiple versions of ant detected in path for junit
[cactus] jar:file:/homes/hudson/tools/ant/latest/lib/ant.jar!/org/apache/tools/ant/Project.class
[cactus] and jar:file:/homes/hudson/.ivy2/cache/ant/ant/jars/ant-1.6.5.jar!/org/apache/tools/ant/Project.class
[cactus] Running org.apache.hadoop.hdfsproxy.TestAuthorizationFilter
[cactus] Tests run: 4, Failures: 2, Errors: 0, Time elapsed: 0.459 sec
[cactus] Test org.apache.hadoop.hdfsproxy.TestAuthorizationFilter FAILED
[cactus] Running org.apache.hadoop.hdfsproxy.TestLdapIpDirFilter
[cactus] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.31 sec
[cactus] Tomcat 5.x started on port [44212]
[cactus] Running org.apache.hadoop.hdfsproxy.TestProxyFilter
[cactus] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.324 sec
[cactus] Running org.apache.hadoop.hdfsproxy.TestProxyForwardServlet
[cactus] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.33 sec
[cactus] Running org.apache.hadoop.hdfsproxy.TestProxyUtil
[cactus] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.867 sec
[cactus] Tomcat 5.x is stopping...
[cactus] Tomcat 5.x is stopped

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:753: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:734: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/src/contrib/build.xml:49: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/src/contrib/hdfsproxy/build.xml:343: Tests failed!

Total time: 52 minutes 0 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure

### ## FAILED TESTS (if any) ##

2 tests failed.
FAILED: org.apache.hadoop.hdfsproxy.TestAuthorizationFilter
Re: silent data loss during append
I am using Cloudera's distribution, version hadoop-0.20.2+738.

On Thu, Apr 14, 2011 at 6:23 PM, Ted Dunning wrote:
> What version are you using?
>
> On Thu, Apr 14, 2011 at 3:55 PM, Thanh Do wrote:
>
>> Hi all,
>>
>> I have recently seen silent data loss in our system.
>> Here is the case:
>>
>> 1. The client appends to some block.
>> 2. For some reason, commitBlockSynchronization
>>    returns successfully with synclist = [] (i.e. empty).
>> 3. In the client code, NO exception is thrown, and
>>    the client appends successfully.
>> 4. However, the block replicas are then removed from
>>    the datanodes, causing data loss.
>>
>> Has anyone seen this before?
>> Is this behavior by design or a bug?
>>
>> Many thanks,
>> Thanh
Re: silent data loss during append
Hi Thanh,

There were some known bugs in the append feature in Apache Hadoop 0.19. These bugs were fixed in both Apache 0.20-append and Apache 0.21. For Cloudera's distribution I have no idea; you may want to ask in Cloudera's mailing lists.

I am sorry for the bugs.

Regards,
Nicholas

From: Thanh Do
To: Ted Dunning
Cc: hdfs-u...@hadoop.apache.org; hdfs-dev@hadoop.apache.org
Sent: Fri, April 15, 2011 8:02:05 AM
Subject: Re: silent data loss during append

I am using Cloudera's distribution, version hadoop-0.20.2+738.

On Thu, Apr 14, 2011 at 6:23 PM, Ted Dunning wrote:
> What version are you using?
>
> On Thu, Apr 14, 2011 at 3:55 PM, Thanh Do wrote:
>
>> Hi all,
>>
>> I have recently seen silent data loss in our system.
>> Here is the case:
>>
>> 1. The client appends to some block.
>> 2. For some reason, commitBlockSynchronization
>>    returns successfully with synclist = [] (i.e. empty).
>> 3. In the client code, NO exception is thrown, and
>>    the client appends successfully.
>> 4. However, the block replicas are then removed from
>>    the datanodes, causing data loss.
>>
>> Has anyone seen this before?
>> Is this behavior by design or a bug?
>>
>> Many thanks,
>> Thanh
[jira] [Created] (HDFS-1840) Terminate LeaseChecker when all writing files are closed.
Terminate LeaseChecker when all writing files are closed.
----------------------------------------------------------

                 Key: HDFS-1840
                 URL: https://issues.apache.org/jira/browse/HDFS-1840
             Project: Hadoop HDFS
          Issue Type: Improvement
          Components: hdfs client
            Reporter: Tsz Wo (Nicholas), SZE
            Assignee: Tsz Wo (Nicholas), SZE

In {{DFSClient}}, when there are files opened for write, a {{LeaseChecker}} thread is started to update the leases periodically. However, it never terminates, even after all the files opened for write have been closed.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
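The improvement amounts to letting the renewal thread's run loop exit once the set of files under lease becomes empty, instead of sleeping forever. A self-contained sketch of that pattern follows; it is purely illustrative (class, field, and method names are hypothetical, not the real DFSClient internals, and the renewal interval is made up):

import java.util.HashSet;
import java.util.Set;

// Illustrative sketch of a lease-renewer thread that terminates itself
// when no files remain open for write.
public class SelfTerminatingLeaseChecker implements Runnable {
  private final Set<String> filesBeingWritten = new HashSet<String>();
  private Thread daemon; // non-null only while renewal is needed

  public synchronized void put(String src) {
    filesBeingWritten.add(src);
    if (daemon == null) {            // start renewal lazily
      daemon = new Thread(this);
      daemon.setDaemon(true);
      daemon.start();
    }
  }

  public synchronized void remove(String src) {
    filesBeingWritten.remove(src);
    if (filesBeingWritten.isEmpty()) {
      daemon = null;                 // signal the run loop to exit
      notifyAll();
    }
  }

  public void run() {
    while (true) {
      synchronized (this) {
        if (daemon != Thread.currentThread()) {
          return;                    // all writers closed: terminate
        }
        renewLeases();               // placeholder for the namenode RPC
        try {
          wait(30 * 1000);           // renewal interval; value illustrative
        } catch (InterruptedException e) {
          return;
        }
      }
    }
  }

  private void renewLeases() { /* namenode RPC elided */ }
}

A later put() simply starts a fresh thread, so the client no longer leaks an idle daemon after its last writer closes.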
[jira] [Created] (HDFS-1841) Enforce read-only permissions in FUSE open()
Enforce read-only permissions in FUSE open()
---------------------------------------------

                 Key: HDFS-1841
                 URL: https://issues.apache.org/jira/browse/HDFS-1841
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: contrib/fuse-dfs
    Affects Versions: 0.20.2
         Environment: Linux 2.6.35
            Reporter: Brian Bloniarz
            Priority: Minor

fuse-dfs currently allows files to be created on a read-only filesystem:

$ fuse_dfs_wrapper.sh dfs://example.com:8020 ro ~/hdfs
$ touch ~/hdfs/foobar

Attached is a simple patch, which does two things:
1) Checks the read_only flag inside dfs_open().
2) Passes the read-only mount option to FUSE when ro is specified on the command line. This is probably the better long-term solution: the kernel will then enforce read-only semantics without the client having to check.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HDFS-1842) Cannot upgrade 0.20.203 to 0.21 with an editslog present
Cannot upgrade 0.20.203 to 0.21 with an editslog present
---------------------------------------------------------

                 Key: HDFS-1842
                 URL: https://issues.apache.org/jira/browse/HDFS-1842
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: name-node
    Affects Versions: 0.20.203.0
            Reporter: Allen Wittenauer
            Priority: Blocker

If a user installs 0.20.203 and then upgrades to 0.21 with an editslog present, 0.21 will corrupt the file system due to opcode reuse.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
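The hazard behind "opcode reuse" is that the two release lines assigned different meanings to the same editslog opcode byte, so a 0.21 namenode replaying a 0.20.203 log interprets old records as the wrong operations. A purely hypothetical illustration follows; the opcode value and the operation names are made up and are not the real FSEditLog constants:

// Hypothetical illustration of opcode reuse between two release lines.
// The byte value 100 and the operation semantics are invented.
public class OpcodeClash {
  // Release line A logs a "set quota" record under opcode 100.
  static final byte A_OP_SET_QUOTA = 100;
  // Release line B independently assigns opcode 100 to "delete".
  static final byte B_OP_DELETE = 100;

  public static void main(String[] args) {
    byte fromLog = A_OP_SET_QUOTA; // written by the old namenode
    // The new namenode decodes the same byte with its own opcode table:
    if (fromLog == B_OP_DELETE) {
      System.out.println(
          "record replayed as a delete -> metadata corruption on upgrade");
    }
  }
}

This is why the issue is a blocker only when an editslog is present: a cleanly checkpointed image has no pending records to misinterpret.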