Hey Philip,
how can I enable "append to an existing file" in Hadoop?
Thanks,
Robert
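
For reference, in the 0.19/0.20 line the append operation is gated by the
dfs.support.append property in hdfs-site.xml. A sketch of the setting
(note the feature was considered experimental and is off by default in
these releases):

```xml
<!-- hdfs-site.xml: enable the experimental append operation (0.19.x/0.20.x) -->
<property>
  <name>dfs.support.append</name>
  <value>true</value>
</property>
```

The NameNode and DataNodes need a restart after changing this for it to
take effect.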
Philip Zeyliger wrote:
> HDFS does not allow you to overwrite bytes of a file that have already been
> written. The only operations it supports are read (an existing file), write
> (a new file), and (in newer versions, not always enabled) append (to an
> existing file).
>
> -- Philip
>
> On Fri, May 1, 2009 at 5:56 PM, Robert Engel <[email protected]>wrote:
>
> Hello,
>
> I am using Hadoop on a small storage cluster (x86_64, CentOS 5.3,
> Hadoop-0.19.1). The hdfs is mounted using fuse and everything seemed
> to work just fine so far. However, I noticed that I cannot:
>
> 1) use svn to check out files on the mounted hdfs partition
> 2) request that stdout and stderr of Globus jobs be written to the
> hdfs partition
>
> In both cases I see the following error message in /var/log/messages:
>
> fuse_dfs: ERROR: could not connect open file fuse_dfs.c:1364
>
> When I run fuse_dfs in debugging mode I get:
>
> ERROR: cannot open an hdfs file in O_RDWR mode
> unique: 169, error: -5 (Input/output error), outsize: 16
>
> My question is whether this is a general limitation of Hadoop, or
> whether this operation is just not supported yet. I searched Google
> and JIRA but could not find an answer.
>
> Thanks,
> Robert
>