Hi,
I am using AvroKeyRecordWriter, which wraps DataFileWriter#append(datum),
to create Avro files. As I write data, an error sometimes occurs due to an
encoding problem (e.g. a non-nullable field isn't set in a record). I
would like to be able to log the AppendWriteException and continue writing.
The documentation for AppendWriteException says, "When this is thrown,
the file is unaltered and may continue to be appended to." So, yes,
after you have caught this exception you may safely continue to append
entries to the still-open file.
Doug
On Wed, Jul 10, 2013 at 7:37 AM, Josh Spiegel wrote:
Sorry I missed that and thanks for the reply.
Thanks,
Josh
On Wed, Jul 10, 2013 at 9:05 AM, Doug Cutting wrote:
> The documentation for AppendWriteException says, "When this is thrown,
> the file is unaltered and may continue to be appended to." So, yes,
> after you have caught this exception
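The catch-and-continue approach Doug confirms can be sketched as follows. This is a stdlib-only stand-in so it runs without the Avro jar: the Record class and append() here are illustrative substitutes for an Avro record and DataFileWriter#append, not the real API.

```java
import java.util.ArrayList;
import java.util.List;

// A minimal sketch of the catch-and-continue pattern confirmed above.
// Real code would call DataFileWriter#append and catch
// AppendWriteException; here a stand-in append() throws for records
// with a missing required field.
public class AppendAndContinue {
    // Stand-in for an Avro record with one required field.
    static class Record {
        final String name;
        Record(String name) { this.name = name; }
    }

    // Stand-in for DataFileWriter#append: rejects a bad record but,
    // like AppendWriteException, leaves previously written data unaltered.
    static void append(List<String> file, Record r) {
        if (r.name == null) {
            throw new RuntimeException("required field 'name' is not set");
        }
        file.add(r.name);
    }

    static List<String> writeAll(List<Record> records) {
        List<String> file = new ArrayList<>();
        for (Record r : records) {
            try {
                append(file, r);
            } catch (RuntimeException e) {
                // Log the failure and keep appending to the still-open file.
                System.err.println("skipping bad record: " + e.getMessage());
            }
        }
        return file;
    }

    public static void main(String[] args) {
        List<Record> in = List.of(new Record("a"), new Record(null), new Record("b"));
        System.out.println(writeAll(in)); // prints [a, b]
    }
}
```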
Hello,
I am using the DataFileWriter to create an Avro file and was wondering: is
DataFileWriter#close the only way of committing the data to the file? I
tried doing a DataFileWriter#flush but no dice. I want to be able to keep
appending to an Avro file without needing to close it.
On Wed, Jul 10, 2013 at 9:35 AM, kulkarni.swar...@gmail.com
wrote:
> I tried doing a DataFileWriter#flush but no dice.
That should work. How did it fail?
Doug
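The behavior Doug says flush should have can be demonstrated with plain java.io streams (not the Avro classes): after flush(), buffered bytes become visible in the underlying destination without closing the stream, so appending can continue.

```java
import java.io.BufferedOutputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

// Shows, with stdlib streams, the flush-without-close behavior expected
// of a writer: bytes are committed to the underlying destination while
// the stream stays open for further appends.
public class FlushWithoutClose {
    static int[] demo() throws IOException {
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        BufferedOutputStream buffered = new BufferedOutputStream(sink);

        buffered.write("record-1".getBytes(StandardCharsets.UTF_8));
        int beforeFlush = sink.size();   // bytes still sit in the buffer

        buffered.flush();
        int afterFlush = sink.size();    // committed without close()

        buffered.write("record-2".getBytes(StandardCharsets.UTF_8));
        buffered.flush();
        int afterAppend = sink.size();   // appending continued fine
        return new int[] { beforeFlush, afterFlush, afterAppend };
    }

    public static void main(String[] args) throws IOException {
        int[] sizes = demo();
        System.out.println(sizes[0] + " -> " + sizes[1] + " -> " + sizes[2]); // 0 -> 8 -> 16
    }
}
```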
Anup,
That was a bug I mistakenly committed yesterday that broke the build.
I've fixed this and trunk should now build successfully again.
Sorry,
Doug
On Tue, Jul 9, 2013 at 7:15 PM, anup ahire wrote:
> Hello,
>
> I downloaded trunk today but I am not able to build it. I am seeing
> following
Looking at the source for flush on the DataFileWriter [1], it looks like we
do not flush the actual OutputStream that we are writing data to. Should we
possibly have an out.flush() in there as well?
[1]
http://grepcode.com/file/repo1.maven.org/maven2/org.apache.avro/avro/1.7.4/org/apache/avro/file/
The flush of the wrapper stream should also flush its underlying stream.
Doug
On Wed, Jul 10, 2013 at 9:52 AM, kulkarni.swar...@gmail.com
wrote:
> Looking at the source for flush on the DataFileWriter[1], it looks like we
> do not flush the actual outputstream that we are writing data to. Should
> we possibly have an out.flush() in there as well?
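Doug's point about wrapper streams can be illustrated with java.io.FilterOutputStream, which many wrapper streams extend: its flush() delegates to the stream it wraps, so flushing the wrapper also flushes the underlying stream. The CountingStream class below is an illustrative helper, not part of any real API.

```java
import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// A well-behaved wrapper stream's flush() delegates to the stream it
// wraps; FilterOutputStream does exactly this.
public class WrapperFlush {
    // Underlying stream that records how many times it was flushed.
    static class CountingStream extends OutputStream {
        int flushes = 0;
        @Override public void write(int b) {}
        @Override public void flush() { flushes++; }
    }

    static int flushCount() throws IOException {
        CountingStream underlying = new CountingStream();
        FilterOutputStream wrapper = new FilterOutputStream(underlying);
        wrapper.flush();                 // FilterOutputStream calls out.flush()
        return underlying.flushes;
    }

    public static void main(String[] args) throws IOException {
        System.out.println("underlying flushed " + flushCount() + " time(s)");
    }
}
```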