On Tue, Sep 30, 2014 at 2:25 AM, Robert Coli <rc...@eventbrite.com> wrote:

> On Mon, Sep 22, 2014 at 3:50 AM, Carlos Scheidecker <nando....@gmail.com>
> wrote:
>
>> I can successfully read a file to a ByteBuffer and then write to a
>> Cassandra blob column. However, when I retrieve the value of the column,
>> the size of the ByteBuffer retrieved is bigger than the original ByteBuffer
>> where the file was read from. Writing to the disk, corrupts the image.
>>
>
> Probably don't write binary blobs like images into a database, use a
> distributed filesystem?
>

I've very successfully stored lots of small images in Cassandra, so I have
to disagree with that far-too-quick conclusion. Cassandra always reads blobs
in their entirety, so it's definitely not very good with very large
blobs, but there are many cases where images are known to be pretty small (I
was personally storing thumbnails), and in those cases it is my experience
that Cassandra is a very viable solution.


> But I agree that this behavior sounds like a bug, I would probably file it
> as a JIRA on http://issues.apache.org and then tell the list the URL of
> the JIRA you filed.
>

I actually doubt it is a bug, and it's almost certainly not a Cassandra bug
(so please, do *not* open a JIRA on http://issues.apache.org). I suspect a
bad use of the ByteBuffer API (which is definitely a very confusing API,
but it's what Java gives us). Typically, in your snippet of code above,
the line:
byte[] data = new byte[buffer.limit()];
is incorrect. 'buffer.limit()' is not the number of valid bytes in the
buffer; you should use 'buffer.remaining()' for that. You should also be
careful about messing with 'arrayOffset'; a line like
    buf.position(buf.arrayOffset());
(also from one of your snippets above) is almost surely wrong.
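For what it's worth, here is a minimal sketch of the pattern I'd use to pull
the valid bytes out of a ByteBuffer (class and method names here are just
illustrative, not from your code; the key point is using 'remaining()' and
never touching 'position' or 'arrayOffset' by hand):

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

public class BlobCopy {
    // Extracts the valid bytes from a ByteBuffer, regardless of whether it
    // is backed by a shared array or starts at a non-zero position/offset.
    static byte[] toBytes(ByteBuffer buffer) {
        // Work on a duplicate so the caller's position is left untouched.
        ByteBuffer buf = buffer.duplicate();
        // remaining() == limit - position: the number of valid bytes.
        byte[] data = new byte[buf.remaining()];
        buf.get(data); // copies exactly remaining() bytes from position
        return data;
    }

    public static void main(String[] args) {
        // Simulate a buffer whose content is a slice of a larger backing
        // array (non-zero offset), as drivers commonly return.
        byte[] backing = new byte[16];
        byte[] payload = {1, 2, 3, 4};
        System.arraycopy(payload, 0, backing, 5, payload.length);
        ByteBuffer buf = ByteBuffer.wrap(backing, 5, payload.length);

        byte[] out = toBytes(buf);
        System.out.println(Arrays.equals(out, payload)); // prints "true"
    }
}
```

Note that with 'buffer.limit()' in the example above you would have gotten 9
bytes (offset + length), not 4, which is exactly the kind of size mismatch
you are seeing.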

--
Sylvain
