Can you open a ticket with the exception you saw?

On Fri, Mar 4, 2011 at 2:51 PM, Terje Marthinussen
<tmarthinus...@gmail.com> wrote:
> Ah, yes, I should have noticed that distinction.
> We actually hit this overflow on a row that was more than 60GB (yes, we had
> to count the number of digits a few times to make sure).
> Terje
> On Sat, Mar 5, 2011 at 5:41 AM, Jonathan Ellis <jbel...@gmail.com> wrote:
>>
>> Second try:
>>
>> - this isn't used for row size (which is not limited to 2GB)
>> - it's used both for the column index summary and for index-block reading,
>> both of which should be well under 2GB
>> - however, I don't see any technical reason this method should return
>> an int instead of a long
>> - if we make that change, we should probably add sanity checks in the
>> callers, which have the necessary context to provide better error
>> messages (see the sketch below)
>>
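>> A rough sketch of what that might look like (illustrative only, untested;
>> the caller-side check shown here is hypothetical):
>>
>>     // Return a long so the method itself can no longer overflow.
>>     public long bytesPastMark(FileMark mark)
>>     {
>>         assert mark instanceof BufferedRandomAccessFileMark;
>>         long bytes = getFilePointer() - ((BufferedRandomAccessFileMark) mark).pointer;
>>         assert bytes >= 0;
>>         return bytes;
>>     }
>>
>>     // Hypothetical caller-side check: the index-block reader knows what
>>     // it is reading, so it can report a meaningful error on overflow.
>>     long indexBytes = file.bytesPastMark(mark);
>>     if (indexBytes > Integer.MAX_VALUE)
>>         throw new IllegalStateException("Index block too large (" + indexBytes
>>                                         + " bytes); must be under 2GB");
>>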
>> On Fri, Mar 4, 2011 at 1:36 PM, Terje Marthinussen
>> <tmarthinus...@gmail.com> wrote:
>> > Hi,
>> >
>> > Is there any good reason why this method
>> >
>> >     public int bytesPastMark(FileMark mark)
>> >     {
>> >         assert mark instanceof BufferedRandomAccessFileMark;
>> >         long bytes = getFilePointer() - ((BufferedRandomAccessFileMark) mark).pointer;
>> >
>> >         assert bytes >= 0;
>> >         if (bytes > Integer.MAX_VALUE)
>> >             throw new UnsupportedOperationException("Overflow: " + bytes);
>> >         return (int) bytes;
>> >     }
>> >
>> > doesn't produce an error more like "Overflow: maximum row size is 2GB.
>> > Currently: " + bytes?
>> >
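>> > E.g., a rough sketch of the check I have in mind (the message wording
>> > is just illustrative):
>> >
>> >     if (bytes > Integer.MAX_VALUE)
>> >         // sketch only: include the limit and the actual size in the message
>> >         throw new UnsupportedOperationException(
>> >             "Overflow: maximum row size is 2GB; current size: " + bytes + " bytes");
>> >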
>> > The error you get today is not exactly self-explanatory :)
>> >
>> > Terje
>> >



-- 
Jonathan Ellis
Project Chair, Apache Cassandra
co-founder of DataStax, the source for professional Cassandra support
http://www.datastax.com
