On Tue, May 10, 2011 at 3:44 PM, Terje Marthinussen
<tmarthinus...@gmail.com> wrote:
> Hi,
> If you make a supercolumn today, what you end up with is:
> - short  + "Super Column name"
> - int (local deletion time)
> - long (delete time)
> Byte array of columns, each with:
>   - short + "column name"
>   - int (TTL)
>   - int (local deletion time)
>   - long (timestamp)
>   - int + "value of column"

Almost but not exactly. First, there is a 1-byte flag used to indicate
whether the column is a tombstone or an expiring one. Second, for
tombstones the local deletion time is actually stored as part of the
value, so we don't have that separate 'int (local deletion time)'.
Third, we only serialize the TTL for expiring columns, and in that case
we serialize two ints (the TTL and the local expiration time, which is
probably the one you called local deletion time above).
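To make the layout concrete, here is a minimal sketch of the serialization described above. This is illustrative only, not Cassandra's actual code: the flag value and field order are assumptions, and names like `serialize_regular` are hypothetical.

```python
import struct

EXPIRING_FLAG = 0x01  # hypothetical flag value, for illustration only

def serialize_regular(name: bytes, value: bytes, timestamp: int) -> bytes:
    # short name-length + name, 1-byte flag, long timestamp,
    # int value-length + value
    return (struct.pack(">H", len(name)) + name +
            struct.pack(">B", 0) +
            struct.pack(">q", timestamp) +
            struct.pack(">i", len(value)) + value)

def serialize_expiring(name: bytes, value: bytes, timestamp: int,
                       ttl: int, local_expiration: int) -> bytes:
    # same as above, plus two ints (TTL and local expiration time)
    return (struct.pack(">H", len(name)) + name +
            struct.pack(">B", EXPIRING_FLAG) +
            struct.pack(">ii", ttl, local_expiration) +
            struct.pack(">q", timestamp) +
            struct.pack(">i", len(value)) + value)

# Overhead with empty name/value: 2+1+8+4 = 15 bytes for a regular
# column, 15+4+4 = 23 bytes for an expiring one
assert len(serialize_regular(b"", b"", 0)) == 15
assert len(serialize_expiring(b"", b"", 0, 0, 0)) == 23
```

This matches the correction: 15 bytes is 7 less than the 22 in the original accounting, and 23 is 1 more.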

Anyway, to sum up: expiring columns are 1 byte more and non-expiring
ones are 7 bytes less. Not arguing, it's still fairly verbose,
especially with tons of very small columns.
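For what it's worth, the terabyte figure below checks out as a back-of-the-envelope calculation. The counts here are made up to match the quoted ranges ("a few billion" supercolumns, "30-50" subcolumns), and the per-column figure is Terje's original 22 bytes:

```python
per_column_overhead = 22    # bytes per subcolumn, per the accounting below
supercolumn_overhead = 14   # bytes per supercolumn
supercolumns = 1.5e9        # "a few billion" (assumed)
subcolumns_each = 40        # "some 30-50" (assumed)

total = supercolumns * (supercolumn_overhead
                        + subcolumns_each * per_column_overhead)
print(total / 1e12)  # ~1.34 TB of metadata, in the quoted 1.2-1.5TB range
```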

> That is, meta data and serialization overhead adds up to:
> 2+4+8 = 14 bytes for the supercolumn
> 2+4+4+8+4 = 22 bytes for each column the supercolumn has
> Yes, disk space is cheap and all that, but trying to handle a few billion
> supercolumns which each have some 30-50 subcolumns, I am looking at some
> 1.2-1.5TB of meta data which makes the metadata by itself some 3-4 times the
> original data. That does seem a bit excessive when you also throw in RF=3 and
> the requirement for extra diskspace to safely survive compactions.
> And yes, this is without considering the overhead of column names.
> I can see a handful of ways to reduce this quite a bit, for instance by:
> - not adding TTL/deletion time if not needed (some compact bitmap structure
> to turn on/off fields?)

As said, we already do that.

> - inherit timestamps from the supercolumn

Columns inside a supercolumn have no reason to share the same timestamp
(or even close ones, for that matter). But maybe you're talking about
something more subtle, in which case yes, there are ways to compress the
data.
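One such idea, sketched here purely for illustration (this is not Cassandra code, and the encoding is an assumption): store a single base timestamp per supercolumn and a varint-encoded delta per subcolumn, so timestamps written close together in time shrink from 8 bytes each to a few bytes.

```python
def encode_varint(n: int) -> bytes:
    # Standard little-endian base-128 varint: 7 data bits per byte,
    # high bit set on all but the last byte
    out = bytearray()
    while True:
        b = n & 0x7F
        n >>= 7
        if n:
            out.append(b | 0x80)
        else:
            out.append(b)
            return bytes(out)

def compress_timestamps(timestamps):
    # One full 8-byte base per supercolumn, small deltas per subcolumn
    base = min(timestamps)
    return base, b"".join(encode_varint(t - base) for t in timestamps)

# Microsecond timestamps written within the same millisecond: the three
# deltas pack into 4 bytes instead of 3 * 8 = 24
base, packed = compress_timestamps(
    [1305034800000000 + d for d in (0, 120, 950)])
assert len(packed) < 24
```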

> There may also be some interesting ways to compress this data assuming that
> the timestamps are generally in the same time ranges (shared "prefixes"
> for instance), but that gets a bit more complex.
> Any opinions or plans?

There is and has been a lot of discussion around this. The first ticket
to tackle it is actually pretty old:
https://issues.apache.org/jira/browse/CASSANDRA-47 (but you probably
already know about that ticket). There's also talk of completely
rewriting the file format
(https://issues.apache.org/jira/browse/CASSANDRA-674).

I'm not taking much of a risk in saying that at least some form of
compression will happen some day. I wouldn't go as far as giving you a
date, though.

--
Sylvain

> Sorry, I could not find any JIRAs on the topic, but I wouldn't be
> surprised if one exists.
> Yes, I could serialize this myself outside of Cassandra, but that would sort
> of defeat the purpose of using a more advanced storage system like
> Cassandra.
> Regards,
> Terje
