I keep thinking about Cassandra's timestamp usage, and I feel that for a
lot of applications, swallowing a 2-4x additional memory cost might be a
nonstarter.

Has there been any discussion of using alternative date encodings?

Maybe 1ms resolution is more than we need... perhaps 10ms resolution? Or even
100ms resolution?

Using 4 bytes at 100ms resolution, you can fit about 13 years of timestamps if
you use the time you deploy the Cassandra DB (aka 'now') as the epoch.

Even 5 bytes at 1ms resolution covers 34 years.

That's 37% less memory!
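The arithmetic above is easy to sanity-check: an N-byte unsigned counter of
ticks at a given resolution covers 2^(8N) ticks from the custom epoch. A quick
sketch (the function name here is just for illustration):

```python
# Years representable by an unsigned num_bytes-wide counter of
# tick_ms-millisecond ticks, counted from a custom epoch.
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def range_in_years(num_bytes, tick_ms):
    ticks = 2 ** (8 * num_bytes)          # total distinct tick values
    seconds = ticks * tick_ms / 1000.0    # span in seconds
    return seconds / SECONDS_PER_YEAR

print(round(range_in_years(4, 100), 1))   # 4 bytes @ 100ms -> ~13.6 years
print(round(range_in_years(5, 1), 1))     # 5 bytes @ 1ms   -> ~34.8 years
```

And 5 bytes versus the current 8-byte timestamp is where the 37% (3/8) savings
comes from.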

In most of our applications, we would NEVER see concurrent writers on the
same key because we partition the jobs so that this doesn't happen.

I'd probably be fine with 100ms resolution.

Allowing the user to tune this would be interesting as well.

-- 

Founder/CEO Spinn3r.com

Location: *San Francisco, CA*
Skype: *burtonator*

Skype-in: *(415) 871-0687*
