Hello,

I am working on some experimental modifications to Cassandra, and I was wondering if someone could help me with an easy answer to this:

Say I have a column family in the system keyspace where I am keeping some bookkeeping data (at times a couple hundred thousand columns).

I don't want to load all of the columns into memory before operating on them, so I slice off a limited number of columns at a time:

    ColumnFamilyStore store = Table.open(Table.SYSTEM_TABLE).getColumnFamilyStore(MY_CF);

    if (store.isEmpty())
        return null;

    ByteBuffer rowKey = GreenCassandraConsistencyManager.MY_ROW;
    DecoratedKey<?> epkey = StorageService.getPartitioner().decorateKey(rowKey);
    QueryFilter filter = QueryFilter.getSliceFilter(epkey, path, startColumn, endColumn, false, limit);
    ColumnFamily cf = ColumnFamilyStore.removeDeleted(store.getColumnFamily(filter),
                                                      (int) (System.currentTimeMillis() / 1000));
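To make the paging pattern concrete, here is a minimal, self-contained sketch of the idea behind the loop above. It does not use the real Cassandra classes (ColumnFamilyStore, QueryFilter, etc. are replaced by a hypothetical stand-in: a sorted map plays the role of the row, and page() plays the role of a slice query with a start column and a limit), so everything here is an illustrative assumption, not Cassandra's actual API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.SortedMap;
import java.util.TreeMap;

public class SlicePager {
    // Stand-in for a slice query: return up to `limit` column names
    // starting at `startColumn` (inclusive), in sorted order.
    static List<String> page(TreeMap<String, byte[]> columns, String startColumn, int limit) {
        List<String> names = new ArrayList<>();
        SortedMap<String, byte[]> tail = columns.tailMap(startColumn);
        for (String name : tail.keySet()) {
            if (names.size() >= limit)
                break;
            names.add(name);
        }
        return names;
    }

    public static void main(String[] args) {
        // A "row" with ten columns, sorted by name as a comparator would sort them.
        TreeMap<String, byte[]> columns = new TreeMap<>();
        for (int i = 0; i < 10; i++)
            columns.put(String.format("col%02d", i), new byte[0]);

        // Walk the row one slice at a time instead of loading every column.
        String start = "";  // empty start means "from the beginning"
        int limit = 4;
        int seen = 0;
        while (true) {
            List<String> slice = page(columns, start, limit);
            if (slice.isEmpty())
                break;
            seen += slice.size();
            // Next slice starts just past the last column we saw; appending
            // a zero byte is a simple "next key" trick for string names.
            start = slice.get(slice.size() - 1) + "\0";
        }
        System.out.println(seen); // prints 10
    }
}
```

The point is only that each iteration holds at most `limit` columns in memory, with the last name of one slice seeding the start of the next.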

My question is: can I do something similar if I have thousands of *sub*columns? Currently I am doing this:

    endpointColumn.getColumn(columnName).getSubColumns()

which returns a collection of all the subcolumns.

But this seems like it would load everything into memory at once. Can I slice the subcolumns the same way I slice the regular columns?

Thanks,
Bill Katsak

