Hello there. I'm trying to sort out whether Cassandra is a good pick as the data store for a problem I've got.
The data is essentially a large number of hash tables. At the merely-pretty-big scale, it can all run on one pretty big machine; at the gigantic scale, which is an eventual goal, it will need to spread out over multiple machines. My concern, in essence, is whether Cassandra will scale down gracefully to "merely big", or whether I need to code against something else for that case and be able to switch later. Picture, say, 49 tables, each storing a million hashes. Our current prototype uses Redis.
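To make the shape concrete, here's a rough Python sketch of how one of our hashes looks in the Redis prototype and how I imagine the same data might map onto a Cassandra table. It assumes the redis-py and DataStax cassandra-driver packages, and names like table_7 and item_42 are made up for illustration; this is how I picture the mapping, not something we've settled on.

import redis
from cassandra.cluster import Cluster

# Current prototype: each logical "table" is a set of Redis hashes,
# keyed as "<table>:<item>". Names here are purely illustrative.
r = redis.Redis(host="localhost", port=6379)
r.hset("table_7:item_42", mapping={"field_a": "1", "field_b": "2"})
print(r.hgetall("table_7:item_42"))

# How I imagine the same thing in Cassandra: one CQL table per logical
# table, with the hash key as the partition key and each hash field as
# a clustering row within that partition.
cluster = Cluster(["127.0.0.1"])
session = cluster.connect()
session.execute("""
    CREATE KEYSPACE IF NOT EXISTS hashstore
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
""")
session.execute("""
    CREATE TABLE IF NOT EXISTS hashstore.table_7 (
        item_id     text,
        field_name  text,
        field_value text,
        PRIMARY KEY (item_id, field_name)
    )
""")
session.execute(
    "INSERT INTO hashstore.table_7 (item_id, field_name, field_value) VALUES (%s, %s, %s)",
    ("item_42", "field_a", "1"),
)
rows = session.execute(
    "SELECT field_name, field_value FROM hashstore.table_7 WHERE item_id = %s",
    ("item_42",),
)
print({row.field_name: row.field_value for row in rows})

So the question is really whether that kind of layout behaves sensibly on a single node before we ever need to spread it out.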