If I am correct, then you need to restart Cassandra whenever you add a new
keyspace. That's another concern.
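
(For context: in the 0.6.x releases current when this thread was written,
keyspaces are defined statically in conf/storage-conf.xml, which is why
adding one requires a restart; the live schema updates planned for 0.7
remove this. A minimal entry, with illustrative names, looks roughly like:)

    <Keyspaces>
      <Keyspace Name="Client0001">
        <!-- one ColumnFamily per logical table; names here are made up -->
        <ColumnFamily Name="Events" CompareWith="BytesType"/>
        <ReplicaPlacementStrategy>org.apache.cassandra.locator.RackUnawareStrategy</ReplicaPlacementStrategy>
        <ReplicationFactor>1</ReplicationFactor>
        <EndPointSnitch>org.apache.cassandra.locator.EndPointSnitch</EndPointSnitch>
      </Keyspace>
    </Keyspaces>

Multiplied by 4,000 clients, that file (and the restart it implies) is a
large part of what makes keyspace-per-tenant painful.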

Vineet Daniel
Cell: +91-8106217121
Websites: Blog <http://vinetedaniel.blogspot.com> | Linkedin
<http://in.linkedin.com/in/vineetdaniel> | Twitter
<https://twitter.com/vineetdaniel>





On Fri, Sep 3, 2010 at 2:58 PM, Mike Peters
<cassan...@softwareprojects.com> wrote:

> Very interesting. Thank you.
>
> So it sounds like, other than being able to quickly truncate customer
> keyspaces, there's no real benefit in Cassandra to keeping each customer's
> data in a separate keyspace.
>
> We'll suffer on the memory side with all the switching between keyspaces,
> so we're better off storing all customer data under the same keyspace?
>
>
>
> On 9/2/2010 11:29 PM, Aaron Morton wrote:
>
> Create one big happy love-in keyspace. Use the key structure to identify
> the different clients' data.
>
> There is more support for multi-tenant systems coming, but a lot of the
> memory configuration is per keyspace/column family, so you cannot run
> that many keyspaces.
>
> This page has some more information:
> http://wiki.apache.org/cassandra/MultiTenant
>
> Aaron
>
>
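
To make "use the key structure" concrete, here is a minimal sketch in plain
Python (no Cassandra client; the zero-padded client id and the
"client:entity:id" layout are illustrative assumptions, not something from
the thread):

    # All clients share one keyspace/column family; every row key carries
    # the client id, so one tenant's rows are easy to address.

    def row_key(client_id, entity, entity_id):
        # e.g. row_key(42, "user", "alice") -> "0042:user:alice"
        return "%04d:%s:%s" % (client_id, entity, entity_id)

    def client_prefix(client_id):
        # Prefix for scanning a single tenant's rows; key-range scans like
        # this are only meaningful under an order-preserving partitioner.
        return "%04d:" % client_id

    print(row_key(42, "user", "alice"))   # 0042:user:alice
    print(client_prefix(42))              # 0042:

The trade-off Mike raises remains: deleting one tenant becomes a ranged
delete over that prefix rather than a quick keyspace truncate.
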
> On 03 Sep, 2010, at 01:25 PM, Mike Peters
> <cassan...@softwareprojects.com> wrote:
>
>    Hi,
>
> We're in the process of migrating 4,000 MySQL client databases to
> Cassandra. All database schemas are identical.
>
> With MySQL, we used to provision a separate 'database' for each client,
> to make it easier to shard and move things around.
>
> Does it make sense to migrate the 4,000 MySQL databases to 4,000
> keyspaces in Cassandra? Or should we stick with a single keyspace?
>
> My concerns are -
> #1. Will every single node end up with 4k folders under /cassandra/data/?
>
> #2. Performance: Will Cassandra work better with a single keyspace +
> lots of keys, or thousands of keyspaces?
>
> -
>
> Granted it's 'cleaner' to have a separate keyspace per client, but
> maybe that's not the best approach with Cassandra.
>
> Thoughts?
>
>
>
