Hi -
Can anyone help me with some Cassandra data migration issues I'm having?
I'm attempting to migrate data from a dev ring (5 nodes) to a larger production ring
(6 nodes). Both are hosted on EC2.
Cluster Info:
Small: Cas v.1.2.6, Rep Factor 1, vnodes enabled
Larger: Cas v.1.2.9, Rep Factor 3, vnodes enabled
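For concreteness, one route would be to snapshot each source node and stream the
SSTables into the new ring with sstableloader; a rough sketch (the 'products'
keyspace, hostnames, and paths below are placeholders):

# on each of the 5 dev nodes: snapshot the keyspace
nodetool snapshot -t migrate products

# point sstableloader at a directory whose last two path components are
# <keyspace>/<columnfamily> and which holds the snapshotted SSTable files
sstableloader -d prod-node1,prod-node2 /tmp/migrate/products/product

sstableloader streams each row to whichever production nodes own it, so the jump from
replication factor 1 to 3 is handled by the target keyspace's replication settings.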
On 06.09.2013, at 17:07, Jan Algermissen wrote:
>
> On 06.09.2013, at 13:12, Alex Major wrote:
>
>> Have you changed the appropriate config settings so that Cassandra will run
>> with only 2GB RAM? You shouldn't find the nodes go down.
>>
>> Check out this blog post
>> http://www.opensourc
Hi,
I keep seeing the error message below in my log files. Can someone explain what
it means and how to prevent it?
INFO [OptionalTasks:1] 2013-09-07 13:46:27,160 MeteredFlusher.java (line 58)
flushing high-traffic column family CFS(Keyspace='products', ColumnFamily='product') (estimated 3
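For what it's worth, that line is informational rather than an error: MeteredFlusher
flushes the busiest memtables when the total memory held by memtables approaches the
configured ceiling. The ceiling is memtable_total_space_in_mb in cassandra.yaml; a
minimal example, assuming you wanted to set it explicitly rather than leave it at the
default (roughly one third of the heap):

# cassandra.yaml -- total memory allowed for all memtables before
# MeteredFlusher starts flushing the largest ones
memtable_total_space_in_mb: 2048

Raising it (or giving the JVM more heap) makes these flushes less frequent; the message
itself is harmless.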
That's a good approach. You could also migrate in place if you're confident
your migration algorithm is correct, but for extra safety writing into another CF
is better.
If you have a huge volume of data to migrate (millions of rows or
more), I'd suggest using Hadoop to perform these migrations (
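If the volume is still modest and the change is a straight copy (or something you can
massage in a flat file in between), cqlsh's COPY can be enough; a rough sketch, where
the keyspace name and the product_new target column family are placeholders for your
own schema:

-- in cqlsh
USE products;
COPY product TO '/tmp/product.csv';
COPY product_new FROM '/tmp/product.csv';

Beyond a few million rows, or if the transformation needs real logic, the Hadoop route
scales much better.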
Richard,
Good advice. Thank you! I'll work on tuning iptables so that only my other
Cassandra nodes can connect to mx4j. Good thing I caught this; I was just
making sure JNA was working when I saw it!
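Roughly what I have in mind (assuming mx4j is on 8081, the MX4J_PORT default in
cassandra-env.sh, and that the other nodes sit in 10.0.0.0/24; adjust both to taste):

# allow the other Cassandra nodes, drop everyone else on the mx4j port
iptables -A INPUT -p tcp --dport 8081 -s 10.0.0.0/24 -j ACCEPT
iptables -A INPUT -p tcp --dport 8081 -j DROP

Binding mx4j to a private address via MX4J_ADDRESS in cassandra-env.sh would be another
option.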
Sent from my iPhone
On Sep 8, 2013, at 5:40 AM, Richard Low wrote:
> On 8 September 2013
I'm following John Berryman's blog "Building the Perfect Cassandra Test
Environment" about running C* in a very small amount of memory. He
recommends these settings in cassandra.yaml:
reduce_cache_sizes_at: 0
reduce_cache_capacity_to: 0
(Blog is at
http://www.opensourceconnections.com/20
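Besides the yaml knobs, the heap itself gets capped in conf/cassandra-env.sh; a minimal
sketch in that spirit (the exact numbers here are illustrative, not the blog's):

# conf/cassandra-env.sh -- pin the heap to something tiny for a test node
MAX_HEAP_SIZE="64M"
HEAP_NEWSIZE="12M"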
On 8 September 2013 02:55, Tim Dunphy wrote:
> Hey all,
>
> I'm seeing this exception in my cassandra logs:
>
> Exception during http request
> mx4j.tools.adaptor.http.HttpException: file
> mx4j/tools/adaptor/http/xsl/w00tw00t.at.ISC.SANS.DFind:) not found
> at
> mx4j.tools.adaptor.http.
Thank you for your reply.
I will look into this. I cannot get my head around why the scenario I
am describing does not work, though. Should I report an issue for this, or
is this expected behaviour? A similar setup is described in this blog post
by the development lead.
http://www.datastax.co