Hi,
I'm currently designing a backend service that would store user profile
information for different applications. Most of the properties in a user
profile would be unknown to the service and specified by the applications
using the service, so the properties would need to be added dynamically.
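A minimal sketch of one way to model this in CQL3 (available as of Cassandra 1.2); the table and column names here are made up for illustration. Each application-defined property becomes its own clustered row, so new properties need no schema change:

    -- one partition per (app, user); one row per dynamic property
    CREATE TABLE user_profiles (
        app_id     text,
        user_id    text,
        prop_name  text,
        prop_value text,
        PRIMARY KEY ((app_id), user_id, prop_name)
    );

    -- an application adds a property only it knows about
    INSERT INTO user_profiles (app_id, user_id, prop_name, prop_value)
    VALUES ('app1', 'u42', 'theme', 'dark');

A map<text, text> collection column would also work for small property sets, at the cost of reading and writing the map as a whole.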
Yes, I know blowing them away would fix it, and that is what I did, but I
want to understand why this happens in the first place. I was upgrading from
1.1.10 to 1.2.3
On Fri, Apr 5, 2013 at 2:53 PM, Edward Capriolo wrote:
> This has happened before; the saved caches files were not compatible between
> versions.
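For anyone hitting the same thing: "blowing them away" means deleting the saved cache files before starting the upgraded node. A sketch, assuming the default saved_caches_directory from cassandra.yaml; your path may differ:

    # with the node stopped
    rm /var/lib/cassandra/saved_caches/*
    # then start the 1.2.3 node; the caches are repopulated over time

Nothing durable is lost; saved caches are only a warm-start optimization.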
Hi,
After upgrading to vnodes I created and enabled the shuffle operation as
suggested. After running it for a couple of hours I had to disable it
because the nodes were not catching up with compactions. I repeated this
process 3 times (enable/disable).
I have 5 nodes and each of them had ~35GB. After these attempts the data
size on the nodes grew.
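For reference, a sketch of the enable/disable cycle described above, assuming the cassandra-shuffle utility shipped with 1.2 (sub-command names per that release):

    bin/cassandra-shuffle create     # schedule the range transfers
    bin/cassandra-shuffle enable     # start moving ranges
    bin/cassandra-shuffle disable    # pause while compactions catch up
    bin/cassandra-shuffle ls         # list transfers still pending

Each enable/disable round streams more data onto the nodes, which is what drives the compaction backlog.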
It was not something you did wrong. The key cache format/classes involved
changed; there are a few JIRA issues around this:
https://issues.apache.org/jira/browse/CASSANDRA-4916
https://issues.apache.org/jira/browse/CASSANDRA-5253
Depending on how you moved between versions you may or may not have hit this.
I am not familiar with shuffle, but if you attempt a shuffle and it fails
it would be a good idea to let compaction die down, or even trigger a major
compaction on the nodes where the size grew. The reason is that once the
data files are on disk, even if they are duplicates, Cassandra does not
know they are duplicates; only compaction will merge them away.
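A major compaction can be run per node with nodetool; a sketch, with the keyspace and column family names as placeholders:

    # merges all SSTables for the column family, dropping duplicate copies
    nodetool -h <node> compact <keyspace> <column_family>

Note that after a major compaction all data sits in one large SSTable, which can make subsequent minor compactions less effective.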
It also uses off-heap memory outside the JVM heap. SerializingCacheProvider
would be one such case.
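For context, a sketch of where that shows up in cassandra.yaml (1.1/1.2-era option names; the value here is a placeholder):

    row_cache_size_in_mb: 512                      # lives off-heap with this provider
    row_cache_provider: SerializingCacheProvider   # serializes rows outside the JVM heap

Since this memory sits outside -Xmx, it has to be budgeted against total RAM on top of the heap.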
Best Regards!
Jian Jin
2013/4/6
> Thank you Aaron and Bryan for your advice.
>
> I have changed the following parameters and now Cassandra is running absolutely
> fine. Please review the settings below and advise:
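A sketch of the kind of parameters such advice usually touches; the names are real cassandra-env.sh and cassandra.yaml settings, but the values here are placeholders, not the poster's:

    # cassandra-env.sh
    MAX_HEAP_SIZE="4G"      # explicit heap cap instead of the auto-calculated one
    HEAP_NEWSIZE="400M"     # young-generation size

    # cassandra.yaml
    key_cache_size_in_mb: 100
    row_cache_size_in_mb: 0   # leave the off-heap row cache disabled unless needed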