two problems about opscenter 3.2

2013-07-30 Thread yue.zhang
problem-1: when I edit any “schema -> keyspace -> cf”, it then reports “Error saving column family: required argument is not a float”. problem-2: the OpsCenter 3.2 release notes (http://www.datastax.com/docs/opscenter/release_notes#opscenter-3-2) tell me “Added support fo

Re: two problems about opscenter 3.2

2013-07-30 Thread Alain RODRIGUEZ
I also have the following message in the dashboard: Error loading events: Cannot call method 'slice' of null. Events and alerts are not showing up. No errors in the logs (opscenterd and agents). 2013/7/30 yue.zhang > problem-1: > when I edit any “schema -> keyspace ->

Re: two problems about opscenter 3.2

2013-07-30 Thread Alain RODRIGUEZ
I also see "Waiting for agent information..." In the top of the dashboard page, but again nothing on the logs. 2013/7/30 Alain RODRIGUEZ > I also have the following message in the dashboard : > > Error loading events: Cannot call method 'slice' of null > > With events and alerts not showing up.

Re: AssertionError: Incorrect row data size

2013-07-30 Thread Pavel Kirienko
Cassandra 1.2.8 still has this issue. Possible recipe to reproduce: create the table as described in the first message of this thread, then write 3000 rows of 10MB each at a rate of about 0.1 to 1 requests per second. Maybe this behavior is caused by incremental compaction of large rows... On Mon, Jul 2

Re: AssertionError: Incorrect row data size

2013-07-30 Thread Pavel Kirienko
Also, it is probably worth mentioning: 1. I see no other errors in the logs except that one; 2. Sometimes connected clients receive "Request did not complete within rpc_timeout.", even if they are accessing other tables; 3. Sometimes, some cells from other tables may read as NULL when they are in fact

Deletes, range ghost issue

2013-07-30 Thread Alain RODRIGUEZ
Hi, I am sorry to open one more thread about this topic, but I thought I understood roughly how distributed deletes work, and while testing it, it appears that I don't. I have deleted a lot of counter columns and "normal" rows and columns. In OpsCenter, I was still able to see the row keys, with a
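For background, a delete leaves a tombstone behind, and that tombstone only becomes eligible for removal after gc_grace_seconds has elapsed and a subsequent compaction runs; until then, the keys of fully deleted rows can still show up in range scans as empty "range ghosts". A minimal sketch of where that knob lives, using a hypothetical table name (864000 seconds, i.e. 10 days, is the default):

  -- Tombstones for deletions in this table are kept at least this long
  -- before compaction is allowed to purge them.
  ALTER TABLE my_keyspace.my_cf WITH gc_grace_seconds = 864000;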

Determining Snitch at Runtime

2013-07-30 Thread Ben Boule
Hi Everyone, Sorry if the answer to this question is out there; I can't seem to find it by searching. Is there a way to read the endpoint_snitch at runtime, preferably from a CQL query, but it's fine if it's only available through an older API (or JMX)? We're automating creating clusters & provisioni

RE: Determining Snitch at Runtime

2013-07-30 Thread Ben Boule
It looks like I can infer this from the local table in the system schema by looking at the datacenter value. Would this be a bad thing to do? Thanks, Ben From: Ben Boule [ben_bo...@rapid7.com] Sent: Tuesday, July 30, 2013 11:41 AM To: user@cassandra.apache.or
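A minimal sketch of that approach, assuming a CQL3-capable node; note these system tables report the datacenter and rack the configured snitch assigns, not the snitch class name itself:

  -- Datacenter and rack the snitch reports for the local node
  SELECT data_center, rack FROM system.local;

  -- The same information for the other nodes, as seen by this node
  SELECT peer, data_center, rack FROM system.peers;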

CQL3 query

2013-07-30 Thread ravi prasad
Hi, I have a data modelling question. I'm modelling for a use case where an object can have multiple facets, each facet can have multiple revisions, and the query pattern looks like "get the latest 'n' revisions for all facets of an object (n=1,2,3)". With a table like the one below: create tabl
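One hypothetical way to model this (the table and column names below are illustrative, not the schema from the thread) is to make the revision a clustering column stored newest-first:

  -- Illustrative sketch: revisions cluster under (object, facet), newest first
  CREATE TABLE object_facet_revisions (
      object_id text,
      facet text,
      revision timeuuid,
      data text,
      PRIMARY KEY (object_id, facet, revision)
  ) WITH CLUSTERING ORDER BY (facet ASC, revision DESC);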

Re: CQL3 query

2013-07-30 Thread baskar.duraikannu.db
SlicePredicate only supports “N” columns. So, you need to query one facet at a time, OR you can query m columns such that it returns n revisions. You may need some intelligence to increase or decrease m columns heuristically. From: ravi prasad Sent: Tuesday, July 30, 2013 8:11 PM To: cas
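Against the hypothetical table sketched above, "one facet at a time" would look roughly like this; in the CQL3 of this era LIMIT applies to the whole result set, so a single query spanning many facets cannot cap each facet at n rows:

  -- Latest 3 revisions of one facet for one object (clustering order puts newest first)
  SELECT revision, data FROM object_facet_revisions
  WHERE object_id = 'obj1' AND facet = 'color'
  LIMIT 3;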

CUSTOM index

2013-07-30 Thread baskar.duraikannu.db
Hello, Both the Cassandra CLI and cqlsh have an option to specify a custom index. Can you point me to an example custom index implementation, if there is one? Thanks Baskar
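As a rough sketch only (the class name below is made up, and the exact syntax varies by Cassandra version), the CQL side is a CREATE CUSTOM INDEX statement naming a class that must be on every node's classpath; in the 1.2/2.0 line that class extends org.apache.cassandra.db.index.SecondaryIndex, and the built-in implementations under that package are the closest thing to reference examples:

  -- Hypothetical example: 'com.example.MyIndex' stands in for your own index class
  CREATE CUSTOM INDEX my_idx ON my_keyspace.my_cf (my_column)
  USING 'com.example.MyIndex';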

Interesting issue with getting Order By to work...

2013-07-30 Thread Tony Anecito
Hi All, I am trying to design a table where one CQL query gets a row based on a single key, and another CQL query takes a list of keys using the "IN" keyword together with an ORDER BY clause. The data I am getting is from the column value, not the column name. The issue I am having is tryin
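A hedged sketch of the usual constraint here: in CQL3, ORDER BY only works on clustering columns, so the value you want to sort by has to be part of the primary key rather than an ordinary column, and some versions further restrict combining ORDER BY with an IN restriction on the partition key. The table and names below are purely illustrative:

  -- 'sort_value' can appear in ORDER BY only because it is a clustering column
  CREATE TABLE items_by_key (
      bucket text,
      sort_value int,
      item_key text,
      payload text,
      PRIMARY KEY (bucket, sort_value, item_key)
  );

  SELECT * FROM items_by_key WHERE bucket IN ('a', 'b') ORDER BY sort_value DESC;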

VM dimensions for running Cassandra and Hadoop

2013-07-30 Thread Jan Algermissen
Hi, thanks for the helpful replies last week. It looks as if I will deploy Cassandra on a bunch of VMs, and I am now in the process of understanding what the dimensions of the VMs should be. So far, I understand the following: - I need at least 3 VMs for a minimal Cassandra setup - I should get

Re: VM dimensions for running Cassandra and Hadoop

2013-07-30 Thread Jonathan Haddad
Having just enough RAM to hold the JVM's heap generally isn't a good idea unless you're not planning on doing much with the machine. Any memory not allocated to a process will generally be put to good use serving as page cache. See here: http://en.wikipedia.org/wiki/Page_cache Jon On Tue, Jul 3