Using a describe columnfamilies/tables query, Cassandra only gives me
in 1.2.0-beta1 and the error mentioned in
https://issues.apache.org/jira/browse/CASSANDRA-4946 in 1.2.0-beta2,
despite there actually being columnfamilies in the currently selected/used
keyspace. Selecting/updating the columnfam
Yes, I have already done that, but my application needs its own configuration.
The problem has been solved, though. It was a memory leak in my code.
Thanks.
2012/11/14 aaron morton
> Have you tried using the defaults in cassandra-env.sh? Your settings are
> very different.
>
>
> https://gith
Sometimes, when I try to insert data into Cassandra with this method:
static void createColumnFamily(String keySpace, String columnFamily) {
    synchronized (mutex) {
        Iface cs = new CassandraServer();
        CfDef cfDef = new CfDef(keySpace, columnFamily);
        cfDef = cfDef.setComparator_type(comparator.toStr
Hi,
I have a secondary index for a column family which is connected to a
keyspace that spans over three data centers. I observed that the index is
not complete on one of the data centers. Reason for that conclusion is, I
tried to retrieve an object, using the secondary index, in DC1 and it was a
s
Good information Edward.
For my case, we have a good amount of RAM (76G) and the heap is 8G, so I set
the row cache to 800M as recommended. Our columns are kind of big, so the hit
ratio for the row cache is around 20%; according to DataStax, we might as well
just turn off the row cache altogether.
Anyway, for re
Not sure I understand your question.
Do you have an existing schema you want some help with ?
Cheers
-
Aaron Morton
Freelance Cassandra Developer
New Zealand
@aaronmorton
http://www.thelastpickle.com
On 14/11/2012, at 12:39 PM, jer...@simpleartmarketing.com wrote:
> I have mu
> In both cases the array is the PRIMARY_KEY.
I'm not sure what you mean by the "array".
The vector_name and list_name columns are used as "variable names" to identify
a particular vector or list. They are the storage engine "row key".
Cheers
-
Aaron Morton
Freelance Cassandra
May be https://issues.apache.org/jira/browse/CASSANDRA-4561
Can you upgrade to 1.1.6 ?
Cheers
-
Aaron Morton
Freelance Cassandra Developer
New Zealand
@aaronmorton
http://www.thelastpickle.com
On 14/11/2012, at 11:39 PM, Alain RODRIGUEZ wrote:
> Hi, I am running C* 1.1.2 and
4946 is marked as a dup of https://issues.apache.org/jira/browse/CASSANDRA-4913
Looks like it's fixed in the trunk now.
Cheers
-
Aaron Morton
Freelance Cassandra Developer
New Zealand
@aaronmorton
http://www.thelastpickle.com
On 15/11/2012, at 2:19 AM, Timmy Turner wrote:
>
Without knowing what the schema is, what the load is, or anything about the
workload, I would suggest:
Using 4G for the heap and 800 MB for the new heap. The 128 MB setting you have
is way too small. If you are running out of heap space the simple thing is to
add more.
Using the default GC s
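For reference, those figures would map onto the two heap variables in cassandra-env.sh (a sketch of the overrides being suggested; the stock script otherwise calculates both automatically, and its comments note that if you set one you must set the other):

```
MAX_HEAP_SIZE="4G"
HEAP_NEWSIZE="800M"
```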
Out of interest, why are you creating column families by making direct calls on
an embedded Cassandra instance? I would guess your life would be easier if you
defined the schema in CQL or the CLI.
> I already read in the documentation that this error occurs when more than one
> thread/processor acce
An array would be a list of groups of items. In my case I want a list/array
of line items. An order has certain characteristics, and one of them is a
list of the items being ordered. Say every line item has an id, price, and
description, so one such "array" would look like:
1 $4.00 "This
> But for about 12 hours the secondary index has completed only 3%.
How much data do you have?
> I use Cassandra 1.0.7 and use CL of ONE for my read requests.
There is your problem.
If you use CL ONE for any type of query you can get inconsistent results.
Hinted Handoff and Read Repair ma
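The inconsistency risk can be sketched with the standard replica-overlap rule (illustrative Python, not Cassandra code; RF=3 is an assumed replication factor):

```python
def overlap_guaranteed(rf: int, write_cl: int, read_cl: int) -> bool:
    """True when every read must touch at least one replica
    that acknowledged the latest write (R + W > RF)."""
    return write_cl + read_cl > rf

RF = 3                  # assumed replication factor
ONE = 1
QUORUM = RF // 2 + 1    # 2 when RF = 3

print(overlap_guaranteed(RF, ONE, ONE))        # False: CL ONE reads can miss writes
print(overlap_guaranteed(RF, QUORUM, QUORUM))  # True: quorums always intersect
```

With CL ONE on both sides the read may land on the one replica that has not yet received the write, which is why hinted handoff and read repair only catch it up eventually.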
Like this?
cqlsh:dev> CREATE TABLE my_orders(
... id int,
... item_id int,
... price decimal,
... title text,
... PRIMARY KEY(id, item_id)
... );
cqlsh:dev> insert into my_orders
... (id, item_id, price, title)
.
Oh! That's obviously the exact same issue. I didn't find that thread when
searching for my issue.
We will upgrade.
Thanks for the link.
2012/11/14 aaron morton
> May be https://issues.apache.org/jira/browse/CASSANDRA-4561
>
> Can you upgrade to 1.1.6 ?
>
> Cheers
>
>-
>
I hope I am not bugging you but now what is the purpose of PRIMARY_KEY(id,
item_id)? By expressing the KEY as two values this basically gives the
database a hint that this will be an array? Is there an implicit INDEX on id
and item_id? Thanks again.
-Original Message-
From: aaron morton [m
It means the column family uses a composite key, which gives you additional
capabilities, like ORDER BY on the second key component when the first is
fixed in the WHERE clause.
On Wed, Nov 14, 2012 at 5:27 PM, Kevin Burton wrote:
> I hope I am not bugging you but now what is the purpose of PRIMARY_KEY(id,
> item_id)? By expressing the KEY as two value
>> database a hint that this will be an array?
Things are going to be easier if you stop thinking about arrays :)
For background
http://www.datastax.com/docs/1.1/dml/using_cql#using-composite-primary-keys
The first part of the primary key is the storage engine row key. This is the
ring that cas
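A toy model of that storage layout, in plain Python rather than Cassandra internals (the data values are made up): the first primary-key component selects the storage-engine row, and the second becomes a sorted column inside it.

```python
from collections import defaultdict

# PRIMARY KEY (id, item_id): partition by id, cluster (sort) by item_id
partitions = defaultdict(dict)
for id_, item_id, price in [(1, 20, 4.0), (1, 10, 2.5), (2, 5, 9.9)]:
    partitions[id_][item_id] = price

# one storage-engine row per id; columns come back ordered by item_id
print(sorted(partitions[1].items()))  # [(10, 2.5), (20, 4.0)]
```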
Oh, as for the number of rows, it's 165. How long would you expect it to take
to be read back?
On Thu, Nov 15, 2012 at 3:57 AM, Wei Zhu wrote:
> Good information Edward.
> For my case, we have good size of RAM (76G) and the heap is 8G. So I set
> the row cache to be 800M as recommended. Our column
In the below example I am thinking that id is the order id. Would there be
considerable duplication if there are other column families/tables that are
also identified by, or keyed on, id? It seems that id could potentially be
duplicated for each column family/table. Is that just the way it is? Whil
> Is that just the way it is?
yes.
Denormalise your model and store the data in a format that supports the read
queries you want to run. Cassandra is predicated on the idea that storage is
cheap.
For *most* columns Cassandra stores the column name, the value, and some
metadata.
Cheers
---
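To make "denormalise your model" concrete, here is a toy sketch in plain Python (dicts standing in for column families; all names are invented for illustration): the same order is written once per read path.

```python
orders_by_id = {}        # answers: "give me order 42"
orders_by_customer = {}  # answers: "give me all orders for customer 7"

def record_order(order_id, customer_id, total):
    # write the same fact into every query-shaped "table"
    order = {"order_id": order_id, "customer_id": customer_id, "total": total}
    orders_by_id[order_id] = order
    orders_by_customer.setdefault(customer_id, []).append(order)

record_order(42, 7, 4.0)
record_order(43, 7, 2.5)
print(len(orders_by_customer[7]))  # 2
```

Each read query then hits exactly one structure, at the cost of duplicating the order data on write.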
OOM at deserializing the 747321st row
On Thu, Nov 15, 2012 at 9:08 AM, Manu Zhang wrote:
> oh, as for the number of rows, it's 165. How long would you expect it
> to be read back?
>
>
> On Thu, Nov 15, 2012 at 3:57 AM, Wei Zhu wrote:
>
>> Good information Edward.
>> For my case, we have good s
Curious, where did you see this?
Thanks.
-Wei
Sent from my Samsung smartphone on AT&T
Original message
Subject: Re: unable to read saved rowcache from disk
From: Manu Zhang
To: user@cassandra.apache.org
CC:
OOM at deserializing 747321th row
On Thu, Nov 15, 2012 at 9:0
Added a counter and printed it out myself.
On Thu, Nov 15, 2012 at 1:51 PM, Wz1975 wrote:
> Curious where did you see this?
>
>
> Thanks.
> -Wei
>
> Sent from my Samsung smartphone on AT&T
>
>
> Original message
> Subject: Re: unable to read saved rowcache from disk
> From: Manu Zhang
How big is your heap? Did you change the JVM parameters?
Thanks.
-Wei
Sent from my Samsung smartphone on AT&T
Original message
Subject: Re: unable to read saved rowcache from disk
From: Manu Zhang
To: user@cassandra.apache.org
CC:
add a counter and print out myself
O
3G; other JVM parameters are unchanged.
On Thu, Nov 15, 2012 at 2:40 PM, Wz1975 wrote:
> How big is your heap? Did you change the jvm parameter?
>
>
>
> Thanks.
> -Wei
>
> Sent from my Samsung smartphone on AT&T
>
>
> Original message
> Subject: Re: unable to read saved rowca
Before shutdown, you saw the row cache had 500M across 1.6M rows, each row
averaging 300B, so 700K rows should be a little over 200M, unless it is
reading more, maybe tombstones? Or the rows on disk have grown for some
reason, but the row cache was not updated? Something else could also be
eating up the memory.
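A back-of-envelope check of those figures (using the 500M cache, 1.6M rows, and OOM-at-row-747321 numbers from this thread):

```python
cache_bytes = 500 * 1024 ** 2            # ~500M row cache before shutdown
rows_cached = 1_600_000
avg_row = cache_bytes / rows_cached      # ~328 bytes per row on average
rows_read_before_oom = 747_321
est_mb = rows_read_before_oom * avg_row / 1024 ** 2
print(round(est_mb))  # 234 -- only ~234M, far below a 3G heap
```

So the cached rows alone should fit comfortably, which points at tombstones, grown rows, or some other allocation as the culprit.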