Dear All,
I am going to implement Apache Cassandra in two different data centers,
with 2 nodes in each ring. I also need to set a replication factor of 2 within
each data center, and data should be replicated between both data center
rings. Please help me, or point me to any document that would help.
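A minimal sketch of what such a keyspace definition could look like in cassandra-cli, assuming NetworkTopologyStrategy and a snitch that reports data centers named DC1 and DC2 (the keyspace and data center names are placeholders, not from the original message):

create keyspace MyKeyspace
  with placement_strategy = 'NetworkTopologyStrategy'
  and strategy_options = {DC1 : 2, DC2 : 2};

The names in strategy_options must match whatever your snitch (e.g. PropertyFileSnitch) reports; with 2 nodes per ring and a replication factor of 2 per data center, every node ends up holding a full copy of the data.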
Hello,
I am using Cassandra 1.1.1 and CQL3.
I have a cluster with 1 node (test environment)
Could you tell me how to set the compaction strategy to LeveledCompactionStrategy
for an existing table?
I have a table pns_credentials.
jal@jal-VirtualBox:~/cassandra/apache-cassandra-1.1.1/bin$ ./cqlsh -3
Connected
Hi everyone,
I'm trying to use Cassandra to store a "timeline", but with values
that must be unique (replaced). (So not really a timeline, but I didn't find a
better word for it.)
Let me give you an example:
- A user has a list of friends
- Friends can change their nickname, status, ...
Sorry, the schema did not keep the right tabulation for some people...
Here is a version using spaces instead of tabs:
user1 row: | lte | lte -1 | lte -2 | lte -3 |
We are running a somewhat queue-like workload with aggressive write-read patterns.
I was looking for a way to script queries against a live Cassandra installation, but I
didn't find any.
Is there something like a thrift proxy or another query logging/scripting
engine?
2012/8/30 aaron morton
> in terms of our high-rate
If you move from 0.7.x to 0.8.x or 1.0.x you have to rebuild sstables as
soon as possible. If you have large bloom filters you can hit a bug
where the bloom filters will not work properly.
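As a rough sketch (not from the original thread), the rebuild can be driven with nodetool once the new version is running; the keyspace and column family names below are placeholders:

nodetool -h <host> scrub MyKeyspace MyColumnFamily
(or, on 1.0.x, nodetool -h <host> upgradesstables MyKeyspace MyColumnFamily)

Both commands rewrite the sstables on disk, which also regenerates their bloom filters.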
On Thu, Aug 30, 2012 at 9:44 AM, Илья Шипицин wrote:
> We are running a somewhat queue-like workload with aggressive write-read patterns.
Are you looking for the author of Spring Data Cassandra?
https://github.com/boneill42/spring-data-cassandra
If so, I guess that is me. =)
Did you get in touch with the Spring guys? They have Cassandra support on
their Spring Data todo list. They might have some todo or feature list
they want to implement.
Yes. I'm in contact with Oliver Gierke and Erez Mazor of Spring Data.
We are working on two fronts:
1) Spring Data support via JPA (using Kundera underneath)
- Initial attempt here:
http://brianoneill.blogspot.com/2012/07/spring-data-w-cassandra-using-jpa.html
- Most recently (a
On Thu, Aug 30, 2012 at 1:14 AM, Adeel Akbar
wrote:
> Dear All,
>
> I am going to implement Apache Cassandra in two different data centers, with 2
> nodes in each ring. I also need to set a replication factor of 2 within each data
> center, and data should be replicated between both data center rings. Please help
> me, or point me to any document that would help.
Hello all,
This is my first setup of Cassandra and I'm having some issues running the
cqlsh tool.
Have any of you come across this error before? If so, please help.
/bin/cqlsh -h localhost -p 9160
No appropriate python interpreter found.
Thanks
James
All,
I'm adding a new node to an existing cluster that uses
ByteOrderedPartitioner. The documentation says that if I don't configure a
token, then one will be automatically generated to take load from an
existing node. What I'm finding is that when I add a new node, (super)
column lookups begin
In cassandra-cli, I did something like:
update column family xyz with
compaction_strategy='LeveledCompactionStrategy'
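To double-check that the change took effect, something like this in cassandra-cli should show the compaction strategy in the column family definition (a sketch; MyKeyspace is a placeholder for your keyspace name):

use MyKeyspace;
show schema;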
On Thu, Aug 30, 2012 at 5:20 AM, Jean-Armel Luce wrote:
>
> Hello,
>
> I am using Cassandra 1.1.1 and CQL3.
> I have a cluster with 1 node (test environment)
> Could you tell me how to set the compaction strategy to LeveledCompactionStrategy
> for an existing table?
What OS are you using?
On Thu, Aug 30, 2012 at 12:09 PM, Morantus, James (PCLN-NW) <
james.moran...@priceline.com> wrote:
> Hello all,
>
> This is my first setup of Cassandra and I'm having some issues running the
> cqlsh tool.
> Have any of you come across this error before? If so, please help.
On Thu, Aug 30, 2012 at 10:18 AM, Casey Deccio wrote:
> I'm adding a new node to an existing cluster that uses
> ByteOrderedPartitioner. The documentation says that if I don't configure a
> token, then one will be automatically generated to take load from an
> existing node.
> What I'm finding is
Red Hat Enterprise Linux Server release 5.8 (Tikanga)
Linux nw-mydb-s05 2.6.18-308.8.2.el5 #1 SMP Tue May 29 11:54:17 EDT 2012 x86_64
x86_64 x86_64 GNU/Linux
Thanks
From: Tyler Hobbs [mailto:ty...@datastax.com]
Sent: Thursday, August 30, 2012 2:21 PM
To: user@cassandra.apache.org
Subject: Re:
RHEL 5 only ships with Python 2.4, which is pretty ancient and below what
cqlsh will accept. You can install Python 2.6 with EPEL enabled:
http://blog.nexcess.net/2011/02/25/python-2-6-for-centos-5/
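A rough sketch of the steps, assuming EPEL is already enabled and that the package is named python26 (check the post above for the exact details):

yum install python26
python26 /path/to/apache-cassandra/bin/cqlsh localhost 9160

Calling cqlsh through python26 explicitly sidesteps the system /usr/bin/python, which stays at 2.4 on RHEL 5.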
On Thu, Aug 30, 2012 at 1:34 PM, Morantus, James (PCLN-NW) <
james.moran...@priceline.com> wrote:
pycassa already breaks up the query into smaller chunks, but you should try
playing with the buffer_size kwarg for get_indexed_slices, perhaps lowering
it to ~300, as Aaron suggests:
http://pycassa.github.com/pycassa/api/pycassa/columnfamily.html#pycassa.columnfamily.ColumnFamily.get_indexed_slices
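For reference, a small pycassa sketch of the call being discussed (the keyspace, column family, indexed column, and value are made-up names; buffer_size is the knob to experiment with):

from pycassa.pool import ConnectionPool
from pycassa.columnfamily import ColumnFamily
from pycassa.index import create_index_expression, create_index_clause

pool = ConnectionPool('MyKeyspace', ['localhost:9160'])
cf = ColumnFamily(pool, 'users')

# equality expression against a secondary-indexed column (hypothetical names)
expr = create_index_expression('state', 'TX')
clause = create_index_clause([expr], count=5000)

# buffer_size controls how many rows each underlying request fetches;
# lowering it to ~300 makes each round trip smaller
for key, columns in cf.get_indexed_slices(clause, buffer_size=300):
    pass  # process each matching row here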
Ah... Thanks
From: Tyler Hobbs [mailto:ty...@datastax.com]
Sent: Thursday, August 30, 2012 2:42 PM
To: user@cassandra.apache.org
Subject: Re: Cassandra - cqlsh
RHEL 5 only ships with Python 2.4, which is pretty ancient and below what cqlsh
will accept. You can install Python 2.6 with EPEL enabled:
I tried as you said with cassandra-cli, still without success:
[default@unknown] use test1;
Authenticated to keyspace: test1
[default@test1] UPDATE COLUMN FAMILY pns_credentials with
compaction_strategy='LeveledCompactionStrategy';
8ed12919-ef2b-327f-8f57-4c2de26c9d51
Waiting for schema agreement...
Thanks guys for the answers...
The main issue here seems to be not the secondary index, but the speed of searching
for random keys in the column family.
I ran an experiment and queried the same 5000 rows without the index, providing
a list of keys to pycassa instead... the speed was the same.
Although, using Sup
It seems to me you may want to revisit the design a bit (though I'm not 100% sure, as
I don't understand the entire context). I could see having partitions and a few
clients that poll each partition, so you can basically scale to infinity with no
issues. If you are doing all this polling from
Consider trying…
UserTimeline CF
row_key:
column_names:
column_values: action details
To get the changes between two times specify the start and end timestamps and
do not include the other components of the column name.
e.g. from <1234, NULL, NULL> to <6789, NULL, NULL>
Cheers
---
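Since the exact row key and composite column layout in the message above did not survive the archive, here is one possible reading of the suggestion as a CQL3 table (table and column names are assumptions, not the original schema): the composite column name becomes the clustering columns (ts, friend_id), so rewriting an entry with the same ts and friend_id replaces the old value, and a time-range slice constrains only ts.

CREATE TABLE user_timeline (
    user_id   text,
    ts        bigint,
    friend_id text,
    details   text,
    PRIMARY KEY (user_id, ts, friend_id)
);

-- changes between two times: constrain only the first clustering column
SELECT ts, friend_id, details
  FROM user_timeline
 WHERE user_id = 'user1' AND ts >= 1234 AND ts < 6789;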
>> We are running a somewhat queue-like workload with aggressive write-read patterns.
We'll need some more details…
How much data ?
How many machines ?
What is the machine spec ?
How many clients ?
Is there an example of a slow request ?
How are you measuring that it's slow ?
Is there anything unusual in t
Looks like a bug.
Can you please create a ticket on
https://issues.apache.org/jira/browse/CASSANDRA and update the email thread?
Can you include this: CFPropDefs.applyToCFMetadata() does not set the
compaction class on CFM
Thanks
-
Aaron Morton
Freelance Developer
@aaronmor
We are using functional tests (~500 tests per run).
It is hard to tell which query is slower; it is "slower in general".
Same hardware: 1 node, 32 GB RAM, 8 GB heap, default Cassandra settings.
As we are talking about functional tests, we recreate the keyspace just before
the tests are run.
I do not know how
PS: everything above is in bytes, not bits.
On Fri, Aug 31, 2012 at 11:03 AM, rohit bhatia wrote:
> I was wondering how much memory an established connection would use in
> Cassandra's heap space.
>
> We are noticing extremely frequent young generation garbage collections
> (3.2gb yo
On Thu, Aug 30, 2012 at 11:21 AM, Rob Coli wrote:
> On Thu, Aug 30, 2012 at 10:18 AM, Casey Deccio wrote:
> > I'm adding a new node to an existing cluster that uses
> > ByteOrderedPartitioner. The documentation says that if I don't configure a
> > token, then one will be automatically generated to take load from an
> > existing node.
> Could these 500 connections/second cause (on average) 2600 MB of memory usage
> per 2 seconds, i.e. ~1300 MB/second,
> or, per connection, around 2-3 MB?
In terms of garbage generated, it's much less about the number of
connections than about what you're doing with them. Are you for
example requesting large
On Fri, Aug 31, 2012 at 11:27 AM, Peter Schuller <
peter.schul...@infidyne.com> wrote:
> > Could these 500 connections/second cause (on average) 2600 MB of memory usage
> > per 2 seconds, i.e. ~1300 MB/second,
> > or, per connection, around 2-3 MB?
>
> In terms of garbage generated, it's much less about the number of
> connections than about what you're doing with them.