Thanks Aaron.
Yes, the problem was occurring when starting one client as an embedded Cassandra
server. Memtables don't flush if there is no data in the associated CFs; Thrift
does take care of this. Additionally, on restarting the Cassandra server, it
picked it up from the commit logs as part of migration.
Cheers,
I believe connection pooling is still not in place with latest CQL JDBC stuff.
From: Aayush Jain
Sent: Tuesday, July 05, 2011 9:55 AM
To: user@cassandra.apache.org
Subject: connection issue
Hi,
When I am using multithreading with Cassandra Query Language, I have to make
connections for each thre
Hi All, sorry for taking so long to answer. I was away from the
internet.
>> Héctor, when you say "I have upgraded all my cluster to 0.8.1", from
>> which version was
>> that: 0.7.something or 0.8.0 ?
0.7.6-2 to 0.8.1
> This is the same behavior I reported in 2768 as Aaron referenced ...
> >
> Is it possible the snapshots from different nodes have the same name?
The directory name will be made up of the current timestamp on the machine and
the optional name passed via the command line.
The SSTables from different nodes may have name collisions. If you are
aggregating data from mult
I am out of the office until 07/11/2011.
I will respond to your message when I return to the office on 07/11/2011.
For anything urgent please contact Clara Liang (Clara C Liang/Silicon
Valley/IBM).
OK. Thanks, Vivek.
From: Vivek Mishra
Sent: 05 July 2011 13:00
To: user@cassandra.apache.org
Subject: RE: connection issue
I believe connection pooling is still not in place with latest CQL JDBC stuff.
From: Aayush Jain
Sent: Tuesday, July 05, 2011 9:55 AM
To: user@cassandra.apache.org
Subject: c
Hi,
JConsole shows that the capacity is > 0.
10x
--
View this message in context:
http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/Row-cache-tp6532887p6549420.html
Sent from the cassandra-u...@incubator.apache.org mailing list archive at
Nabble.com.
Hello Sebastien,
I am trying to load around 50 GB of data on a single node, but I am
facing a lot of issues doing so.
In your previous post you mentioned that you were able to handle 1 TB per
node. Could you please let me know the hardware configuration of your
system?
Than
Thanks
On Thu, Jun 30, 2011 at 10:09 PM, aaron morton wrote:
> cassandra.in.sh is old skool 0.6 series, 0.7 series uses cassandra-env.sh.
> The packages put it in /etc/cassandra.
>
> This works for me at the end of cassandra-env.sh
>
> JVM_OPTS="$JVM_OPTS -Dpasswd.properties=/etc/cassandra/passwd
Hi,
Our hardware configuration is very simple:
1 x 8 Core Processor
32 GB of memory
1 x 250GB SATA disk for the OS/Swap plugged in the motherboard
1 x 250GB SATA disk for Cassandra's commit log plugged in the motherboard
1 x RAID card
2 x 1TB SATA disks for Cassandra in RAID-0 plugged on the RAID
Thanks a lot Sebastien.
Did you use hadoop map reduce or bulk loading techniques for loading data?
regards,
Priyanka
--
View this message in context:
http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/How-to-scale-Cassandra-tp6545491p6550029.html
Sent from the cassandra-u...@incub
On 06/15/2011 08:57 AM, Chris Burroughs wrote:
> Cassandra DC's first meetup of the pizza and talks variety will be on
> July 6th. There will be an introductory sort of presentation and a
> totally cool one on Pig integration.
>
> If you are in the DC area it would be great to see you there.
>
>
cqlsh> CREATE KEYSPACE twissandra with
... strategy_class =
... 'org.apache.cassandra.locator.NetworkTopologyStrategy'
... and strategy_options=[{DC1:1, DC2:1}];
Bad Request: line 4:37 no viable alternative at character ']'
What is wrong with the above syntax?
Thanks.
Hi Priyanka,
We're also using Hadoop, on a separate pair of 1 TB SATA disks in RAID-0 on
the RAID card.
Regards,
SC
On Tue, Jul 5, 2011 at 10:39 AM, Priyanka wrote:
> Thanks a lot Sebastien.
> Did you use hadoop map reduce or bulk loading techniques for loading data?
>
> regards,
> Priyanka
>
>
> --
Hi,
Using Thrift and the get_range_slices call with a token range, with the
RandomPartitioner. I have only tried this on > 0.7.5.
It used to work for me in 0.6.4 and earlier versions, but I notice that it does
not work for me anymore. The need is to iterate over a token range to do
some bookkeeping.
The logic is
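The message is cut off above, but the ring arithmetic behind such an iteration can be sketched in a few lines. This is only an illustration of splitting a RandomPartitioner token range into sub-ranges (with wrap-around); the actual Thrift `get_range_slices` calls against a cluster are elided, and `split_range` is a hypothetical helper, not a Cassandra API:

```python
# Sketch: splitting a RandomPartitioner token range for iteration.
# Only the ring arithmetic is shown; each resulting sub-range would
# be fed to get_range_slices via a KeyRange with start_token/end_token.

RING_SIZE = 2 ** 127  # RandomPartitioner tokens live in [0, 2**127)

def split_range(start_token, end_token, parts):
    """Split the (start_token, end_token] range into `parts`
    contiguous sub-ranges, wrapping around the ring if needed."""
    span = (end_token - start_token) % RING_SIZE
    if span == 0:
        span = RING_SIZE  # the whole ring
    step = span // parts
    bounds = [(start_token + i * step) % RING_SIZE for i in range(parts)]
    bounds.append(end_token)
    return list(zip(bounds, bounds[1:]))

print(split_range(0, 100, 4))  # four even sub-ranges of (0, 100]
```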
replace the s_o line with
and strategy_options:DC1=1 and strategy_options:DC2=2
On Tue, Jul 5, 2011 at 10:09 AM, A J wrote:
> cqlsh> CREATE KEYSPACE twissandra with
> ... strategy_class =
> ... 'org.apache.cassandra.locator.NetworkTopologyStrategy'
> ... and strategy_options=[{DC1:
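Putting that fix together with the original attempt, the complete statement in 0.8-era cqlsh would look roughly like this (a sketch; keyspace name and per-DC replication counts taken from the thread):

```sql
cqlsh> CREATE KEYSPACE twissandra
   ...   WITH strategy_class = 'org.apache.cassandra.locator.NetworkTopologyStrategy'
   ...   AND strategy_options:DC1 = 1
   ...   AND strategy_options:DC2 = 1;
```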
Thanks. That worked.
On Tue, Jul 5, 2011 at 11:35 AM, Jonathan Ellis wrote:
> replace the s_o line with
>
> and strategy_options:DC1=1 and strategy_options:DC2=2
>
> On Tue, Jul 5, 2011 at 10:09 AM, A J wrote:
>> cqlsh> CREATE KEYSPACE twissandra with
>> ... strategy_class =
>> ... 'o
I use get_range_slice to get the list of keys,
then I call client.remove(keyspace, key, columnFamily, timestamp,
ConsistencyLevel.ALL)
to delete the record,
but I still see the keys.
Why?
Can I do it otherwise?
Hi all,
If you're London based please come along to the Cassandra user group. This
month we're going to be looking at how Cassandra compares with some other
solutions (Riak and Mongo). This will be particularly interesting for anyone
who is still at an early stage and wants to get more of an idea
Hello,
Let me explain what I am trying to do:
I am prototyping 2 data centers (DC1 and DC2) with two nodes each, say
DC1_n1 and DC1_n2 in DC1, and DC2_n1 and DC2_n2 in DC2.
With PropertyFileSnitch and NetworkTopologyStrategy and
'strategy_options of DC1=1 and DC2=1', I am able to ensure that e
Hi experts,
Are there any benchmarks that quantify how long nodetool repair takes?
Something which says on this kind of hardware, with this much of data,
nodetool repair takes this long. The other question that I have is since
Cassandra recommends running nodetool repair within GCGracePeriodSe
On Tue, Jul 5, 2011 at 1:27 PM, Raj N wrote:
> Hi experts,
> Are there any benchmarks that quantify how long nodetool repair takes?
> Something which says on this kind of hardware, with this much of data,
> nodetool repair takes this long. The other question that I have is since
> Cassandra
AJ,
You can use offset mirror tokens to achieve this. Pick your initial
tokens for DC1N1 and DC1N2 as if they were the only nodes in your
cluster. Now increment each by 1 and use them as the tokens for DC2N1
and DC2N2. This will give you a complete keyspace within each data
center with even di
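The offset-mirror-token arithmetic described above is simple to illustrate. A minimal sketch (`mirror_tokens` is a hypothetical helper, not a Cassandra tool; it just computes evenly spaced RandomPartitioner tokens for one DC and offsets each subsequent DC by its index):

```python
# Offset mirror tokens: give each data center a complete, evenly
# divided copy of the ring by reusing the single-DC token layout
# and bumping each subsequent DC's tokens by +1, +2, ...

RING_SIZE = 2 ** 127  # RandomPartitioner token space

def mirror_tokens(nodes_per_dc, num_dcs):
    """Return one list of initial tokens per data center."""
    base = [i * RING_SIZE // nodes_per_dc for i in range(nodes_per_dc)]
    return [[(t + dc) % RING_SIZE for t in base] for dc in range(num_dcs)]

tokens = mirror_tokens(2, 2)
# DC1 gets the even two-node layout; DC2 gets the same tokens + 1.
print(tokens)
```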
I know it doesn't. But is this a valid enhancement request?
On Tue, Jul 5, 2011 at 1:32 PM, Edward Capriolo wrote:
>
>
> On Tue, Jul 5, 2011 at 1:27 PM, Raj N wrote:
>
>> Hi experts,
>> Are there any benchmarks that quantify how long nodetool repair
>> takes? Something which says on this ki
Perfect ! Thanks.
On Tue, Jul 5, 2011 at 1:51 PM, Eric tamme wrote:
> AJ,
>
> You can use offset mirror tokens to achieve this. Pick your initial
> tokens for DC1N1 and DC1N2 as if they were the only nodes in your
> cluster. Now increment each by 1 and use them as the tokens for DC2N1
> and DC
On Fri, Jul 1, 2011 at 3:16 AM, Sylvain Lebresne wrote:
> To make it clear what the problem is, this is not a repair problem. This is
> a gossip problem. Gossip is reporting that the remote node is a 0.7 node
> and repair is just saying "I cannot use that node because repair has changed
> and the
Hello,
Where can I find details of nodetool move. Most places just mention
that 'move the target node to a given Token. Moving is essentially a
convenience over decommission + bootstrap.'
Stuff like: when do I need to do it, and on which nodes? What is the value
of 'new token' to be provided? What hap
All:
For a rough rule of thumb, Cassandra's internal datastructures will
require about memtable_throughput_in_mb * 3 * number of hot CFs + 1G +
internal caches.
Why does Cassandra need so much memory? What is the 1G of memory used for?
Best Regards
Donna li
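The rule of thumb quoted above is straightforward arithmetic; as a sketch (`estimated_heap_mb` is a hypothetical helper, and the numbers in the example are made up for illustration):

```python
# Rough heap estimate from the rule of thumb quoted in the thread:
#   memtable_throughput_in_mb * 3 * number_of_hot_CFs + 1G + caches

def estimated_heap_mb(memtable_throughput_mb, hot_cfs, cache_mb=0):
    """Return the estimated heap requirement in MB."""
    return memtable_throughput_mb * 3 * hot_cfs + 1024 + cache_mb

# e.g. 64 MB memtables, 4 hot column families, 256 MB of caches:
print(estimated_heap_mb(64, 4, 256))  # 2048 MB
```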