Hi,
Say I have two processes on separate machines, and a Cassandra cluster over
several machines. If the first process writes (insert) to a column while the
second process reads (get / get_slice / get_range_slices / others?) from that
column (say the consistency level is QUORUM if that makes a
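The question above hinges on replica overlap. As a rough sketch (mine, not from the thread): with replication factor N, a write acknowledged at consistency level W and a read at level R are guaranteed to intersect in at least one replica whenever R + W > N, which is what QUORUM on both sides gives you.

```python
# Hedged sketch (not from the thread): why QUORUM reads observe QUORUM writes.
# With N replicas, any R read replicas and any W write replicas must share at
# least one member whenever R + W > N (pigeonhole), so the read sees the write.
def read_sees_write(n, r, w):
    """True if every R-replica read must overlap every W-replica write."""
    return r + w > n

N = 3
QUORUM = N // 2 + 1  # 2 of 3 replicas

print(read_sees_write(N, QUORUM, QUORUM))  # True: QUORUM/QUORUM overlaps
print(read_sees_write(N, 1, 1))            # False: ONE/ONE reads can be stale
```

This only covers the overlap guarantee, not timestamp resolution or read repair, which the actual answer would also involve.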
Hello everybody,
I actually have the exact same problem. I have a very small amount of data (a
few hundred KB) and the memory consumption goes up without any end in
sight. For
On my node I have limited RAM (2 GB) to run Cassandra, but since I have
very little data, I thought it was not a problem, h
Here is a typo, sorry...
best regards,
hanzhu
On Sun, Dec 19, 2010 at 10:29 AM, Zhu Han wrote:
The problem still seems to be the C-heap of the JVM, which leaks about 70MB every day.
Here is the summary:
on 12/19: 010c3000 178548K rw---[ anon ]
on 12/18: 010c3000 110320K rw---[ anon ]
on 12/17: 010c3000 39256K rw---[ anon ]
This should not be the JVM object heap, b
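For what it's worth, the three pmap figures quoted above imply a fairly steady leak rate; a quick back-of-the-envelope check (my arithmetic, sizes in KB as printed by pmap):

```python
# Growth of the anonymous mapping at 010c3000 across the three days quoted.
anon_kb = {"12/17": 39256, "12/18": 110320, "12/19": 178548}

daily_kb = [110320 - 39256, 178548 - 110320]       # 71064 and 68228 KB/day
avg_mb_per_day = sum(daily_kb) / len(daily_kb) / 1024

print(round(avg_mb_per_day))  # ~68 MB/day, consistent with "70MB every day"
```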
You can disable compaction and re-enable it later: use nodetool to set
setcompactionthreshold to 0 0
-Chris
On Dec 18, 2010, at 6:05 PM, Wayne wrote:
Rereading through everything again, I am starting to wonder if the page cache
is being affected by compaction. We have been heavily loading data for weeks,
and compaction is basically running non-stop. The manual compaction should
finish sometime tomorrow, so once we are totally caught up I will try again.
I guess my phpcassa is not able to connect to Cassandra.
I have not made any modifications to the phpcassa folder I downloaded from GitHub,
but Cassandra runs fine when I start it from the command prompt.
On Sun, Dec 19, 2010 at 2:29 AM, Rajkumar Gupta wrote:
hi
I am using Cassandra 0.7.0 on windows. I am trying to use thobbs's
PHPcassa with it but when I try this:
require_once('Z:/wamp/bin/php/'.'phpcassa/connection.php');
require_once('Z:/wamp/bin/php/'.'phpcassa/columnfamily.php');
$conn = new Connection('Keyspace');
$column_family = new ColumnFamil
Hi, I am trying to use phpcassa (Hoan's) with Cassandra 0.6.8, but when
I try to run the following PHP script that includes phpcassa,
insert('1', array('email' => 'hoan.tont...@gmail.com',
'password' => 'test'));
?>
On running the above script I get this error:
Fatal error: Uncaught exception 'Excep
You are absolutely back to my main concern. Initially we were consistently
seeing < 10ms read latency, and now we see 25ms (30GB sstable file), 50ms
(100GB sstable file), and 65ms (330GB sstable file) read times for a single
read with nothing else going on in the cluster. Concurrency is not our
problem.
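A quick sanity check on those figures (my arithmetic, using only the numbers quoted above): latency is growing far more slowly than the sstable size, which is roughly what you would expect if each read pays a fixed number of index lookups and seeks rather than a cost proportional to the data volume.

```python
# Read latency vs. sstable size, from the measurements quoted above.
sizes_gb     = [30, 100, 330]   # sstable file size
latencies_ms = [25, 50, 65]     # single uncontended read

size_growth    = sizes_gb[-1] / sizes_gb[0]          # 11x more data
latency_growth = latencies_ms[-1] / latencies_ms[0]  # only 2.6x slower reads

print(size_growth, latency_growth)  # 11.0 2.6
```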
We are using XFS for the data volume. We are load testing now, and
compaction is way behind, but weekly manual compaction should help catch
things up.
Smaller nodes just seem to fit the Cassandra architecture a lot better. We
cannot use cloud instances, so the cost for us to go to <500GB nodes is
prohibitive. Cassandra lumps all processes on the node together into one
bucket, and that almost then requires a smaller node data set. There a
Curious if anyone has done input from a Cassandra super column? Is there any
support for this currently? Thanks
I started a page on the wiki that still needs improvement,
specifically for concerns relating to running large nodes:
http://wiki.apache.org/cassandra/LargeDataSetConsiderations
I haven't linked to it from anywhere yet, pending adding various JIRA
ticket references + give people a chance to ob
> +1 on each of Peter's points except one.
>
> For example, if the hot set is very small and slowly changing, you may
> be able to have 100 TB per node and take the traffic without any
> difficulties.
So that statement was probably not the best. I should have been more
careful. I meant it purely i
I have a pre-production cluster with little data and a similar problem...
PID   %CPU %MEM  VSZ     RSS     TTY STAT START TIME  COMMAND
12916 0.0  80.0  5972756 3231120 ?   Sl   Oct18 15:49 /usr/bin/java -ea
-Xms1G -Xmx1G -XX:+UseParNewGC ...
Data dir:
2.2M  data/
Att,
Daniel Korndorfer
Telecom
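The ps line above quantifies the same off-heap growth discussed earlier in the thread: the JVM object heap is capped at 1 GB (-Xms1G -Xmx1G), yet the process RSS is about 3.1 GB. A rough breakdown (my arithmetic, using only the numbers from that output):

```python
# Off-heap memory implied by the ps output above (RSS in KB, heap from -Xmx).
rss_kb      = 3231120
heap_gb     = 1.0                      # -Xms1G -Xmx1G
rss_gb      = rss_kb / 1024 / 1024     # ~3.08 GB resident
off_heap_gb = rss_gb - heap_gb         # ~2.08 GB outside the Java object heap

total_ram_gb = rss_gb / 0.80           # %MEM is 80.0 -> ~3.85 GB machine RAM

print(round(rss_gb, 2), round(off_heap_gb, 2), round(total_ram_gb, 2))
```

The ~2 GB gap would cover the C heap, thread stacks, mmap'd files, and the JVM itself, which is consistent with the C-heap-leak hypothesis rather than an object-heap problem.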
And I forgot:
(6) It is fully expected that sstable counts spike during large
compactions that take a lot of time simply because smaller compactions
never get a chance to run. (There was just recently JIRA traffic that
added support for parallel compaction, but I'm not sure whether it
fully addres
> How many nodes? 10 (16 cores each, 2 x quad HT CPUs)
> How much RAM per node? 24GB
> What disks and how many? SATA 7200rpm: 1x1TB for the commit log, 4x1TB
> (RAID0) for data
> Is your ring balanced? Yes, random partitioner, balanced very evenly
> How many column families? 4 CFs x 3 keyspaces
> How much RAM is