> - A Cassandra node (say 3) goes down (even with 24 GB of RAM, OOM errors
> are the bane of my existence)
Following up on this bit; OOM should not be the status quo. Have you
tweaked JVM heap sizes to reflect your memtables sizes etc?
http://wiki.apache.org/cassandra/MemtableThresholds
--
/ P
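For a ballpark starting point, the sizing arithmetic that page describes can be sketched as below; the 3x per-memtable overhead factor and the 1 GB baseline are illustrative assumptions, not exact figures from the wiki:

```cpp
// Hypothetical rule-of-thumb heap estimate (all values in MB):
// each hot column family can hold roughly memtable_throughput_mb of
// memtable data, tripled to allow for flush/compaction copies, plus
// a baseline for key caches and JVM internals.
long heap_estimate_mb(long memtable_throughput_mb, int hot_column_families) {
    return memtable_throughput_mb * 3 * hot_column_families + 1024;
}
```

With 4 hot CFs at 128 MB each this suggests roughly 2.5 GB of heap; if OOMs persist well above such an estimate, look for very wide rows or oversized caches instead.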
> As far as I remember, cassandra had been causing problems when there was an
> IP change back in version 0.6?
I know you already proceeded, but FWIW I think the complication with
IP addresses is limited to changing the address of a node. It sounded
like you were going to add 4 nodes and then sim
thx for the answers!
2010/12/3 Jonathan Ellis
> http://www.riptano.com/blog/whats-new-cassandra-07-secondary-indexes
>
>
> On Thu, Dec 2, 2010 at 9:34 AM, Yann Perchec, Novapost <
> yann.perc...@novapost.fr> wrote:
>
>> Hello everybody,
>>
>> I've been playing for a couple of days with the cassandra
> In our insert tests the average heap usage slowly grows up to the 3 GB
> limit (jconsole monitor over 50 min http://oi51.tinypic.com/k12gzd.jpg) and
> the CompactionManager queue is also constantly growing, up to about 50
> pending jobs.
Since you're obviously bottlenecking on compaction, ar
thank you.
but I meant the probability of a node receiving the request, not eventually
processing it.
At 2010-12-06 00:56:58,"Brandon Williams" wrote:
2010/12/5 魏金仙
If a particular client send 5 requests to a 6-node cluster, then the
probability of each node receiving(not be responsible for) t
so when will index files be in memory?
At 2010-12-06 00:54:48,"Brandon Williams" wrote:
2010/12/5 魏金仙
for each sstable, there is an index file, which is loaded in memory to locate a
particular key's offset efficiently.
Index files are not held in memory.
and for each CF, KeysCached ca
After one seed node crashed, I want to add a new node as a seed node. I set
auto_bootstrap to true, but the new node doesn't migrate data from other nodes.
How can I add a new seed node and have it migrate data from
other nodes?
Thanks,
LiuLei
2010/12/6 魏金仙 :
> so when will index files be in memory?
The index files are never fully in memory (because they would quickly become
too big).
Hence, only a sample of the file is in memory (1 out of every 128 entries by
default). When
cassandra needs to know where a (row) key is on disk (for a given
SStabl
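A minimal sketch of what that sampling gives you; the names here (`IndexEntry`, `nearest_sampled_offset`) are hypothetical, not Cassandra's actual classes:

```cpp
#include <algorithm>
#include <iterator>
#include <string>
#include <vector>

// Only 1 of every 128 index entries is kept in memory. To find a key,
// binary-search the in-memory sample for the last sampled key <= the
// target, then scan the on-disk index file forward from that offset.
struct IndexEntry { std::string key; long offset; };

long nearest_sampled_offset(const std::vector<IndexEntry>& sample,
                            const std::string& key) {
    // First sampled entry whose key is strictly greater than the target.
    auto it = std::upper_bound(
        sample.begin(), sample.end(), key,
        [](const std::string& k, const IndexEntry& e) { return k < e.key; });
    if (it == sample.begin()) return 0;   // key precedes all samples
    return std::prev(it)->offset;         // start the disk scan here
}
```

At worst the scan covers 127 on-disk entries, which bounds lookup cost while keeping the memory footprint at 1/128th of the full index.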
I have two nodes running Cassandra; each machine has 4 GB of memory.
First I set the heap size to 2 GB, and both ran normally.
Then I set the heap size to 1 GB, and the client that inserts data into and
reads data from cassandra began throwing Read/Write Unavailable exceptions.
And one cassandra node began logging GC for ConcurrentM
And after checking the node that GCs with ConcurrentMarkSweep frequently, its
OC (current old space capacity, KB) is 1006016.0 and its OU (old space
utilization, KB) is also 1006016.0, almost all of the memory.
Does this situation imply the heap size is set too low?
On Mon, Dec 6, 2010 at 8:07 PM, Ying Tang wrot
If it's GCing frequently and each CMS is only collecting a small
fraction of the old gen, then your heap is probably too small.
(GCInspector only logs collections that take over 1s, which should
never include ParNew.)
On Mon, Dec 6, 2010 at 7:11 AM, Ying Tang wrote:
> And after checking node who
You're right, they should be the same.
Next time this happens, set the log level to debug (from
StorageService jmx) on the surviving nodes and let a couple queries
fail, before restarting the 3rd (and setting level back to info).
On Sat, Dec 4, 2010 at 12:01 AM, Dan Hendry wrote:
> Doesn't consi
set it as a seed _after_ bootstrapping it into the cluster.
On Mon, Dec 6, 2010 at 5:01 AM, lei liu wrote:
> After one seed node crashed, I want to add a new node as a seed node. I set
> auto_bootstrap to true, but the new node doesn't migrate data from other
> nodes.
>
> How can I add one new seed node
Thanks, Jonathan, for your reply.
How can I bootstrap the node into the cluster? I know if the node is a seed
node, I can't set AutoBootstrap to true.
2010/12/6 Jonathan Ellis
> set it as a seed _after_ bootstrapping it into the cluster.
>
> On Mon, Dec 6, 2010 at 5:01 AM, lei liu wrote:
> > After on
Hi, I've the following schema defined:
EventsByUserDate : {
  UserId : {
    epoch: { // SC
      IID,
      IID,
      IID,
      IID
    },
    // and the other events in time
    epoch: {
      IID,
      IID,
      IID
    }
  }
}
Where I'm expecting to store all the event ids for a user ordered by date
(it's seconds since epoch as long long), I'm usin
> bleeding edge code you are running (did you try rc1?) or you do have nodes
on different versions
All nodes are running code from
https://svn.apache.org/repos/asf/cassandra/branches/cassandra-0.7 which I
thought was essentially RC1 with fixes but I will give the actual release a
try.
> you have
The node can be set as a seed node at any time. It does not need to be a
seed node when it joins the cluster. You should remove it as a seed node,
set autobootstrap to true and let it join the cluster. Once it has joined
the cluster you should add it as a seed node in the configuration for all of
y
What client are you using? Is it storing the results in a hash map or some
other type of
non-order preserving dictionary?
- Tyler
On Mon, Dec 6, 2010 at 10:11 AM, Guillermo Winkler wrote:
> Hi, I've the following schema defined:
>
> EventsByUserDate : {
> UserId : {
> epoch: { // SC
> IID,
>
There are a few seats open for each:
LA training Wednesday: http://www.eventbrite.com/event/1002369113
Tokyo training Thursday: http://nosqlfd.eventbrite.com/
I will be teaching the LA class. Tokyo will be taught by Nate McCall
(with pauseless translation to Japanese) and hosted by our friends a
2010/12/6 魏金仙
> thank you.
> but I meant the probability of a node receiving the request, not eventually
> processing it.
I see. That depends on how the client-side load balancing is written.
-Brandon
We've also got Jake Luciani (@tjake) giving a talk at Cassandra London this
Wednesday - this is a great opportunity to meet with other Cassandra users.
There will be some free beer and food available.
http://www.meetup.com/Cassandra-London/calendar/15351291/
Dave
On 6 December 2010 17:05, Jonath
I'm using thrift in C++ and inserting the results in a vector of pairs, so
client-side-mangling does not seem to be the problem.
Also I'm using a "test" column where I insert the same value I'm using as
super column name (in this case the same date converted to string) and when
queried using cassa
We're going to be hosting people at the Twitter offices the evening of
December 13th to focus on testing 0.7. If you're interested please
contact me offlist and I'll add you to the invite. Note that we're
trying to keep the group small and focused.
-ryan
How are you packing the longs into strings? The large negative numbers
point to that being done incorrectly.
Bitshifting and putting each byte of the long into a char[8] then
stringifying the char[] is the best way to go. Cassandra expects
big-ending longs, as well.
- Tyler
On Mon, Dec 6, 2010
That should be "big-endian".
On Mon, Dec 6, 2010 at 12:29 PM, Tyler Hobbs wrote:
> How are you packing the longs into strings? The large negative numbers
> point to that being done incorrectly.
>
> Bitshifting and putting each byte of the long into a char[8] then
> stringifying the char[] is th
Also, thought I should mention:
When you make a std::string out of the char[], make sure to use the
constructor with the size_t parameter (size 8).
- Tyler
On Mon, Dec 6, 2010 at 12:29 PM, Tyler Hobbs wrote:
> That should be "big-endian".
>
>
> On Mon, Dec 6, 2010 at 12:29 PM, Tyler Hobbs wro
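Taken together, the advice in this thread (big-endian byte order, a fixed 8-byte buffer, and the (char*, size_t) string constructor) can be sketched as follows; `pack_be64` is a hypothetical helper name, not an API from the thread:

```cpp
#include <cstdint>
#include <string>

// Pack a 64-bit integer into an 8-byte, big-endian std::string,
// the layout Cassandra's LongType comparator expects.
std::string pack_be64(uint64_t v) {
    char buf[8];
    for (int i = 7; i >= 0; --i) {        // least significant byte goes last
        buf[i] = static_cast<char>(v & 0xFF);
        v >>= 8;
    }
    // Use the (char*, size_t) constructor so embedded NUL bytes survive;
    // the (char*) constructor would truncate at the first zero byte.
    return std::string(buf, sizeof(buf));
}
```

Unlike the memcpy-of-a-native-long approach shown earlier in the thread, this is byte-order independent: the shifts produce big-endian output on both little- and big-endian hosts.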
+1
I'm doing this in my C++ client so contact me offlist if you need code
David
Sent from my iPhone
On Dec 6, 2010, at 1:33 PM, Tyler Hobbs wrote:
> Also, thought I should mention:
>
> When you make a std::string out of the char[], make sure to use the
> constructor with the size_t parameter
Hi I'm trying to create a connection to a server running cassandra doing this:
compass = Cassandra.new('Compas', servers="223.798.456.123:9160")
But once I try to get some data I realize that there's no connection. Any
ideas? Am I missing something?
Thanks
uh, ok I was just copying :P
string result;
result.resize(sizeof(long long));
memcpy(&result[0], &l, sizeof(long long));
I'll try and let you know
many thanks!
On Mon, Dec 6, 2010 at 4:29 PM, Tyler Hobbs wrote:
> How are you packing the longs into strings? The large negative numbers
What function are you calling to get data, and what is the error? Try calling a function like keyspaces(); it should return a list of the keyspaces in your cluster and is a good way to test things are connected. If there is still no joy, check you can connect to your cluster using the cassandra-cli c
On Thu, 2010-12-02 at 10:30 -0800, Clint Byrum wrote:
> On Wed, 2010-12-01 at 17:00 +0100, Olivier Rosello wrote:
> > > FYI, 0.7.0~rc1 debs are available in a new PPA for experimental
> > > releases:
> > >
> > > http://launchpad.net/~cassandra-ubuntu/+archive/experimental
> > >
> >
> > It seems
I've tried the keyspaces() function and got this on return:
compass.keyspaces()
CassandraThrift::Cassandra::Client::TransportException:
CassandraThrift::Cassandra::Client::TransportException
from
/home/compass/.rvm/gems/ruby-1.9.2...@rails3/gems/thrift-0.2.0.4/lib/thrift/transport/socke
You can run the cassandra-cli from any machine. If you run it from the same machine as your ruby code, it's a reliable way to check you can connect to the cluster. OK, next set of questions:
- what version of cassandra are you using? Is it 0.7?
- what require did you run? was it require 'cassandr
Accidentally sent to me. Begin forwarded message:
From: Max
Date: 07 December 2010 6:00:36 AM
To: Aaron Morton
Subject: Re: Re: Re: Cassandra 0.7 beta 3 outOfMemory (OOM)
Thank you both for your answer!
After several tests with different parameters we came to the
conclusion that it must be a bug.
It
Hi I've successfully managed to connect to the server through the cassandra-cli
command but still no luck on doing it from Fauna, I'm running cassandra 0.6.8
and I did the usual require 'cassandra'
I've changed the ThriftAddress on the storage-conf.xml to the IP address of the
server itself, do
Jake or anyone else got experience bulk loading into Lucandra? Or does anyone have experience with JRockit? Max, are you sending one document at a time into lucene? Can you send them in batches (like solr), and if so does it reduce the amount of requests going to cassandra? Also, cassandra.bat is con
So now it's behaving :)
#define ntohll(x) (((_int64)(ntohl((int)((x << 32) >> 32))) << 32) | \
    (unsigned int)ntohl(((int)(x >> 32))))
string result;
result.resize(sizeof(long long));
long long bigendian = htonll(l);
memcpy(&result[0], &bigendian, sizeof(long long));
=> (super_column=1291668233,
Can we get an update? After reading through the comments on 1072, it looks
like this is getting close to finished, but it's hard for someone not knee-deep
in the project to tell. I'm primarily interested in the timeline you foresee
for getting the increment support into trunk for 0.7, and some
It would help if you give us more context. The code snippet you've
given us is incomplete and not very helpful.
-ryan
On Mon, Dec 6, 2010 at 12:33 PM, Alberto Velandia
wrote:
> Hi I've successfully managed to connect to the server through the
> cassandra-cli command but still no luck on doing it
I've found the solution, thanks for the help, I needed to change the addresses
on the storage-conf.xml both ListenAddress and ThriftAddress to the address of
the server itself. Sorry about the snippet being incomplete btw
On Dec 6, 2010, at 4:18 PM, Ryan King wrote:
> It would help if you giv
How is pagination accomplished when you don't know a start key? For
example, how can I "jump" to page 10?
You are right, the end is in sight for 1072 to be committed to trunk.
It won't be documented for end-users or committed to 0.7 branch until
we fix the drawbacks elaborated on the ticket, because that fixing
won't be backwards-compatible. And at that point we'll probably be
close to the next major
Short answer: that's a bad idea; don't do it.
Long answer: you could count 10 pages of results and jump there
manually, which is what "offset 10 * page_size" is doing for you under
the hood, but that gets slow quickly as your offset grows. Which is
why you shouldn't do it with a SQL db either.
O
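The alternative the reply points at is key-based ("cursor") paging: remember the last key of the current page and use it as the start key of the next slice. A minimal in-memory sketch, with `fetch_page` standing in for a real range query such as get_range_slices over an order-preserving store:

```cpp
#include <string>
#include <vector>

// Fetch up to page_size + 1 keys starting at start_key (inclusive).
// `store` stands in for the server-side sorted key range.
std::vector<std::string> fetch_page(const std::vector<std::string>& store,
                                    const std::string& start_key,
                                    std::size_t page_size) {
    std::vector<std::string> page;
    for (const auto& k : store) {
        if (k < start_key) continue;              // start at the cursor
        page.push_back(k);
        if (page.size() == page_size + 1) break;  // one extra = "has next"
    }
    return page;
}
```

The caller shows the first page_size rows and uses the extra row's key as the start_key of the next request. Each page costs O(page_size) regardless of how deep you are, which is exactly what a numeric "jump to page 10" offset cannot do.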
I'm running a big test -- ten nodes with 3T disk each. I'm using
0.7.0rc1. After some tuning help (thanks Tyler) lots of this is working
as it should. However a serious event occurred as well -- the server
froze up -- and though mutations were dropped, no error was reported to
the client. Here'
Thanks Nick.
After I add the new node as seed node in the configuration for all of my
nodes, do I need to restart all of my nodes?
2010/12/7 Nick Bailey
> The node can be set as a seed node at any time. It does not need to be a
> seed node when it joins the cluster. You should remove it as a se