Using Cassandra 2.0.3, I'm seeing repairs hanging on all nodes on our
cluster. On the node running the repair, the error is:
Caused by: org.apache.cassandra.exceptions.RepairException: [repair
#8d77b2a0-670b-11e3-949c-6176e8469a51 on OpsCenter/rollups300,
(251968055115794,2549561992225510945]]
Hi,
I've read some articles about Cassandra and noticed the opinion that the
Thrift protocol has some flaws and that Thrift should go away in the near
future. But I cannot find any reference answering the question: why is it so bad?
--
Daneel S. Yaitskov
That's just misinformation from people who don't understand Thrift.
The Thrift drivers are still much more mature than the Java drivers right
now. DataStax has stated on multiple occasions that Thrift isn't going
anywhere. CQL is fine if people only want to use a SQL-like language. Search the
cassandra
Hopefully this is a valid clarification, rather than a hijack of your
thread!
How does the binary protocol fit into this? I have not used it but was
told you can implement CQL calls via thrift or via the binary protocol. Is
the binary protocol superior to thrift?
If you use the binary protocol
Thrift is a binary protocol; it just happens to support multiple platforms.
The DataStax Java and C# drivers are also binary, but they are optimized for
CQL. For example, the DataStax C# driver uses Google protocol buffers,
which is a different binary protocol.
the documentation on this stuff hasn't bee
Hi all,
we are reimplementing a legacy interface of an inventory-like service
(currently built on top of MySQL) on Cassandra, and I thought I would share
some findings with the list. The interface semantics are given and cannot be
changed. We chose Cassandra due to its multiple datacenter capabiliti
Based on a question I posted a while back, I got the following answer (for
something unrelated to this):
When we speak of "binary protocol", we talk about the protocol introduced
> in Cassandra 1.2 that is an alternative to Thrift for CQL3. It's a custom
> binary protocol that has no link to t
Peter - I missed your comment regarding optimised for CQL (I was distracted
by the statement that thrift is a binary protocol, as I got 'corrected' for a
similar statement in one of my previous posts). So comparing
thrift to the 'newer' binary protocols it sounds like the only real benefit
is
That is correct, from studying the DataStax Java and C# drivers.
On Tue, Dec 17, 2013 at 10:22 AM, Stuart Broad wrote:
> Peter - I missed your comment regarding optimised for CQL (I was
> distracted by the statement that thrift is a binary protocol - As I got
> 'corrected' for a similar statement to that
Hi,
I'm trying to decommission a node from a six node Cassandra 2.0.3 cluster,
following the instructions at
http://www.datastax.com/documentation/cassandra/2.0/webhelp/index.html#cassandra/operations/ops_remove_node_t.html.
The node has just short of 11GB data in one keyspace (RF=3), and the repa
On Fri, Dec 13, 2013 at 2:25 PM, Kumar Ranjan wrote:
> Results: {u'narrativebuddieswin': ['609548930995445799_752368319',
> '609549303525138481_752368319', '610162034020180814_752368319',
> '610162805856002905_752368319', '610163571417146213_752368319',
> '610165900312830861_752368319']}
>
> none
On Tue, Dec 17, 2013 at 8:00 AM, Joel Segerlind wrote:
> Hi,
>
> I'm trying to decommission a node from a six node Cassandra 2.0.3 cluster,
> following the instructions at
> http://www.datastax.com/documentation/cassandra/2.0/webhelp/index.html#cassandra/operations/ops_remove_node_t.html
>
...
On Mon, Dec 16, 2013 at 2:55 AM, Bonnet Jonathan. <
jonathan.bon...@externe.bnpparibas.com> wrote:
> The link is not allowed by my company where I work. So has nobody ever managed
> to get a restore with commitlog archiving working? It seems strange to me to keep this
> option in all new releases of Cassandra if
On Tue, Dec 17, 2013 at 4:32 AM, Russ Garrett wrote:
> This seems to be the same error as CASSANDRA-6210, however there are
> no nodes bootstrapping (I haven't added any nodes since starting the
> cluster up).
>
I would comment to that effect on CASSANDRA-6210, were I you.
Are you using vnodes?
On 17 December 2013 20:40, Robert Coli wrote:
> On Tue, Dec 17, 2013 at 8:00 AM, Joel Segerlind wrote:
>
>> Hi,
>>
>> I'm trying to decommission a node from a six node Cassandra 2.0.3
>> cluster, following the instructions at
>> http://www.datastax.com/documentation/cassandra/2.0/webhelp/index.h
On Tue, Dec 17, 2013 at 11:46 AM, Joel Segerlind wrote:
> Decommission streams a node's keys to the new node responsible for the
>> range; why would a repair be required beforehand?
>>
>
> I wondered the same thing, but I thought that the one writing that ought
> to know way better than me.
>
Not
Question: I have a 5-node cluster (local, with ccm) and a keyspace with RF=3.
Three nodes are down. I run "nodetool ring" on the two living nodes and
both see the other three nodes as down.
Then I do an insert at CL QUORUM and get an UnavailableException, which is expected.
I am using the DataStax Java driver v2.
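For reference, a minimal sketch of the kind of write involved, using the DataStax Java driver 2.0; the keyspace, table, and contact point are made up, and the point is only how the consistency level is set and where UnavailableException surfaces:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ConsistencyLevel;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;
import com.datastax.driver.core.exceptions.UnavailableException;

public class QuorumWriteSketch {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder()
                .addContactPoint("127.0.0.1")            // hypothetical contact point
                .build();
        Session session = cluster.connect();
        try {
            // Hypothetical keyspace/table created with RF=3.
            SimpleStatement insert = new SimpleStatement(
                    "INSERT INTO ks.test (id, value) VALUES (1, 'x')");
            insert.setConsistencyLevel(ConsistencyLevel.QUORUM);
            session.execute(insert);
        } catch (UnavailableException e) {
            // Raised when fewer replicas are alive than QUORUM needs
            // for this partition (2 of 3 with RF=3).
            System.err.printf("Unavailable: required %d, alive %d%n",
                    e.getRequiredReplicas(), e.getAliveReplicas());
        } finally {
            cluster.close();
        }
    }
}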
Also, you are going to encounter code that uses Thrift/Hector/Astyanax,
and if you work on a codebase that was designed before CQL you still need
to support it. There are some concepts people have employed in those tools,
like VirtualKeyspaces, that have not made their way into CQL.
On Tue,
For me, it's best to know both and use each where it is strong; that way
you get the most out of Cassandra.
I am biased in favor of Thrift, since I've been contributing to Hector and
ported Hector to C# over the summer.
On Tue, Dec 17, 2013 at 4:20 PM, Edward Capriolo wrote:
> Also you are going
On 17 December 2013 20:56, Robert Coli wrote:
> On Tue, Dec 17, 2013 at 11:46 AM, Joel Segerlind wrote:
>
>> Decommission streams a node's keys to the new node responsible for the
>>> range; why would a repair be required beforehand?
>>>
>>
>> I wondered the same thing, but I thought that the one
Realize that there will be more and more new features that come along as
Cassandra matures. It is an overwhelming certainty that these features will be
available through the new native interface & CQL. The same level of certainty
can't be given to Thrift. Certainly if you have existing applications
wouldn't that go against what DataStax has publicly stated about support
for Thrift?
So far, DataStax has been good about keeping the Thrift API up to date. I'm
inclined to trust that DataStax mean what they say. If DataStax doesn't have the
manpower to keep Thrift up to speed with new features, I'm w
On Tue, Dec 17, 2013 at 1:52 PM, Dave Brosius wrote:
> Realize that there will be more and more new features that come along as
> Cassandra matures. It is an overwhelming certainty that these features will
> be available through the new native interface & CQL. The same level of
> certainty can't be gi
Hello everyone,
I was using astyanax connection pool defined as this:
ipSeeds = "LOAD_BALANCER_HOST:9160";
conPool.setSeeds(ipSeeds)
.setDiscoveryType(NodeDiscoveryType.TOKEN_AWARE)
.setConnectionPoolType(ConnectionPoolType.TOKEN_AWARE);
However, my cluster has 4 nodes and I have 8 client machi
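For context, a fuller sketch of how I understand such a token-aware Astyanax context is usually wired up (cluster name, keyspace, pool name and sizes are made up; the discovery and pool types live on AstyanaxConfigurationImpl while the seeds live on ConnectionPoolConfigurationImpl, and getClient() is getEntity() on older Astyanax releases):

import com.netflix.astyanax.AstyanaxContext;
import com.netflix.astyanax.Keyspace;
import com.netflix.astyanax.connectionpool.NodeDiscoveryType;
import com.netflix.astyanax.connectionpool.impl.ConnectionPoolConfigurationImpl;
import com.netflix.astyanax.connectionpool.impl.ConnectionPoolType;
import com.netflix.astyanax.connectionpool.impl.CountingConnectionPoolMonitor;
import com.netflix.astyanax.impl.AstyanaxConfigurationImpl;
import com.netflix.astyanax.thrift.ThriftFamilyFactory;

public class AstyanaxPoolSketch {
    public static Keyspace connect() {
        AstyanaxContext<Keyspace> context = new AstyanaxContext.Builder()
                .forCluster("TestCluster")               // hypothetical cluster name
                .forKeyspace("test_ks")                  // hypothetical keyspace
                .withAstyanaxConfiguration(new AstyanaxConfigurationImpl()
                        .setDiscoveryType(NodeDiscoveryType.TOKEN_AWARE)
                        .setConnectionPoolType(ConnectionPoolType.TOKEN_AWARE))
                .withConnectionPoolConfiguration(new ConnectionPoolConfigurationImpl("pool")
                        .setPort(9160)
                        .setMaxConnsPerHost(3)
                        .setSeeds("LOAD_BALANCER_HOST:9160"))
                .withConnectionPoolMonitor(new CountingConnectionPoolMonitor())
                .buildKeyspace(ThriftFamilyFactory.getInstance());
        context.start();
        return context.getClient();
    }
}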
> Could anybody suggest how I can achieve this in Cassandra?
It’s not supported.
You may want to model the feeschedule as a table.
Cheers
-
Aaron Morton
New Zealand
@aaronmorton
Co-Founder & Principal Consultant
Apache Cassandra Consulting
http://www.thelastpickle.com
On 12/1
> Node: 4 CPU, 6 GB RAM, virtual appliance
>
> Cassandra: 3 GB Heap, vnodes 256
FWIW that’s a very low-powered node.
> Maybe we forgot some necessary actions during or after the cluster expansion process.
> We are open to every idea.
Were the nodes in the seed list when they joined the cluster? If so
CQL3 and thrift do not support an offset clause, so you can only really support
next / prev page calls to the database.
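A rough sketch of that next-page pattern with the DataStax Java driver (table and column names are made up; the point is that the client remembers the last clustering value it saw and restarts from there, since there is no OFFSET):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

public class NextPageSketch {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect();

        // Hypothetical table: CREATE TABLE ks.items (bucket text, id bigint, ...,
        //                                            PRIMARY KEY (bucket, id))
        long lastId = Long.MIN_VALUE;
        for (Row row : session.execute(
                "SELECT id FROM ks.items WHERE bucket = 'b1' LIMIT 100")) {
            lastId = row.getLong("id");            // remember where this page ended
        }

        // Next page: a range on the clustering column, starting after the last id.
        session.execute(
                "SELECT id FROM ks.items WHERE bucket = 'b1' AND id > " + lastId + " LIMIT 100");

        cluster.close();
    }
}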
> I am trying to use xget with column_count and buffer_size parameters. Can
> someone explain to me how it works? From the docs, my understanding is that I
> can do something l
> 'twitter_row_key': OrderedDict([('411186035495010304', u'{"score": 0, "tid":
> 411186035495010304, "created_at": "Thu Dec 12 17:29:24 + 2013",
> "favorite": 0, "retweet": 0, "approved": "true"}'),])
>
> How can I set approved to 'false' ??
>
>
It looks like the value of the 4111860354950
> Is it possible to get all the data for last 5 seconds or 10 seconds or 30
> seconds by using the id column?
Not using the current table.
Try this
CREATE TABLE test1 (
day int,
timestamp int,
count int,
record_nam
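To flesh that out with made-up column names: partition by day (the bucket), cluster by the event time, and "the last 10 seconds" becomes a range on the clustering column. A rough sketch with the Java driver, under those assumptions:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;
import java.util.Date;

public class RecentRowsSketch {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect("ks");     // hypothetical keyspace

        // Hypothetical schema: day is the partition key (a bucket), the event
        // time is the clustering column, so rows within a day are time-ordered.
        session.execute("CREATE TABLE IF NOT EXISTS test1 ("
                + " day int,"
                + " ts timestamp,"
                + " count int,"
                + " record_name text,"
                + " PRIMARY KEY (day, ts))");

        // "Last 10 seconds" is then a slice of today's partition.
        Date cutoff = new Date(System.currentTimeMillis() - 10000);
        session.execute("SELECT * FROM test1 WHERE day = ? AND ts > ?", 20131217, cutoff);

        cluster.close();
    }
}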
> With a single node I get 3K for cassandra 1.0.12 and 1.2.12. So I suspect
> there is some network chatter. I have started looking at the sources, hoping
> to find something.
1.2 is pretty stable, I doubt there is anything in there that makes it run
slower than 1.0. It’s probably something in y
We are seeking to replace Acunu in our technology stack / platform. It is
the only component in our stack that is not open source.
In preparation, over the last few weeks I’ve migrated Virgil to CQL. The
vision is that Virgil could receive a REST request to upsert/delete data
(hierarchical JSON
> Request did not complete within rpc_timeout.
The node is overloaded and did not return in time.
Check the logs for errors or excessive JVM GC and try selecting less data.
Cheers
-
Aaron Morton
New Zealand
@aaronmorton
Co-Founder & Principal Consultant
Apache Cassandra Consul
> Write throughput is remaining at around 460 pkts/sec or sometimes even falling
> below that rate, as against the expected rate of around 920 pkts/sec. Is it
> some kind of limitation of Cassandra or am I doing something wrong?
There is nothing in Cassandra that would make that happen. Double c
Try using jstack to see if there are a lot of threads there.
Are you using vnodes and Hadoop?
https://issues.apache.org/jira/browse/CASSANDRA-6169
Cheers
-
Aaron Morton
New Zealand
@aaronmorton
Co-Founder & Principal Consultant
Apache Cassandra Consulting
http://www.thelas
-tmp- files will sit in the data dir; if there was an error creating them
during compaction or flushing to disk, they will sit around until a restart.
Check the logs for errors to see if compaction was failing on something.
Cheers
-
Aaron Morton
New Zealand
@aaronmorton
Co-Foun
> * select id from table where token(id) > token(some_value) and
> secondary_index = other_val limit 2 allow filtering;
>
> Filtering absolutely kills the performance. On a table populated with 130.000
> records, single node Cassandra server (on my i7 notebook, 2GB of JVM heap)
> and secondary
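For what it's worth, the usual way around that kind of query is to maintain a separate lookup table keyed by the value being filtered on, so the read is a plain partition lookup instead of a scan with ALLOW FILTERING. A sketch with made-up names:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;
import java.util.UUID;

public class LookupTableSketch {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect("ks");     // hypothetical keyspace

        // Hypothetical lookup table: the filtered value becomes the partition key.
        session.execute("CREATE TABLE IF NOT EXISTS ids_by_val ("
                + " val text,"
                + " id uuid,"
                + " PRIMARY KEY (val, id))");

        // Write path: insert into the main table and the lookup table together.
        session.execute("INSERT INTO ids_by_val (val, id) VALUES (?, ?)",
                "other_val", UUID.randomUUID());

        // Read path: no secondary index, no ALLOW FILTERING, no full scan.
        session.execute("SELECT id FROM ids_by_val WHERE val = ? LIMIT 2", "other_val");

        cluster.close();
    }
}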
Thanks for the reply. By packet drops I mean the client is not able to
read the shared memory as fast as the software switch is writing into it.
I doubt it's an issue with the client, but can you point to any particular
issues that could cause this type of scenario?
Also, I would like to know if i