Hi all,
We have a 6-node Cassandra cluster which has worked fine for a long
time through upgrades from 0.8.x to 1.1.x. Recently we
upgraded to 1.2.2, and since then streaming repair doesn't work
anymore (everything else works: gossip, serving Thrift queries, etc.).
We upgraded to 1.2.3, up
I have always run Cassandra (from 0.9 to the latest 1.2.3) on a Win7 machine
without any issue using the default configuration settings. The only thing I've
changed in cassandra.yaml is cluster_name. I also recommend adding an additional
JVM flag, -Dfile.encoding=UTF-8, in cassandra.bat and cassandra-cli.bat and
nodetool cleanup took about 23.5 hours on each node (we ran this in parallel).
I started the nodetool cleanup at 20:53 on March 22 and it's still running
(10:08, March 25).
RF = 3. The load on the nodes is 490 GB, 491 GB, 323 GB, 476 GB.
I think I read somewhere that removenode is faster the more nodes there are
Hi Tyler,
I must have sounded like a complete dummy. I read your reply from my cell phone
and did not understand it immediately.
I just noticed that I had made mistakes in the seed provider IP addresses.
Thanks Hans-peter
From: Tyler Hobbs [mailto:ty...@datastax.com]
Sent: Thursday, March 21
Hi,
I'm using cassandra 1.2.3.
I've successfully clustered 3 machines and created a keyspace with replication
factor 3.
Node1 seeds Node2
Node2 seeds Node1
Node3 seeds Node1
I insert an entry using node1.
Using cqlsh from another node, I try to delete the item by sending out the
delete command
Hi,
I've deleted a range of keys in my one node test-cluster and want to re-add
them with an older creation time. How can I make sure all tombstones are
gone so that they can be re-added properly? I've tried nodetool compact but
it seems some tombstones remain.
Best regards,
Joel Samuelsson
Thanks again!
Nodetool gossipinfo correctly lists the existing nodes only
- last change to ring topology was months ago
- I started the problem node with -Dcassandra.load_ring_state=false and
observed no unusual behaviour (this is with hinted handoff OFF. With
hinted handoff ON I see the same behaviour
I am currently running 4 nodes, @ 1.2.2.
I was curious whether it matters which node my clients connect to. Using
the python cql driver:
https://pypi.python.org/pypi/cql/
It doesn't give me the option to specify multiple client addresses, just
one. Will this be an issue?
My assumption is that
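Since that driver only accepts one host, client-side failover has to be done by hand. Here is a minimal sketch under stated assumptions: it assumes the `cql` package's `cql.connect(host, port)` entry point, the host list and Thrift port 9160 are placeholders, and the connect function is injectable so the rotation logic can be exercised without a live cluster:

```python
def connect_any(hosts, port=9160, connect=None):
    """Try each host in order, returning the first live connection."""
    if connect is None:
        import cql                    # pip package `cql` (assumed API)
        connect = cql.connect
    last_error = None
    for host in hosts:
        try:
            return connect(host, port)
        except Exception as exc:      # that host is unreachable; try the next
            last_error = exc
    raise last_error                  # every host failed

# e.g. conn = connect_any(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
```

This only helps at connection time; if the connected node later dies you would still need to reconnect through the same wrapper.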
What is the consistency level of your read and write operations?
On Mon, Mar 25, 2013 at 8:39 AM, Byron Wang wrote:
> Hi,
>
> I'm using cassandra 1.2.3.
>
> I've successfully clustered 3 machines and created a keyspace with
> replication factor 3.
>
> Node1 seeds Node2
> Node2 seeds Node1
> Node
Yes it does. Thank you Aaron.
Now I realized that the system keyspace uses strings as keys, like "Ring" or
"ClusterName", and I don't know how to convert these kinds of keys into a
UUID. Any idea?
Carlos Pérez Miguel
2013/3/25 aaron morton
> The best thing to do is start with a look at ByteOrder
Hello!
I got stuck when trying to query CQL3 collections from Java.
I'm using Cassandra 1.2.3 with CQL3. I created a column family to store a
property graph's edges with the following command:
CREATE TABLE vertices (
    id text PRIMARY KEY,
    properties map<text, text>
)
I can access the data from the cqlsh,
Is no one else concerned by the fact that we must define column families the
old way to access them with Pig?
Is there a way to have a column family defined the new way in one DC and the
old way (WITH COMPACT STORAGE) in another DC?
Thanks
--
Cyril SCETBON
Database expert
Humanlog pou
I would advise you not to use raw thrift. It's just a low-level transport
as far as CQL3 is concerned, and what you will get is binary encoded data
that you will have to decode manually. Use a client library (like
https://github.com/datastax/java-driver) that will do that for you.
Though to answer
Last week I attended DataStax's NYC* conference and one of the give-aways
was a wooden USB stick. Finally getting around to loading it I find it
empty.
Anyone else have this problem? Are the conference presentations available
somewhere else?
Brian Tarbox
I think the recorded sessions will be posted to the PlanetCassandra Youtube
channel:
http://www.planetcassandra.org/blog/post/nyc-big-data-tech-day-update
Some of the slides have been posted up to slideshare:
http://www.slideshare.net/boneill42/hms-nyc-talk
http://www.slideshare.net/edwardcapriolo
Can someone explain how to read the cfhistograms output?
[root@db4 ~]# nodetool cfhistograms usertable data
usertable/data histograms
Offset        SSTables    Write Latency    Read Latency    Row Size    Column Count
12857444 4051 0
Hi all,
I am still unable to move forward with this issue.
- when I switch SSL off in inter-DC communication, nodetool rebuild works well
- when I switch internode_compression off, I still get
java.io.IOException: FAILED_TO_UNCOMPRESS exception. Does
internode_compression: none really switch off
I think we all go through this learning curve. Here is the answer I gave
last time this question was asked:
The output of this command seems to make no sense unless I think of it as 5
completely separate histograms that just happen to be displayed together.
Using this example output should I rea
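The five-separate-histograms reading can be sketched in code. The sample below is invented and simplified (underscored header names, no blank columns); real nodetool output would need more careful column splitting, but the idea is the same: each count column is its own (offset, count) histogram sharing the Offset axis:

```python
# Each data row is "Offset" followed by one count per histogram; a count in
# the row with offset N means "this many samples fell in bucket N" for that
# metric alone. The other columns in the same row are unrelated to it.
sample = """\
Offset SSTables Write_Latency Read_Latency Row_Size Column_Count
1 12857444 0 0 0 4051
2 100 5 7 0 0
"""

def split_histograms(text):
    lines = text.strip().splitlines()
    names = lines[0].split()[1:]          # histogram names, minus Offset
    hists = {name: [] for name in names}
    for line in lines[1:]:
        fields = line.split()
        offset, counts = int(fields[0]), fields[1:]
        for name, count in zip(names, counts):
            if int(count):                # zero means no samples in this bucket
                hists[name].append((offset, int(count)))
    return hists

hists = split_histograms(sample)
# e.g. the SSTables histogram alone: [(1, 12857444), (2, 100)]
```

Once split this way, a huge number in one column next to a zero in another stops looking contradictory.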
Thanks much,
I wanted to confirm. We will do this at the application level.
FR
On Sun, Mar 24, 2013 at 10:03 AM, aaron morton wrote:
> From this mailing list I found this Github project that is doing something
> similar by looking at the commit logs:
> https://github.com/carloscm/cassandra-co
When the coordinator node receives a batch_mutate with different N row keys
(for different CF) :
a) does it treat them as N independent requests to replicas, or
b) does the coordinator node split the initial batch_mutate into M
batch_mutates (M <= N) according to row keys?
Thanks,
Dominique
This also has a good description of how to interpret the results
http://thelastpickle.com/2011/04/28/Forces-of-Write-and-Read/
On 25 March 2013 16:36, Brian Tarbox wrote:
> I think we all go through this learning curve. Here is the answer I gave
> last time this question was asked:
>
> The ou
Big thanks to everyone for the suggestions. Turned out to be a good
opportunity for me to learn more about C* internals and add some extra
debug output to the code to validate some of my assumptions.
Aaron, very good points on the problems with doing the reversed queries. I
read your blog posts
The sticks were given to me in a bag at 9:00 AM. I would be
highly impressed if DataStax found a way to retroactively put my 2:00 PM
presentation on that USB stick.
http://www.imdb.com/title/tt0088763/
:)
On Mon, Mar 25, 2013 at 11:49 AM, Brian O'Neill wrote:
> I think the recorded
Hello list,
Could anyone shed some light on how an FP chance of 0.01 can coexist with a
measured FP ratio of... 0.98? Am I reading this wrong, or do 98% of the
requests hitting the bloom filter create a false positive, while the "target"
false ratio is 0.01?
( Also key cache hit ratio is around 0.0
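One hedged way to reconcile the two numbers: if the reported ratio is computed only over lookups the filter answered positively (an assumption about the counter that is worth checking against the source), then a workload probing mostly absent keys pushes the ratio toward 1 even with fp_chance = 0.01. The hit rate below is invented purely for illustration:

```python
# Assumption: measured ratio = false_positives / (false_positives + true_positives),
# whereas fp_chance is a per-lookup probability over *all* lookups.
def measured_fp_ratio(false_positives, true_positives):
    positives = false_positives + true_positives
    return false_positives / positives if positives else 0.0

lookups = 1_000_000
hit_rate = 0.0002                 # hypothetical: almost every probed key is absent
fp_chance = 0.01
true_pos = int(lookups * hit_rate)                     # 200
false_pos = int(lookups * (1 - hit_rate) * fp_chance)  # 9998
print(measured_fp_ratio(false_pos, true_pos))          # ~0.98
```

Under that assumption, a 0.98 ratio would say more about the read pattern (reads for non-existent rows) than about the filter being broken.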
What I don't understand here is the "Row Size" column. Why is it always 0?
Thank you,
Andrey
On Mon, Mar 25, 2013 at 9:36 AM, Brian Tarbox wrote:
> I think we all go through this learning curve. Here is the answer I gave
> last time this question was asked:
>
> The output of this command seems
On Mon, Mar 25, 2013 at 10:36 AM, Brian Tarbox wrote:
> I think we all go through this learning curve. Here is the answer I gave
> last time this question was asked:
>
> The output of this command seems to make no sense unless I think of it as
> 5 completely separate histograms that just happen t
Hi - Quick question: do hints contain the actual data, or is the data read from
the SSTables and then sent to the other node when it comes up?
Thanks,
Kanwar
Thanks, this client library is really great! It should be included in the
Apache Cassandra Wiki's "High level clients" page (
http://wiki.apache.org/cassandra/ClientOptions).
Anyway, here is the code I created (using the cassandra-driver-core Maven
artifact).
Cluster cluster = Cluster.builder().ad
You'll need to temporarily lower gc_grace_seconds for that column family,
run compaction, and then restore gc_grace_seconds to its original value.
See http://wiki.apache.org/cassandra/DistributedDeletes for more info.
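The cycle described above could look like the following sketch. "ks"/"cf" are placeholder names and 864000 seconds (10 days) is only the default grace period, so substitute your own values; the ALTER statements are meant to be run in cqlsh while the middle step is a nodetool invocation:

```python
def tombstone_purge_steps(keyspace, table, original_gc_grace=864000):
    """Return the command sequence for purging tombstones early."""
    return [
        # 1. let compaction drop tombstones immediately (run in cqlsh)
        f"ALTER TABLE {keyspace}.{table} WITH gc_grace_seconds = 0;",
        # 2. force a major compaction so the tombstones are actually purged
        f"nodetool compact {keyspace} {table}",
        # 3. restore the original grace period (run in cqlsh)
        f"ALTER TABLE {keyspace}.{table} WITH gc_grace_seconds = {original_gc_grace};",
    ]

for step in tombstone_purge_steps("ks", "cf"):
    print(step)
```

On a multi-node cluster this shortcut is only safe if all replicas are up for the whole window, since gc_grace_seconds = 0 removes the protection against deleted data being resurrected by a node that missed the delete.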
On Mon, Mar 25, 2013 at 7:40 AM, Joel Samuelsson
wrote:
> Hi,
>
> I've deleted
I am using Cassandra 1.1.5.
nodetool repair is not coming back on the command line. Did it run
successfully? Did it hang? How do you find out whether the repair was
successful? I did not find anything in the logs. "nodetool compactionstats"
and "nodetool netstats" are clean.
nodetool compactionstats pend
Check nodetool tpstats, looking for AntiEntropySessions/AntiEntropyStage.
Grep the log for "repair" and "merkle tree".
- Original Message -
From: "S C"
To: user@cassandra.apache.org
Sent: Monday, March 25, 2013 2:55:30 PM
Subject: nodetool repair hung?
I am using Cassandra
Hi,
I created a table with the following structure in cqlsh (Cassandra
1.2.3 - cql 3):
CREATE TABLE mytable (
    column1 text,
    column2 text,
    messageId timeuuid,
    message blob,
    PRIMARY KEY ((column1, column2), messageId)
);
I can quite happily add values to this table. e.g:
in
Thank you. It helped me.
> Date: Mon, 25 Mar 2013 15:22:32 -0700
> From: wz1...@yahoo.com
> Subject: Re: nodetool repair hung?
> To: user@cassandra.apache.org
>
> check nodetool tpstats and looking for AntiEntropySessions/AntiEntropyStages
> grep the log and looking for "repair" and "merkle tree"
I've actually tried all of those. Anyway, I think I've solved the issue. Seems
like node1 is having some issues with its connections.
Thanks!
--
Byron Wang
Sent with Sparrow (http://www.sparrowmailapp.com/?sig)
On Monday, March 25, 2013 at 9:11 PM, Víctor Hugo Oliveira Molinar wrote:
Hi,
I'm currently trying to implement an offline message retrieval solution wherein
I retrieve messages after a particular timestamp for specific users. My
question is: which route should I go for… multiple primary keys in an IN
clause, or using 2i
The current model of the messages table lo
Hi,
I am experiencing this bug on our 1.1.6 cluster:
https://issues.apache.org/jira/browse/CASSANDRA-4765
The pending compactions count has been stuck at a constant value, so I suppose
something is not compacting because of this. Is there a workaround besides
upgrading? We are not ready to upgrade just yet