The issue below could result in abandoned threads under high contention, so
we'll get that fixed.
But we are not sure how/why it would be called so many times. If you could
provide a full list of threads and the output from nodetool gossipinfo that
would help.
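For example, something along these lines (the pid and output file names are only placeholders):
jstack <cassandra-pid> > threads.txt
nodetool -h localhost gossipinfo > gossipinfo.txt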
Cheers
-
Aaron
Hi Aaron,
thanks for the response.
Permissions are correct: owner is cassandra (ubuntu) and permissions
are drwxr-xr-x.
When I created the schema, the keyspaces were created as directories in the
.../data/ directory.
When I use cassandra-cli, set the CL to QUORUM, ensure two instances are up
(nodetool ring
Double check the file permissions?
Write some data (using cqlsh or cassandra-cli) and flush to make sure the new
files are created where you expect them to be.
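For example, with cassandra-cli (the keyspace and column family names here are only
placeholders; adjust them and the data directory path to your setup):
use MyKeyspace;
set MyCF[utf8('k1')][utf8('col1')] = utf8('v1');
then from the shell:
nodetool -h localhost flush MyKeyspace MyCF
ls /var/lib/cassandra/data/MyKeyspace/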
Cheers
-
Aaron Morton
Freelance Cassandra Consultant
New Zealand
@aaronmorton
http://www.thelastpickle.com
On 1/05
> Many many many of the threads are trying to talk to IPs that aren't in the
> cluster (I assume they are the IPs of dead hosts).
Are these IPs from before the upgrade? Are they IPs you expect to see?
Cross reference them with the output from nodetool gossipinfo to see why the
node thinks
> I thought that each datacenter has 100% coverage of the token range. What does
> the value in the "Owns" field mean and how does it affect replication (for example,
> with replication factors DC1:1, DC2:2)?
Run the command and specify your keyspace; that will tell nodetool to use the
keyspace's Replication Strategy.
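i.e. something like (the keyspace name is only an example):
nodetool status MyKeyspace
The "Owns (effective)" column is then calculated against that keyspace's replication
settings rather than the raw token ranges.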
> These are taken just before starting shuffle (ran repair/cleanup the day
> before).
> During shuffle disabled all reads/writes to the cluster.
>
> nodetool status keyspace:
>
> Load      Tokens  Owns (effective)  Host ID
> 80.95 GB  256     16.7%             754f9f4c-4ba7-4495-97e7-1f5b6755c
Thanks guys, both are good pointers.
Regards
Chiddu
On Tue, Apr 30, 2013 at 7:09 PM, Brian O'Neill wrote:
>
> You could always do something like this as well:
>
> http://brianoneill.blogspot.com/2012/05/dumping-data-from-cassandra-like.html
>
> -brian
>
> ---
>
> Brian O'Neill
>
> Lead Architect,
We've also had issues with ephemeral drives in a single AZ in us-east-1, so
much so that we no longer use that AZ. Though our issues tended to be obvious
from instance boot - they wouldn't suddenly degrade.
On Apr 28, 2013, at 2:27 PM, Alex Major wrote:
> Hi Mike,
>
> We had issues with the ep
Check the logs for warnings from the GCInspector.
If you see messages that correlate with compaction running, limit compaction to
help stabilise things (the matching cassandra.yaml settings are sketched below)…
* set concurrent_compactors to 2
* if you have wide rows reduce in_memory_compaction_limit
* reduce compaction_throughput
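A sketch of the corresponding cassandra.yaml entries (the values are only illustrative
starting points, not universal recommendations):
concurrent_compactors: 2
in_memory_compaction_limit_in_mb: 32
compaction_throughput_mb_per_sec: 8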
If you have a lot (
> java.lang.RuntimeException: UnavailableException()
Looks like the pig script could talk to one node, but the coordinator could not
process the request at the consistency level requested. Check that all the nodes
are up, that the RF is set to the correct value, and check the CL you are using.
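As a rough check on the numbers (standard Cassandra semantics, not anything specific
to this report): replicas required for QUORUM = floor(RF / 2) + 1, so with RF = 3 the
coordinator needs 2 of the 3 replicas for the row to be alive; if it cannot reach that
many it throws UnavailableException even though the client connection itself succeeds.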
Cheers
Hi,
I have trouble finding quantitative information on what a healthy
Cassandra node should look like (CPU usage, number of flushes, SSTables,
compactions, GC), given a certain hardware spec and read/write load. I have
trouble gauging our first and only Cassandra node, whether it needs
Hello,
I'm trying to bring up a copy of an existing 3-node cluster running 1.0.8
into a 3-node cluster running 1.0.11.
The new cluster has been configured to have the same tokens and the same
partitioner.
Initially, I copied the files in the data directory of each node into their
corresponding no
I use phpcassa.
I did a thread dump. 99% of the threads look very similar (I'm using 1.1.9
in terms of matching source lines). The thread names are all like this:
"WRITE-/10.x.y.z". There are a LOT of duplicates (in terms of the same
IP). Many many many of the threads are trying to talk to IPs
You could always do something like this as well:
http://brianoneill.blogspot.com/2012/05/dumping-data-from-cassandra-like.html
-brian
---
Brian O'Neill
Lead Architect, Software Development
Health Market Science
The Science of Better Results
2700 Horizon Drive King of Prussia, PA 19406
M: 21
Try sstable2json and json2sstable. They work per column family, so you can
iterate over the list of CFs in the keyspace and use the sstable2json tool to
extract the data for each. Remember this will only fetch on-disk data, so
anything in the memtable/cache which has not yet been flushed will be missed. So
run compaction
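For example (paths and names are illustrative; the exact sstable file layout depends
on the Cassandra version):
nodetool flush MyKeyspace MyCF
sstable2json /var/lib/cassandra/data/MyKeyspace/MyCF/MyKeyspace-MyCF-hd-1-Data.db > MyCF.json
json2sstable -K MyKeyspace -c MyCF MyCF.json /tmp/MyKeyspace-MyCF-hd-1-Data.db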
Is there any easy way of exporting all the data for a keyspace (and, conversely,
importing it)?
Regards
Chiddu
Hello.
I have set up a test cluster of 2 DCs with 1 node in each DC. In each config
I specified 256 virtual nodes and chose GossipingPropertyFileSnitch.
For node1:
~/Cassandra$ cat /etc/cassandra/cassandra-rackdc.properties
dc=DC1
rack=RAC1
For node2:
~/Cassandra$ cat /etc/cassandra/cassandra-ra
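With that snitch in place, a keyspace spanning both DCs can then declare its
replication per DC, e.g. (CQL 3; the keyspace name and factors are only an example):
CREATE KEYSPACE test_ks WITH replication =
  {'class': 'NetworkTopologyStrategy', 'DC1': 1, 'DC2': 2};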
Oh, my bad. Thanks mate, that worked.
On Apr 29, 2013 10:03 PM, wrote:
> For starters: If you are using the Murmur3 partitioner, which is the
> default in cassandra.yaml, then you need to calculate the tokens using:
>
> python -c 'print [str(((2**64 / 2) * i) - 2**63) for i in range(2)]'
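The same one-liner generalises to any cluster size by replacing both 2s with the
number of nodes, e.g. for a hypothetical 4-node ring:
python -c 'print [str(((2**64 / 4) * i) - 2**63) for i in range(4)]'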