AFAIK there is nothing that can do that.
It would be possible to add an MBean to show the config as it was loaded. But
some config values can be changed in a running system and not all of these are
reflected back into the central config. So it would not be accurate.
When the file is loaded the
What are the column names you are getting back, and what byte values are you
using in the start and finish of the range?
My guess is it's a serialization issue: try using an IntegerType in Cassandra
and have your client serialise the ticks long for you. If that works, work
back to see what's going on.
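As a sketch of what "serialise the ticks long" could look like, here is a hypothetical round-trip helper (the class and sample value are mine, not from the original thread; this shows the 8-byte big-endian layout that LongType expects — note that IntegerType is actually a variable-length big-endian integer, so check which comparator your column family uses):

```java
import java.nio.ByteBuffer;

public class TickCodec {
    // Encode a ticks value as 8 big-endian bytes (LongType-style layout).
    static byte[] encode(long ticks) {
        return ByteBuffer.allocate(8).putLong(ticks).array();
    }

    // Decode 8 big-endian bytes back into a long.
    static long decode(byte[] raw) {
        return ByteBuffer.wrap(raw).getLong();
    }

    public static void main(String[] args) {
        long ticks = 634634303930000000L; // hypothetical .NET-style ticks value
        byte[] raw = encode(ticks);
        System.out.println(raw.length);           // 8
        System.out.println(decode(raw) == ticks); // true
    }
}
```

If values round-trip here but still sort or match wrongly in Cassandra, the client is probably encoding them differently (e.g. as ASCII strings).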
Your data load is fine.
It sounds like you will run into issues with the data model and functionality
of Cassandra. "Standard Analysis" in the RDBMS sense of throwing any ad-hoc
query at the data and letting the query engine work it out is not possible
without using Hive/Pig or some other query
BytesType sorts values in byte order. That is: "2" (byte 50) is bigger than
"10" (bytes 49, 48). It may or may not be relevant to your problem, depending on
your column names and the inputs.
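To make the byte-order point concrete, here is a small sketch of an unsigned lexicographic comparison (the class and method names are mine, not BytesType's actual source):

```java
import java.nio.charset.StandardCharsets;

public class ByteOrderDemo {
    // Compare two byte arrays as unsigned bytes, lexicographically,
    // the way a byte-order comparator sorts them.
    static int compare(byte[] a, byte[] b) {
        int n = Math.min(a.length, b.length);
        for (int i = 0; i < n; i++) {
            int d = (a[i] & 0xFF) - (b[i] & 0xFF);
            if (d != 0) return d;
        }
        return a.length - b.length;
    }

    public static void main(String[] args) {
        byte[] two = "2".getBytes(StandardCharsets.US_ASCII);  // {50}
        byte[] ten = "10".getBytes(StandardCharsets.US_ASCII); // {49, 48}
        System.out.println(compare(two, ten) > 0); // true: "2" sorts after "10"
    }
}
```

This is why numeric strings stored as raw bytes do not sort numerically.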
James
From: Gustavo Gustavo
To: user@cassandra.apache.org
Sent:
Hi All,
I'm currently running 1.0.7, and just noticed a small error in the comment
section.
# Frame size for thrift (maximum field length).
# 0 disables TFramedTransport in favor of TSocket. This option
# is deprecated; we strongly recommend using Framed mode.
thrift_framed_transport_size_in_mb
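For reference, a corrected comment might read something like this (a sketch only; the value shown is the 1.0-era default, so double-check it against your own cassandra.yaml):

```yaml
# Frame size for thrift (maximum message length).
# Unframed (TSocket) transport was removed in 0.8-beta2, so framed
# mode is now required and 0 no longer disables it.
thrift_framed_transport_size_in_mb: 15
```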
Hi,
I have a 1.0.3 cluster with 4 nodes and replication factor 3.
I have lots of messages like the following in my logs:
INFO [ScheduledTasks:1] 2012-01-25 07:07:18,410 MessagingService.java (line
613) 510 MUTATION messages dropped in last 5000ms
INFO [ScheduledTasks:1] 2012-01-25 07:07:23,420
Not that familiar with CQL in particular, but what timeout is set in
pycassa? It could be too low for your batch size. If your request is timing
out, it will do exponential back-off between retries.
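The retry behaviour described above can be sketched as follows (a generic illustration of exponential back-off, not pycassa's actual implementation; the names and base delay are mine):

```java
public class RetryDemo {
    // Exponential back-off: double the wait after each failed attempt.
    static long backoffMs(int attempt, long baseMs) {
        return baseMs << attempt; // base * 2^attempt
    }

    public static void main(String[] args) {
        for (int attempt = 0; attempt < 4; attempt++) {
            System.out.println("retry " + attempt + ": wait "
                    + backoffMs(attempt, 250) + " ms");
        }
    }
}
```

With a timeout that is too low for the batch size, every attempt times out and the back-off just stretches the total failure time.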
On Jan 25, 2012 2:53 AM, "aaron morton" wrote:
> There are few slight differences in the execution
Hello Community,
I am troubleshooting an issue with sudden and sustained high CPU on nodes in a 3-node
cluster. This takes place when there is minimal load on the servers, and
continues indefinitely until I stop and restart a node. All 3 nodes seem to
be affected by the same issue, however it
Hi there,
I'm currently running a 2-node cluster for some small projects that might
need to scale-up in the future: that's why we chose Cassandra. The actual
problem is that one of the node's harddrive usage keeps growing.
For example:
- after a fresh restart ~ 10GB
- after a couple of days runni
Hello.
What's in the logs? It should output something like "Hey, you've got
most of your memory used. I am going to flush some of the memtables". Sorry,
I don't remember the exact wording, but it's coming from GC, so it should be
greppable by "GC".
On 25.01.12 16:26, Matthew Trinneer wrote:
Hello
Here is a snippet of what I'm getting out of system.log for GC. Does anything
there provide a clue?
WARN [ScheduledTasks:1] 2012-01-22 12:53:42,804 GCInspector.java (line 146)
Heap is 0.7767292149986439 full. You may need to reduce memtable and/or cache
sizes. Cassandra will now flush up to t
According to the log, I don't see much time spent on GC. You can still
check it with jstat, or uncomment GC logging in cassandra-env.sh. Are you
sure you've identified the thread correctly?
It's still possible that you have a memory spike where GCInspector simply
has no chance to run between Full
> Am I missing data here?
Yes, but you can repair it with nodetool.
> Does this mean that my cluster is too loaded?
Yes.
Use nodetool tpstats to see when tasks are backing up, and check IO throughput with
iostat (see http://spyced.blogspot.com/2010/01/linux-performance-basics.html).
Cheers
---
On 01/25/12 16:09, R. Verlangen wrote:
Hi there,
I'm currently running a 2-node cluster for some small projects that
might need to scale-up in the future: that's why we chose Cassandra.
The actual problem is that one of the node's harddrive usage keeps
growing.
For example:
- after a fresh
I also do repair, compact and cleanup every couple of days, and also
have daily restarts on
crontab. It doesn't hurt and I avoid having a node becoming unresponsive
after many days
of operation, that has happened before. Older files get cleaned up on
restart.
It doesn't take long to shut down
Ok, thank you for your feedback. I'll add these tasks to our daily Cassandra
maintenance cronjob. Hopefully this will keep things under control.
2012/1/25 Karl Hiramoto
> On 01/25/12 16:09, R. Verlangen wrote:
>
>> Hi there,
>>
>> I'm currently running a 2-node cluster for some small projects th
On 01/25/12 19:18, R. Verlangen wrote:
Ok, thank you for your feedback. I'll add these tasks to our daily
Cassandra maintenance cronjob. Hopefully this will keep things under
control.
I forgot to mention that we found that forcing a GC also cleans up some
space.
In a cronjob you can do th
Thanks for the reminder. I'm going to start by adding the cleanup & compact
steps to the chain of maintenance tasks. In my opinion Java should determine
by itself when to start a GC: it doesn't feel natural to do this manually.
2012/1/25 Karl Hiramoto
>
> On 01/25/12 19:18, R. Verlangen wrote:
>
>> Ok thank
Karl,
Can you give a little more details on these 2 lines, what do they do?
java -jar cmdline-jmxclient-0.10.3.jar - localhost:8080
java.lang:type=Memory gc
Thank you,
Mike
-----Original Message-----
From: Karl Hiramoto [mailto:k...@hiramoto.org]
Sent: Wednesday, January 25, 2012 12:26 PM
To
In his message he explains that it's for "forcing a GC". GC stands for
garbage collection. For some more background see:
http://en.wikipedia.org/wiki/Garbage_collection_(computer_science)
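The jmxclient command invokes the gc operation on the java.lang:type=Memory MBean. The same operation, called in-process, looks roughly like this (a sketch; the printed heap numbers will vary by JVM and run):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;

public class ForceGc {
    public static void main(String[] args) {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        long before = memory.getHeapMemoryUsage().getUsed();
        // Equivalent of invoking "java.lang:type=Memory gc" over JMX.
        memory.gc();
        long after = memory.getHeapMemoryUsage().getUsed();
        System.out.println("heap used: " + before + " -> " + after + " bytes");
    }
}
```

Over JMX, the command-line client simply does this remotely against the Cassandra process listening on the JMX port.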
Cheers!
2012/1/25
> Karl,
>
> Can you give a little more details on these 2 lines, what do they do?
>
> j
Aaron,
- Version of Cassandra is 1.0.7, python cql is 1.0.7
- I AM shutting down the server and deleting the /var/lib/cassandra
directory, then starting it back up between tests
- nodetool cfstats always looks like this: http://pastebin.com/95EAkZK5 with
only the MutationStage Complet
Yup, unframed transport was removed in 0.8-beta2.
I'll fix the comments, thanks.
Aaron
-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
On 25/01/2012, at 10:27 PM, Jools Enticknap wrote:
>
> Hi All,
>
> I'm currently running 1.0.7, and just notic
You are running into GC issues.
>> WARN [ScheduledTasks:1] 2012-01-22 12:53:42,804 GCInspector.java (line 146)
>> Heap is 0.7767292149986439 full. You may need to reduce memtable and/or
>> cache sizes. Cassandra will now flush up to the two largest memtables to
>> free up memory. Adjust flu
There is something wrong with the way a composite type value was serialized. The
length of a part on disk is not right.
As a workaround, remove the log file, restart, and then repair the node.
How it got like that is another question. What was the schema change?
Cheers
-
Aar
That disk usage pattern is to be expected in pre-1.0 versions. Disk usage is
far less interesting than disk free space: if it's using 60GB and there is
200GB free, that's OK. If it's using 60GB and there is 6MB free, that's a problem.
In pre-1.0, the compacted files are deleted on disk by waiting for the
How stable is 1.0 these days? We're on 0.8.6, and the early 1.0.x versions
made me nervous - too many changes that were fixing regressions in previous
1.0.x versions. But the pace of 1.0.x releases has slowed notably, so I'm
wondering: is it safe now? (And: was I overreacting before, or was I
corre
+ I would also check the GC settings :) and full gc events in the logs.
Regards,
On Wed, Jan 25, 2012 at 9:52 AM, aaron morton wrote:
> Am I missing data here?
>
> Yes, but you can repair it with nodetool.
>
> Does this mean that my cluster is too loaded?
>
> Yes.
>
> http://spyced.
I don't get it. Suppose I have a data model with millions of rows, and
suppose I want to perform some selects and some inserts; is it not feasible to use
Cassandra for those reasons?
--
francesco.tangari@gmail.com
Sent with Sparrow (http://www.sparrowmailapp.com/?sig)
On Wednesday
There are two relevant bugs (that I know of), both resolved in somewhat
recent versions, which make somewhat regular restarts beneficial
https://issues.apache.org/jira/browse/CASSANDRA-2868 (memory leak in
GCInspector, fixed in 0.7.9/0.8.5)
https://issues.apache.org/jira/browse/CASSANDRA-2252 (hea
On 26.1.2012 2:32, David Carlton wrote:
How stable is 1.0 these days?
Good, but Hector 1.0 is unstable.
I want to do ranged row queries for a few of my column families, but
best practice seems to be to use the random partitioner. Splitting my
column families between two clusters (one random, one ordered) seems
like a pretty expensive compromise.
Instead, I'm thinking of using the order-preservin