Hello, you could also probably do it in your application: just sample at
an interval of time and that should give some indication of throughput.
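That interval-sampling idea can be sketched in plain Python; this is a minimal stand-alone sketch, not Cassandra-specific, and the class and method names are invented:

```python
import time

class ThroughputMeter:
    """Counts writes and reports a rate when sampled at intervals."""

    def __init__(self):
        self._count = 0
        self._last_count = 0
        self._last_time = time.time()

    def record_write(self, n=1):
        # Call once per packet (or batch) written to Cassandra.
        self._count += n

    def sample(self):
        # Writes per second since the previous call to sample().
        now = time.time()
        elapsed = now - self._last_time
        delta = self._count - self._last_count
        self._last_count = self._count
        self._last_time = now
        return delta / elapsed if elapsed > 0 else 0.0
```

You would call `sample()` every few seconds from a timer thread and log or graph the result.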
HTH
/Jason
On Thu, Dec 19, 2013 at 12:11 AM, Krishna Chaitanya
wrote:
> Hello,
>
> Could you please suggest to me the best way to measure write-throughput in
> Cassandra.
We are using Cassandra 2.0.3-1 installed on Ubuntu 12.04 from the
DataStax repo with the DataStax Java driver version 2.0.0-rc1. Every
now and then we get the following exception:
2013-12-19 06:56:34,619 [sql-2-t15] ERROR core.RequestHandler -
Unexpected error while querying /x.x.x.x
java.lang.Nu
I'd suggest setting some Cassandra JVM parameters so that you can analyze a
heap dump and peek through the GC logs. That'll give you some clues, e.g.
whether the memory problem is growing steadily or suddenly, and which
objects are using the memory.
-XX:+HeapDumpOnOutOfMemoryError
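For example, in cassandra-env.sh (the paths are placeholders; the flags are standard HotSpot options for Java 6/7):

```shell
# Dump the heap on OOM so it can be opened later in a heap analyzer
JVM_OPTS="$JVM_OPTS -XX:+HeapDumpOnOutOfMemoryError"
JVM_OPTS="$JVM_OPTS -XX:HeapDumpPath=/var/lib/cassandra/java_heapdump.hprof"
# GC logging, to see whether heap use grows steadily or spikes
JVM_OPTS="$JVM_OPTS -Xloggc:/var/log/cassandra/gc.log"
JVM_OPTS="$JVM_OPTS -XX:+PrintGCDetails -XX:+PrintGCDateStamps"
```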
Hi,
We are facing a problem with Cassandra tuning: we hit the following OOM
scenario [1] after running the system for 6 days. We have tuned Cassandra
with the following values, arrived at through a large number of testing
cycles, but it has still gone OOM.
On Wed, Dec 18, 2013 at 1:28 PM, Kumar Ranjan wrote:
> Second approach ( I used in production ):
> - fetch all super columns for a row key
>
Stock response mentioning that super columns are anti-advised for use,
especially in brand new code.
=Rob
I am using pycassa. So, here is how I solved this issue. Will discuss 2
approaches. First approach didn't work out for me. Thanks Aaron for your
attention.
First approach:
- Say if column_count = 10
- collect first 11 rows, sort first 10, send it to user (front end) as JSON
object and last=11th_co
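The fetch-N+1 paging pattern described above can be sketched as follows; this is a minimal stand-in using a plain sorted dict in place of a real pycassa `ColumnFamily`, with invented helper names:

```python
# Sketch of the fetch-N+1 paging pattern used with pycassa-style
# column slices, against an in-memory dict standing in for a row.

def get_slice(columns, column_start="", column_count=10):
    """Stand-in for pycassa's ColumnFamily.get(): returns up to
    column_count columns with names >= column_start, in name order."""
    names = sorted(n for n in columns if n >= column_start)
    return [(n, columns[n]) for n in names[:column_count]]

def fetch_page(columns, start="", page_size=10):
    # Ask for one extra column: its presence tells us another page
    # exists, and its name becomes the start of the next page
    # (column_start is inclusive in pycassa).
    chunk = get_slice(columns, column_start=start, column_count=page_size + 1)
    page = chunk[:page_size]
    next_start = chunk[page_size][0] if len(chunk) > page_size else None
    return page, next_start
```

The caller loops, passing `next_start` back in as `start` until it comes back `None`.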
Hi Rob, thanks for the refresher, and the issue link (fixed today too -
thanks Sylvain!).
Cheers,
Lee
On Wed, Dec 18, 2013 at 10:47 AM, Robert Coli wrote:
> On Wed, Dec 18, 2013 at 9:26 AM, Lee Mighdoll wrote:
>
>> What's the current cassandra 2.0 advice on sizing for wide storage engine
>> rows? Can we drop the added complexity of managing day/hour partitioning
>> for time series stores?
>
Thanks for the pointer Alain.
At a quick glance, it looks like people are looking for query-time
filtering/aggregation, which will suffice for small data sets. Hopefully we
can extend that to perform pre-computations as well (which would support
much larger data sets / volumes).
I'
Thanks Julien. We ran repair. Increasing the RF should not make sstables
obsolete. I can understand that reducing the RF or adding a new node, etc.,
can result in a few obsolete sstables, which eventually go away after you
run cleanup.
On Wed, Dec 18, 2013 at 1:49 AM, Julien Campan wrote:
> Hi,
> When you are in
Thanks Aaron. No tmp files and not even a single exception in the
system.log.
If the file was last modified on 20-Nov then there must be an entry for
that in the log (either completed streaming or compacted).
On Tue, Dec 17, 2013 at 7:23 PM, Aaron Morton wrote:
> -tmp- files will sit in the dat
On Wed, Dec 18, 2013 at 9:26 AM, Lee Mighdoll wrote:
> What's the current cassandra 2.0 advice on sizing for wide storage engine
> rows? Can we drop the added complexity of managing day/hour partitioning
> for time series stores?
>
"A few hundred megs" at very most is generally
recommended. in_
On Wed, Dec 18, 2013 at 2:44 AM, Sylvain Lebresne wrote:
> As Janne said, you could still have hints being written by other nodes if
> the one storage node is dead, but you can set the system
> property cassandra.maxHintTTL to 0 to disable hints.
>
If one uses a Token Aware client with RF=1, that
I think the recommendation once upon a time was to keep wide storage engine
internal rows from growing too large. e.g. for time series, it was
recommended to partition samples by day or by hour to keep the size
manageable.
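The day-bucketed pattern looks roughly like this in CQL (the table and column names here are invented for illustration):

```cql
-- The partition key includes a day bucket so no single storage-engine
-- row grows without bound; samples within a day cluster by timestamp.
CREATE TABLE samples (
    series text,
    day text,          -- e.g. '2013-12-18'
    ts timestamp,
    value double,
    PRIMARY KEY ((series, day), ts)
);
```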
What's the current cassandra 2.0 advice on sizing for wide storage engine
rows? Can we drop the added complexity of managing day/hour partitioning
for time series stores?
Hello,
Could you please suggest to me the best way to measure write-throughput in
Cassandra. I basically have an application that stores network packets to a
Cassandra cluster. Which is the best way to measure write performance,
especially write-throughput, in terms of number of packets stored int
On Wed, Dec 18, 2013 at 8:24 AM, Christopher Wirt wrote:
> What, if any, is the difference between selecting writetime(column) and
> just looking at the Timestamp of a selected column.
There's no difference. The writetime() function is only really necessary
for native protocol drivers, which don
Is there any reason to use the WRITETIME function on non-counter columns?
I'm using CQL statements via the thrift protocol and get a Timestamp
returned with each column.
I'm pretty sure select a, writetime(a) from b where u = 1 is unnecessary for
me. Unless a is a counter.
I guess my re
> -Original Message-
> From: Sylvain Lebresne [mailto:sylv...@datastax.com]
> Sent: 18 December 2013 12:46
> Google up NetworkTopologyStrategy. This is what you want to use and it's
> not configured in cassandra.yaml but when you create the keyspace.
>
> Basically, you define your topology
Hi,
I was wondering if anybody knows any best practices for applying a
schema migration across a cluster.
I've been reading this article:
http://www.datastax.com/dev/blog/the-schema-management-renaissance
to see what is happening under the covers. However the article doesn't
seem to talk abo
this has been fixed:
https://issues.apache.org/jira/browse/CASSANDRA-6496
On Wed, Dec 18, 2013 at 2:51 PM, Desimpel, Ignace <
ignace.desim...@nuance.com> wrote:
> Hi,
> Would it not be possible that in some rare cases these 'small' files are
> created also and thus resulting in the same endless
Hi,
Would it not be possible that in some rare cases these 'small' files are
also created, resulting in the same endless-loop behavior? For example, a
storm on the server makes the memtables flush; when the storm dies down,
wouldn't the compaction then have the same problem?
Regards,
Ignace
--
I did the test again to get the log information.
There is a "Drop keyspace" message at the time I drop the keyspace. That
actually must be working since after the drop, I do not get any records back.
But starting from the time of restart, I do not get any "Drop keyspace" message
in the log.
I
Google up NetworkTopologyStrategy. This is what you want to use and it's
not configured in cassandra.yaml but when you create the keyspace.
Basically, you define your topology in cassandra-topology.properties (where you
basically manually set which node is in which DC, which you can really just
see as a
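A keyspace created with NetworkTopologyStrategy looks something like this (the keyspace name, DC names, and replica counts are placeholders; DC names must match what your snitch reports):

```cql
-- Replication is set per data center at keyspace creation time,
-- not in cassandra.yaml.
CREATE KEYSPACE myks WITH replication = {
    'class': 'NetworkTopologyStrategy',
    'DC1': 3,
    'DC2': 2
};
```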
I figured it out. Another process on that machine was leaking threads.
All is well!
Thanks guys!
Oleg
On 2013-12-16 13:48:39 +, Maciej Miklas said:
cassandra-env.sh has the option
JVM_OPTS="$JVM_OPTS -Xss180k"
It will give this error if you start Cassandra with Java 7. So increase the
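For example (the exact value is a guess; Java 7 requires a larger minimum thread stack than 180k):

```shell
# In cassandra-env.sh: raise the per-thread stack size for Java 7
JVM_OPTS="$JVM_OPTS -Xss256k"
```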
> -Original Message-
> From: Sylvain Lebresne [mailto:sylv...@datastax.com]
> Sent: 18 December 2013 10:45
>
> You seem to be well aware that you're not looking at using Cassandra for
> what it is designed for (which obviously implies you'll need to expect
> sub-optimal behavior), so I'm
On 17 December 2013 19:47, Robert Coli wrote:
> I would comment to that effect on CASSANDRA-6210, were I you.
Will do.
> Are you using vnodes? Have you tried a rolling restart of all nodes?
Yes, we're using vnodes, and all nodes have been restarted since this
problem started happening.
--
Rus
You seem to be well aware that you're not looking at using Cassandra for
what it is designed for (which obviously implies you'll need to expect
sub-optimal behavior), so I'm not going to insist on it.
As to how you could achieve that, a relatively simple solution (that does
not require writing your
> -Original Message-
> From: Janne Jalkanen [mailto:janne.jalka...@ecyrd.com]
>
> Essentially you want to turn off all the features which make Cassandra a
> robust product ;-).
Oh, I don't want to, but sadly those are the requirements that I have to work
with.
Again, the context is usin
This may be hard because the coordinator could store hinted handoff (HH) data
on disk. You could turn HH off and have RF=1 to keep data on a single instance,
but you would be likely to lose data if you had any problems with your
instances… Also you would need to tweak the memtable flushing so t
Hi, this would indeed be much appreciated by a lot of people.
There is an existing issue about this subject:
https://issues.apache.org/jira/browse/CASSANDRA-4914
Maybe you could help the committers there.
Hope this will be useful to you.
Please let us know when you find a way to do these operat
Hi,
When you are increasing the RF, you need to perform a repair for the
keyspace on each node (because data is not automatically streamed).
After that, you should perform a cleanup on each node to remove obsolete
sstables.
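As a command sketch (the keyspace name is a placeholder; run on every node in turn):

```shell
# After raising RF: stream the missing replicas onto each node
nodetool repair mykeyspace
# Then remove data each node no longer needs
nodetool cleanup mykeyspace
```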
Good luck :)
Julien Campan.
2013/12/18 Aaron Morton
> -tmp- file
Ahoy the list. I am evaluating Cassandra in the context of using it as a
storage back end for the Titan graph database.
We’ll have several nodes in the cluster. However, one of our requirements is
that data has to be loaded into and stored on a specific node and only on that
node. Also, it c