About repairs,
we encountered a similar problem with our setup where repairs would take
ages to complete. Depending on your setup, you can try loading the data into
the page cache before running repairs. Depending on how much data you can hold
in cache, this will speed up your repairs massively.
-- artur
If you have a test environment which is identical or almost identical to
your production, doing the upgrade in the test environment would give more
confidence. If it is *production*, I would not even want to keep my fingers
crossed or hope in any way that it would work for this major upgrade from
1.0.x to 2.0.x. A saf
SSTable count: 365
Your SSTable count is too high... I don't know what the best count
should be, but in my experience anything below 20 is good. Is your
compaction running?
I have read a few blog posts on how we should read cfhistograms, but never
really understood it fully. Anyone care to explain usi
NoClassDefFoundError: org/apache/cassandra/service/CassandraDaemon
This states it very clearly: the class is not found on the classpath. Very
obviously you are not using the Cassandra package from the distribution, so
you need to find which jar contains this class and check in your
classpath if this
Hello,
We've been having problems with long GC pauses and can't seem to get rid of
them.
Our latest test is on a clean machine with Ubuntu 12.04 LTS, Java 1.7.0_45
and JNA installed.
It is a single-node cluster with mostly default settings; the only
things changed are the IP addresses, cluster na
The repo:
https://github.com/edwardcapriolo/farsandra
The code:
Farsandra fs = new Farsandra();
fs.withVersion("2.0.4");
fs.withCleanInstanceOnStart(true);
fs.withInstanceName("1");
fs.withCreateConfigurationFiles(true);
fs.withHost("localhost");
fs.withSeeds(Arrays.asLi
Hello,
You can implement relations in a couple of ways: JSON/XML or CQL collection
classes.
Thanks
Chandra
On Tue, Jan 21, 2014 at 8:58 PM, Les Hartzman wrote:
> True. Fortunately though in this application, the data is
> write-once/read-many. So that is one bullet I would dodge!
>
> Les
>
>
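To illustrate the CQL-collection option Chandra mentions above, here is a
minimal, hedged sketch; the table and column names are invented for the
example and are not from this thread:

CREATE TABLE sensor_readings (
    sensor_id  uuid PRIMARY KEY,
    attributes map<text, text>,   -- small, bounded relation data
    tags       set<text>
);

-- add to the relation without reading it back first
UPDATE sensor_readings SET tags = tags + {'outdoor'}
WHERE sensor_id = 62c36092-82a1-3a00-93d1-46196ee77204;

Collections work well when the related set is small and bounded; for large or
unbounded relations a separate table keyed by the parent id is usually a
better fit.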
Looks promising, thanks for the effort! :-)
I suggest you add this description and a simple "getting started"
section with the above example to the project README, making it easier for
others to use your project.
Cheers,
2014/1/22 Edward Capriolo
> The repo:
> https://github.com/edwardcapri
Hello,
I have a little question: I used to use the OpsCenter tarball distribution
without any licence, and it works.
But when I try to install OpsCenter Enterprise Edition to take advantage
of the backup and restore functions, it doesn't work. The installation is OK,
but when I try to reach the homepage O
I don't recommend PrintFLSStatistics=1; it makes the GC logs hard to
parse mechanically. Because of that, I can't easily tell whether you're in
the same situation we found. But just in case, try setting
+CMSClassUnloadingEnabled. There's an issue related to JMX in DSE that
prevents effective old
Hi!
I'm a little worried about the data model I have come up with for handling
highscores.
I have a lot of users. Each user has a number of friends. I need a
highscore list per friend list.
I would like to have it optimized for reading the highscores as opposed to
setting a new highscore as the u
Read the user's score, increment it, update the friends list, update the user
with the new high score.
Would that work?
--
Colin
+1 320 221 9531
> On Jan 22, 2014, at 11:44 AM, Kasper Middelboe Petersen
> wrote:
>
> Hi!
>
> I'm a little worried about the data model I have come up with for handling
> highsc
Hi All,
I am using Apache Cassandra 2.0.4 with cassandra-jdbc-1.2.5.jar. I am
trying to run a sample Java program and I am getting the error below. Please
tell me whether I am using the right JDBC driver, or suggest a supported JDBC driver.
log4j:WARN No appenders could be found for logger
(or
Is the jar on the path? Is the Cassandra home set correctly?
Looks like Cassandra can't find the jar - verify its existence by searching.
--
Colin
+1 320 221 9531
On Jan 22, 2014, at 11:50 AM, Chiranjeevi Ravilla
wrote:
Hi All,
I am using Apache Cassandra 2.0.4 version with cassandra-jdbc-1.2.5.jar
On Wed, Jan 22, 2014 at 06:44:20PM +0100, Kasper Middelboe Petersen wrote:
>I'm a little worried about the data model I have come up with for handling
>highscores.
>I have a lot of users. Each user has a number of friends. I need a
>highscore list pr friend list.
>I would like t
I can think of two cases where something bad would happen here:
1. Something bad happens after the increment but before some or all of the
friend-list updates have finished.
2. Someone submits two scores at the same time, creating a race condition
where one of them could have a score that is not y
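One possible, hedged mitigation for the second case (not discussed in this
thread) is Cassandra 2.0's lightweight transactions: read the current high
score, then apply a conditional update so a concurrent writer cannot silently
overwrite a higher value. The table and column names below are hypothetical:

-- read the current value first
SELECT high_score FROM user_scores WHERE user_id = 42;

-- then compare-and-set against the value just read (say it was 1200);
-- if another writer got in between, [applied] comes back false and we retry
UPDATE user_scores SET high_score = 1500
WHERE user_id = 42
IF high_score = 1200;

Conditional updates cost extra round trips (Paxos), so they are probably worth
reserving for the user's own row while keeping the fan-out writes to friends'
lists as plain updates.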
It is a tricky type of problem because some ways of doing it involve
iterative scans.
This presentation discusses a solution for top-k:
http://www.slideshare.net/planetcassandra/jonathan-halliday
On Wed, Jan 22, 2014 at 12:48 PM, Colin wrote:
> Read users score, increment, update friends list,
How many users and how many games?
--
Colin
+1 320 221 9531
On Jan 22, 2014, at 10:59 AM, Kasper Middelboe Petersen <
kas...@sybogames.com> wrote:
I can think of two cases where something bad would happen in this case:
1. Something bad happens after the increment but before some or all of the
Yes, friendship is symmetrical.
This could work for my problem right now, but I'm afraid it would just be
postponing the problem slightly until something like big tournaments (which
are coming) raises the same problem again.
On Wed, Jan 22, 2014 at 6:58 PM, Jon Ribbens <
jon-cassan...@unequivocal
Many millions of users. Just the one game; I might have some different scores
I need to keep track of, but I very much hope to be able to use the same
approach for those as for the high score mentioned here.
On Wed, Jan 22, 2014 at 7:08 PM, Colin Clark wrote:
> How many users and how many games?
>
One way might be to use the userid as the row key, and then put all of the
friends with their scores on the same row. You could even make the column name
something like score:username or score:id.
This way the columns come back sorted when reading the high scores for
the group.
To update set that uses s
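A hedged CQL sketch of that layout, with the score first in the clustering key
so each friend list comes back sorted on read; the names are illustrative, not
from the thread:

CREATE TABLE friend_highscores (
    user_id   uuid,     -- owner of this friend list (the "row key" above)
    score     int,
    friend_id uuid,     -- breaks ties between equal scores
    name      text,
    PRIMARY KEY (user_id, score, friend_id)
) WITH CLUSTERING ORDER BY (score DESC, friend_id ASC);

-- top 10 scores for one user's friend list, already sorted on read
SELECT score, name FROM friend_highscores
WHERE user_id = 62c36092-82a1-3a00-93d1-46196ee77204
LIMIT 10;

The trade-off is that score is part of the primary key, so changing a friend's
score means deleting the old row and inserting a new one.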
Hmm. Hadn't thought about using a collection. Might be able to get away
with a map. Have to find out more about the origins of these relationships.
I don't think XML gives any advantage over JSON, but it is another
possibility.
Les
On Wed, Jan 22, 2014 at 7:43 AM, chandra Varahala <
hadoopandca
I have a table with a bunch of records that have 10,000 keys per partition
key (not sure if that's the right terminology). Here's the schema:
CREATE TABLE bdn_index_pub (
tshard VARCHAR,
pord INT,
ord INT,
hpath VARCHAR,
page BIGINT,
PRIMARY KEY (tshard, pord)
) WITH gc_grace_seconds = 0;
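In CQL terms, tshard is the partition key and pord is the clustering column,
so the "10,000 keys" are 10,000 clustering rows within a single partition. A
hedged example of reading a slice of one partition (the shard value is made
up):

SELECT pord, ord, hpath, page
FROM bdn_index_pub
WHERE tshard = 'shard-001'
  AND pord >= 0 AND pord < 1000;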
Did you put these jars in the classpath?
cassandra-all-1.x.x.jar
guava
jackson-core-asl
jackson-mapper-asl
libthrift
snappy
slf4j-api
metrics-core
netty
thanks
Chandra
On Wed, Jan 22, 2014 at 12:52 PM, Colin Clark wrote:
> Is the jar on the path? Is cassandra home set correctly?
>
> Looks li
I thought PrintFLSStatistics was necessary for determining heap
fragmentation? Or is it possible to see that without it as well?
Perm-Gen stays steady, but I'll enable it anyway to see if it has any effect.
Thanks,
John
On Wed, Jan 22, 2014 at 8:34 AM, Lee Mighdoll wrote:
> I don't recommend P
LCS does create a lot of SSTables unfortunately. The nodes are keeping
up on compactions though.
This started after starting to read from a CF that has tombstones in its rows.
What's even more concerning is that it's continuing even after stopping
reads and dropping that CF.
On Wed, Jan 22, 2014 at
Hi,
Can you share the GC logs for the systems you are running problems into?
Yogi
On Wed, Jan 22, 2014 at 6:50 AM, Joel Samuelsson
wrote:
> Hello,
>
> We've been having problems with long GC pauses and can't seem to get rid
> of them.
>
> Our latest test is on a clean machine with Ubuntu 12.04
I think you are building from source. Were there any build failures
before you started the test?
Can you rebuild and try again? Please provide the command line you are
using as well.
On Wed, Jan 22, 2014 at 3:06 AM, Jason Wee wrote:
> NoClassDefFoundError: org/apache/cassandra/service/C
I was wondering how important it is, for a three-node cluster, to have a node
whose token starts at zero?
3 NODES
---
0 <-- Node 1
56713727820156410577229101238628035242 <-- Node 2
113427455640312821154458202477256070484
No. There is nothing special about the value 0.
On Wed, Jan 22, 2014 at 1:30 PM, Daniel Curry wrote:
> I was wondering how important to have a cluster that has a node with a
> token that begin with a zero for a three node cluster?
>
>
>
> 3 NODES
> ---
>0 <-- No
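For reference, the tokens above are simply the RandomPartitioner range split
three ways, with node 1 starting at 0 by convention rather than necessity:

token_i = i * floor(2^127 / 3)   for i = 0, 1, 2
        = 0,
          56713727820156410577229101238628035242,
          113427455640312821154458202477256070484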
On Wed, Jan 22, 2014 at 11:35 AM, John Watson wrote:
> I thought PrintFLSStatistics was necessary for determining heap
> fragmentation? Or is it possible to see that without it as well?
>
I've found that easier parsing is more important than tracking indicators
of fragmentation.
Perm-Gen stays
Trying to find out why a Cassandra read is taking so long, I used tracing and
limited the number of rows. Strangely, when I query 600 rows, I get results in
~50 milliseconds, but 610 rows takes nearly 1 second!
cqlsh> select containerdefinitionid from containerdefinition limit 600;
... lots of o
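For anyone reproducing this, the cqlsh session would look roughly like the
following (a sketch; only the table and column names come from the message
above):

TRACING ON;
SELECT containerdefinitionid FROM containerdefinition LIMIT 610;
-- the trace output that follows lists per-step latencies, which can help
-- show where the extra time for the larger limit is going
TRACING OFF;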
Hi,
On C* 2.0.0. 3 Node cluster.
I have a list column, daycount, which stores counts.
Every few seconds a new count is appended. The total count for the day is the
sum of all items in the list.
My application logs indicate I wrote about 11 items to the column for a
particular row. As
Yes, I've experienced this as well. It looks like you're getting the number
of items inserted mod 64K.
From: Manoj Khangaonkar
Reply-To:
Date: Wednesday, January 22, 2014 at 7:17 PM
To:
Subject: Any Limits on number of items in a collection column type
Hi,
On C* 2.0.0. 3 Node cluster.
I
Nice work, Ed. Personally, I do find it more productive to write
system tests in Python (dtest builds on ccm to provide a number of
utilities that cut down on the boilerplate [1]), but I can understand
that others will feel differently and more testing can only improve
Cassandra.
Thanks!
[1] htt
I didn't read your question properly. Collections are limited to 64K items,
not 64K bytes per item.
From: Manoj Khangaonkar
Reply-To:
Date: Wednesday, January 22, 2014 at 7:17 PM
To:
Subject: Any Limits on number of items in a collection column type
Hi,
On C* 2.0.0. 3 Node cluster.
I ha
Thanks. I guess I can work around this by maintaining hour_counts (which will
have fewer items) and adding the hour counts together to get the day counts.
regards
On Wed, Jan 22, 2014 at 7:15 PM, Robert Wille wrote:
> I didn’t read your question properly. Collections are limited to 64K
> items, not 64K bytes pe
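A hedged CQL sketch of that hour-bucket workaround (all names are invented
for the example):

CREATE TABLE hour_counts (
    row_key text,
    day     text,       -- e.g. '2014-01-22'
    hour    int,        -- 0-23, so each list stays far below the 64K item cap
    counts  list<int>,
    PRIMARY KEY (row_key, day, hour)
);

-- append a count to the current hour's bucket
UPDATE hour_counts SET counts = counts + [5]
WHERE row_key = 'meter-1' AND day = '2014-01-22' AND hour = 13;

-- the day total is summed client-side over at most 24 hour buckets
SELECT hour, counts FROM hour_counts
WHERE row_key = 'meter-1' AND day = '2014-01-22';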
Hi all,
I downloaded version 1.2.13 and ran ./cqlsh inside the bin folder, but it says
"bash: ./cqlsh: Permission denied"; when I ran it with sudo it says
"Command not found".
After I ran chmod u+x cqlsh and tried ./cqlsh again, it now says "Can't
locate transport factory function
cqlshlib.tfa
Right,
This does not have to be thought of as a replacement for ccm or dtest.
The particular problems I tend to have are:
When trying to use the Hive and Cassandra storage handler, Cassandra and Hive
had incompatible versions of antlr. Short of rebuilding one or both, it cannot
be resolved.
I have
I just meant it cannot find the required library; why don't you install the
Cassandra package for your distribution?
http://rpm.datastax.com/community/noarch/cassandra12-1.2.13-1.noarch.rpm
http://www.datastax.com/documentation/cassandra/1.2/webhelp/index.html#cassandra/install/installRHEL_t.html?pagename
Here is one example. 12GB data, no load besides OpsCenter and perhaps 1-2
requests per minute.
INFO [ScheduledTasks:1] 2013-12-29 01:03:25,381 GCInspector.java (line 119)
GC for ParNew: 426400 ms for 1 collections, 2253360864 used; max is
4114612224
2014/1/22 Yogi Nerella
> Hi,
>
> Can you shar
Hello list,
I was wondering if anyone has any pointers or advice regarding using the row
cache vs leaving it up to the OS buffer cache.
I run Cassandra 1.1 and 1.2 with JNA, so the off-heap row cache is an option.
Any input appreciated.
Katriel