Re: Cassandra 2.0 - AssertionError in ArrayBackedSortedColumns

2013-07-22 Thread Sylvain Lebresne
This is a bug really: https://issues.apache.org/jira/browse/CASSANDRA-5786.

This should get fixed in the next beta of 2.0, but if you really want to
test CAS updates in the meantime, you'll have to provide the columns in
(column family comparator) sorted order to the thrift cas() method.

--
Sylvain
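
A minimal sketch of that workaround (not from the thread itself), assuming a BytesType or UTF8Type column family comparator, which orders column names as unsigned bytes, and the Thrift-generated org.apache.cassandra.thrift.Column class; the helper class and method names below are made up for illustration:

    import java.nio.ByteBuffer;
    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.Comparator;
    import java.util.List;
    import org.apache.cassandra.thrift.Column;

    public class CasHelper {
        // Orders column names as unsigned bytes, which matches BytesType and,
        // for valid UTF-8 names, UTF8Type; other comparators need their own ordering.
        static final Comparator<Column> BY_NAME = new Comparator<Column>() {
            public int compare(Column a, Column b) {
                return compareUnsigned(a.name.duplicate(), b.name.duplicate());
            }
        };

        static int compareUnsigned(ByteBuffer a, ByteBuffer b) {
            while (a.hasRemaining() && b.hasRemaining()) {
                int cmp = (a.get() & 0xff) - (b.get() & 0xff);
                if (cmp != 0)
                    return cmp;
            }
            return a.remaining() - b.remaining();
        }

        // Sort a copy of the updates (and the expected columns) before passing
        // them to the thrift cas() call, so they arrive in comparator order.
        static List<Column> sortedForCas(List<Column> columns) {
            List<Column> copy = new ArrayList<Column>(columns);
            Collections.sort(copy, BY_NAME);
            return copy;
        }
    }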


On Sun, Jul 21, 2013 at 1:22 PM, Soumava Ghosh wrote:

> Hi,
>
> I'm taking a look at the Check and Set functionality provided by the
> cas() API in Cassandra 2.0 (the code available on git). I'm
> running a few tests on a small cluster (replication factor 3,
> consistency level quorum) with a few clients. I've observed a lot of cases
> that seem to hit TimedOutException while Paxos is in progress. On inspecting
> the server-side logs, the following stack trace was seen:
>
> ERROR [Thread-582] 2013-07-21 01:17:46,169 CassandraDaemon.java (line 196)
> Exception in thread Thread[Thread-582,5,main]
> java.lang.AssertionError: Added column does not sort as the last column
> at
> org.apache.cassandra.db.ArrayBackedSortedColumns.addColumn(ArrayBackedSortedColumns.java:115)
> at
> org.apache.cassandra.db.ColumnFamily.addColumn(ColumnFamily.java:117)
> at
> org.apache.cassandra.db.ColumnFamilySerializer.deserialize(ColumnFamilySerializer.java:119)
> at
> org.apache.cassandra.db.ColumnFamilySerializer.deserialize(ColumnFamilySerializer.java:96)
> at
> org.apache.cassandra.db.ColumnFamilySerializer.deserialize(ColumnFamilySerializer.java:91)
> at
> org.apache.cassandra.service.paxos.Commit$CommitSerializer.deserialize(Commit.java:139)
> at
> org.apache.cassandra.service.paxos.Commit$CommitSerializer.deserialize(Commit.java:128)
> at org.apache.cassandra.net.MessageIn.read(MessageIn.java:99)
> at
> org.apache.cassandra.net.IncomingTcpConnection.receiveMessage(IncomingTcpConnection.java:175)
> at
> org.apache.cassandra.net.IncomingTcpConnection.handleModernVersion(IncomingTcpConnection.java:135)
> at
> org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:82)
>
> I was updating multiple columns using the same cas() call and, looking at
> the assert (code below), I added code to sort them before sending.
> However, now I'm seeing the same issue *even with single column* cas()
> calls.
>
> int c = internalComparator().compare(columns.get(getColumnCount() - 1).name(), column.name());
> // note that we want an assertion here (see addColumn javadoc), but we also want that if
> // assertion are disabled, addColumn works correctly with unsorted input
> assert c <= 0 : "Added column does not sort as the " + (reversed ? "first" : "last") + " column";
>
> I have also seen this error occur in the getRow() path, as below:
>
> java.lang.AssertionError: Added column does not sort as the last column
> at
> org.apache.cassandra.db.ArrayBackedSortedColumns.addColumn(ArrayBackedSortedColumns.java:115)
> at
> org.apache.cassandra.db.ColumnFamily.addColumn(ColumnFamily.java:117)
> at
> org.apache.cassandra.db.ColumnFamily.addAtom(ColumnFamily.java:151)
> at
> org.apache.cassandra.db.CollationController.collectTimeOrderedData(CollationController.java:95)
> at
> org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:57)
> at
> org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1452)
> at
> org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1281)
> at
> org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1195)
> at org.apache.cassandra.db.Table.getRow(Table.java:331)
> at
> org.apache.cassandra.db.SliceByNamesReadCommand.getRow(SliceByNamesReadCommand.java:55)
> at
> org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1288)
> at
> org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1813)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:722)
>
> Could someone provide me a little more clarity about what the context of
> the error is and what the client needs to do to fix it? I understand that
> running with assertions disabled would do away with this error, but I'd
> like to know if there is a proper way to fix this. Moreover, what could
> possibly be going wrong in the get path that is hitting this issue?
>
> Some guidance on whether the client is expected to sort the columns
> before calling the API with multiple columns would be really
> appreciated.
>
> Thanks,
> Soumava
>


Re: Cassandra 2.0 - AssertionError in ArrayBackedSortedColumns

2013-07-22 Thread Soumava Ghosh
Thanks for the reply, Sylvain! A couple of follow-up questions:

i. The second stack trace in my mail originated at a getRow() call. What could be
the cause of that? I am assuming data retrieval should not cause any
issues unless the data was stored in incorrect order, and I don't know if that
is possible through the API.

ii. I am also seeing some sporadic cases where a single-column cas() update
hits this same issue (the bug you mentioned) just like the multi-column case.
Do you see any reason for that happening? I have yet to investigate this, though.

Thank you,
Soumava


On Mon, Jul 22, 2013 at 12:11 AM, Sylvain Lebresne wrote:

> This is a bug really: https://issues.apache.org/jira/browse/CASSANDRA-5786
> .
>
> This should get fixed in the next beta of 2.0, but if you really want to
> test CAS updates in the meantime, you'll have to provide the columns in
> (column family comparator) sorted order to the thrift cas() method.
>
> --
> Sylvain
>
>

Re: Cassandra 2.0 - AssertionError in ArrayBackedSortedColumns

2013-07-22 Thread Sylvain Lebresne
>  unless data was stored in incorrect order and I don't know if that is
> possible through the API..
>

That's exactly what happens and that's why I say it's a bug.


>
> ii. I am also seeing some sporadic cases where a single column cas()
> update is hitting this same issue (the bug you mentioned) as multiple
> columns, do you see any reason for that happening?
>

That would be slightly weirder, but frankly that's almost surely due to
https://issues.apache.org/jira/browse/CASSANDRA-5786 too.

--
Sylvain


> Thank you,
> Soumava
>
>

Cassandra 2 vs Java 1.6

2013-07-22 Thread Andrew Cobley
I know it was decided to drop support for Java 1.6 in Cassandra some time ago,
but my question is: should 2.0.0-beta1 run under Java 1.6 at all? I tried and
got the following error:


macaroon:bin administrator$ Exception in thread "main" 
java.lang.UnsupportedClassVersionError: 
org/apache/cassandra/service/CassandraDaemon : Unsupported major.minor version 
51.0
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClassCond(ClassLoader.java:631)
at java.lang.ClassLoader.defineClass(ClassLoader.java:615)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:141)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:283)
at java.net.URLClassLoader.access$000(URLClassLoader.java:58)
at java.net.URLClassLoader$1.run(URLClassLoader.java:197)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:247)

macaroon:bin administrator$ java -version
java version "1.6.0_43"
Java(TM) SE Runtime Environment (build 1.6.0_43-b01-447-10M4203)
Java HotSpot(TM) 64-Bit Server VM (build 20.14-b01-447, mixed mode)
macaroon:bin administrator$

It's fine by me if thats the case !

Andy


The University of Dundee is a registered Scottish Charity, No: SC015096


Re: Cassandra 2 vs Java 1.6

2013-07-22 Thread Michał Michalski
I believe it won't run on 1.6. Java 1.7 is required to compile C* 2.0+,
and once it's compiled you cannot run it using Java 1.6 (this is what the
"Unsupported major.minor version" error tells you; class-file major version
50 is Java 1.6 and 51 is Java 1.7).
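
To see where that number comes from, here is a small standalone sketch (not from this thread) that reads the version stamp at the start of any .class file; a Java 6 JVM refuses to load anything whose major version is above 50:

    import java.io.DataInputStream;
    import java.io.FileInputStream;
    import java.io.IOException;

    public class ClassVersion {
        public static void main(String[] args) throws IOException {
            // Pass the path to a .class file, e.g. one from Cassandra's build output.
            try (DataInputStream in = new DataInputStream(new FileInputStream(args[0]))) {
                int magic = in.readInt();           // always 0xCAFEBABE
                int minor = in.readUnsignedShort();
                int major = in.readUnsignedShort(); // 50 = Java 6, 51 = Java 7
                System.out.printf("magic=%08x major.minor=%d.%d%n", magic, major, minor);
            }
        }
    }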


M.

On 22.07.2013 10:06, Andrew Cobley wrote:

I know it was decided to drop the requirement for Java 1.6 for cassandra some time 
ago, but my question is should 
2.0.0-beta1
 run under java 1.6 at all ?  I tried and got the following error:


macaroon:bin administrator$ Exception in thread "main" 
java.lang.UnsupportedClassVersionError: org/apache/cassandra/service/CassandraDaemon : 
Unsupported major.minor version 51.0
 at java.lang.ClassLoader.defineClass1(Native Method)
 at java.lang.ClassLoader.defineClassCond(ClassLoader.java:631)
 at java.lang.ClassLoader.defineClass(ClassLoader.java:615)
 at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:141)
 at java.net.URLClassLoader.defineClass(URLClassLoader.java:283)
 at java.net.URLClassLoader.access$000(URLClassLoader.java:58)
 at java.net.URLClassLoader$1.run(URLClassLoader.java:197)
 at java.security.AccessController.doPrivileged(Native Method)
 at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
 at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:247)

macaroon:bin administrator$ java -version
java version "1.6.0_43"
Java(TM) SE Runtime Environment (build 1.6.0_43-b01-447-10M4203)
Java HotSpot(TM) 64-Bit Server VM (build 20.14-b01-447, mixed mode)
macaroon:bin administrator$

It's fine by me if thats the case !

Andy


The University of Dundee is a registered Scottish Charity, No: SC015096





Re: Cassandra 2.0 - AssertionError in ArrayBackedSortedColumns

2013-07-22 Thread Sylvain Lebresne
Actually, your second stack trace is due to
https://issues.apache.org/jira/browse/CASSANDRA-5788.


On Mon, Jul 22, 2013 at 9:37 AM, Sylvain Lebresne wrote:

>
>  unless data was stored in incorrect order and I don't know if that is
>> possible through the API..
>>
>
> That's exactly what happens and that's why I say it's a bug.
>
>
>>
>> ii. I am also seeing some sporadic cases where a single column cas()
>> update is hitting this same issue (the bug you mentioned) as multiple
>> columns, do you see any reason for that happening?
>>
>
> That would be slightly weirder, but frankly that's almost surely due to
> https://issues.apache.org/jira/browse/CASSANDRA-5786 too.
>
> --
> Sylvain
>
>
>> Thank you,
>> Soumava
>>
>>

RE: Cassandra 2 vs Java 1.6

2013-07-22 Thread Andrew Cobley
Actually, it looks like it may not compile under 1.6 either.   build.xml has 
the following:





BTW this came about because I'm setting up a little test cluster on three spare Mac
computers in our labs. They haven't been updated for some time and are running
Mac OS 10.6.8. Trying to install JDK 7u25 is a no-go; it's only supported on 10.7.3
and later.

It's not a major issue, just a warning that older Mac users may find this a
problem.

Andy



From: Michał Michalski [mich...@opera.com]
Sent: 22 July 2013 09:14
To: user@cassandra.apache.org
Subject: Re: Cassandra 2 vs Java 1.6

I believe it won't run on 1.6. Java 1.7 is required to compile C* 2.0+
and once it's done, you cannot run it using Java 1.6 (this is what
"Unsupported major.minor version" error tells you about; java version 50
is 1.6 and 51 is 1.7).

M.

On 22.07.2013 10:06, Andrew Cobley wrote:
> I know it was decided to drop the requirement for Java 1.6 for cassandra some 
> time ago, but my question is should 
> 2.0.0-beta1
>  run under java 1.6 at all ?  I tried and got the following error:
>
>
> macaroon:bin administrator$ Exception in thread "main" 
> java.lang.UnsupportedClassVersionError: 
> org/apache/cassandra/service/CassandraDaemon : Unsupported major.minor 
> version 51.0
>  at java.lang.ClassLoader.defineClass1(Native Method)
>  at java.lang.ClassLoader.defineClassCond(ClassLoader.java:631)
>  at java.lang.ClassLoader.defineClass(ClassLoader.java:615)
>  at 
> java.security.SecureClassLoader.defineClass(SecureClassLoader.java:141)
>  at java.net.URLClassLoader.defineClass(URLClassLoader.java:283)
>  at java.net.URLClassLoader.access$000(URLClassLoader.java:58)
>  at java.net.URLClassLoader$1.run(URLClassLoader.java:197)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
>  at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
>  at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
>  at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
>
> macaroon:bin administrator$ java -version
> java version "1.6.0_43"
> Java(TM) SE Runtime Environment (build 1.6.0_43-b01-447-10M4203)
> Java HotSpot(TM) 64-Bit Server VM (build 20.14-b01-447, mixed mode)
> macaroon:bin administrator$
>
> It's fine by me if thats the case !
>
> Andy
>
>
> The University of Dundee is a registered Scottish Charity, No: SC015096
>



The University of Dundee is a registered Scottish Charity, No: SC015096



Re: Cassandra 2.0 - AssertionError in ArrayBackedSortedColumns

2013-07-22 Thread Soumava Ghosh
Thanks for the confirmation!

It looks like the code is capable of handling the data in unsorted order
with assertions disabled, so for the time being I will disable assertions
for my tests until the fixes are available.

Thanks,
Soumava
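
For context (a sketch added here, not part of the thread): Java assert statements only run when the JVM is started with -ea, and, if I recall correctly, Cassandra's conf/cassandra-env.sh adds -ea to JVM_OPTS by default, so that is where it would be removed. A tiny illustration of the mechanism:

    public class AssertDemo {
        public static void main(String[] args) {
            // Throws AssertionError only when run as: java -ea AssertDemo
            assert false : "assertions are enabled in this JVM";
            System.out.println("assertions are disabled in this JVM");
        }
    }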


On Mon, Jul 22, 2013 at 1:33 AM, Sylvain Lebresne wrote:

> Actually, your second stack trace is due to
> https://issues.apache.org/jira/browse/CASSANDRA-5788.
>
>

Strange cassandra-stress results with 2.0.0 beta1

2013-07-22 Thread Andrew Cobley
I've been noticing some strange cassandra-stress results with 2.0.0 beta1.
I've set up a single node on a Mac (4 GB RAM, 2.8 GHz Core 2 Duo) and installed
2.0.0 beta1.

When I run ./cassandra-stress -d 134.36.36.218 I'm seeing the interval_op_rate
drop from a peak of 11562 at the start to 0-20 after about 358 seconds.
Here's the output:

./cassandra-stress  -d 134.36.36.218
Created keyspaces. Sleeping 1s for propagation.
total,interval_op_rate,interval_key_rate,latency,95th,99th,elapsed_time
13629,1362,1362,15.3,117.6,531.1,10
37545,2391,2391,9.4,74.0,438.4,20
105410,6786,6786,3.3,38.0,216.4,30
209014,10360,10360,2.7,26.2,216.5,41
320024,11101,11101,2.5,14.5,216.5,51
430216,11019,11019,2.4,10.5,215.4,61
531905,10168,10168,2.4,8.3,177.1,72
619672,8776,8776,2.4,6.2,145.8,82
653977,3430,3430,2.4,5.9,145.4,92
682648,2867,2867,2.4,6.1,145.4,102
692771,1012,1012,2.4,6.1,145.4,113
712163,1939,1939,2.4,6.0,4461.7,123
723761,1159,1159,2.4,6.0,4461.7,133
731115,735,735,2.4,6.1,4461.7,143
737627,651,651,2.4,6.2,4461.7,153
743853,622,622,2.4,7.0,5056.1,164
747345,349,349,2.4,7.0,5056.1,174
755501,815,815,2.5,6.9,6351.7,184
758692,319,319,2.5,6.9,5056.1,195
763960,526,526,2.5,7.3,5292.6,205
767966,400,400,2.5,7.4,5292.6,215
769514,154,154,2.5,7.4,5292.6,225
773492,397,397,2.5,7.2,6435.7,236
775374,188,188,2.5,7.4,5292.6,246
776035,66,66,2.6,7.5,5347.1,256
777896,186,186,2.6,7.9,6407.3,266
778791,89,89,2.6,8.0,6438.9,276
779646,85,85,2.6,8.0,6523.2,287
780785,113,113,2.6,11.1,6523.2,297
781336,55,55,2.6,11.7,6523.2,307
781684,34,34,2.6,13.3,16734.0,317
781818,13,13,2.7,13.9,16764.2,328
781952,13,13,2.7,13.9,16900.1,338
782195,24,24,2.7,15.3,16900.1,348
782294,9,9,2.7,15.3,16900.1,358
782362,6,6,2.7,15.5,21487.3,368
782589,22,22,2.7,20.1,21487.3,379
782775,18,18,2.7,28.8,21487.3,389
783012,23,23,2.8,29.4,21487.3,399
783248,23,23,2.8,68.2,21487.3,409
783270,2,2,2.8,68.2,21487.3,419
783283,1,1,2.8,68.2,21487.3,430
783283,0,0,2.8,68.2,21487.3,440
783337,5,5,2.8,198.6,42477.2,450
783492,15,15,2.9,5057.6,52687.3,460
783581,8,8,2.9,5349.0,59306.9,471
783807,22,22,3.0,6429.1,59306.9,481
783816,0,0,3.0,6438.3,59306.9,491
783945,12,12,3.2,11566.9,59306.9,501
783962,1,1,3.2,11612.0,59306.9,512
783962,0,0,3.2,11612.0,59306.9,522


I installed 1.2.6 for comparison and that gave the expected results:

./cassandra-stress  -d 134.36.36.218
Unable to create stress keyspace: Keyspace names must be case-insensitively 
unique ("Keyspace1" conflicts with "Keyspace1")
total,interval_op_rate,interval_key_rate,latency/95th/99th,elapsed_time
22158,2215,2215,0.8,8.9,549.0,10
64192,4203,4203,1.0,7.0,486.2,20
114236,5004,5004,1.3,6.5,411.9,30
178407,6417,6417,1.4,5.5,411.9,40
281714,10330,10330,2.2,5.1,409.6,51
371684,8997,8997,2.4,5.1,407.5,61
464401,9271,9271,2.5,5.0,236.7,72
562797,9839,9839,2.6,5.2,87.5,82
655755,9295,9295,2.6,5.2,87.5,92
751560,9580,9580,2.6,4.7,179.9,103
848022,9646,9646,2.6,4.5,826.0,113
914539,6651,6651,2.6,4.7,823.2,123
100,8546,8546,2.6,5.1,1048.5,133

Any ideas what's causing the slowdown?

Andy


The University of Dundee is a registered Scottish Charity, No: SC015096


R: Re: CL1 and CLQ with 5 nodes cluster and 3 alives node

2013-07-22 Thread cbert...@libero.it
Hi Aaron, thanks for your help.

>If you have more than 500Million rows you may want to check the 
bloom_filter_fp_chance, the old default was 0.000744 and the new (post 1.) 
number is > 0.01 for sized tiered. 

I really don't think I have more than 500 million rows ... is there any smart way to
count the number of rows inside the keyspace?

>> Now a question -- why with 2 nodes offline all my application stop 
providing 
>> the service, even when a Consistency Level One read is invoked?

>What error did the client get and what client are you using ? 
>it also depends on if/how the node fails. The later versions try to shut down 
when there is an OOM, not sure what 1.0 does. 

The exception was a TTransportException -- I am using the Pelops client.

>If the node went into a zombie state the clients may have been timing out.
They should then move on to another node.
>If it had started shutting down the client should have gotten some immediate
errors.

It didn't shut down; it was more like in a zombie state.
One more question: I'm experiencing some wrong counters (which are very
important in my platform since they are used to keep user points and generate
the TopX users) -- could it be related to this problem? The problem is that for
some users (not all) the counter column increased its value.

After such a crash in 1.0 is there any best-practice to follow? (nodetool or 
something?)

Cheers,
Carlo

>
>Cheers
>
>
>-
>Aaron Morton
>Cassandra Consultant
>New Zealand
>
>@aaronmorton
>http://www.thelastpickle.com
>
>On 19/07/2013, at 5:02 PM, cbert...@libero.it wrote:
>
>> Hi all,
>> I'm experiencing some problems after 3 years of cassandra in production 
(from 
>> 0.6 to 1.0.6) -- for 2 times in 3 weeks 2 nodes crashed with OutOfMemory 
>> Exception.
>> In the log I can read the warn about the few heap available ... now I'm 
>> increasing a little bit my RAM, my Java Heap (1/4 of the RAM) and reducing 
the 
>> size of rows and memtables thresholds. Other tips?
>> 
>> Now a question -- why with 2 nodes offline all my application stop 
providing 
>> the service, even when a Consistency Level One read is invoked?
>> I'd expected this behaviour:
>> 
>> CL1 operations keep working
>> more than 80% of CLQ operations working (nodes offline where 2 and 5 in a 
>> clockwise key distribution only writes to fifth node should impact to node 
2)
>> most of all CLALL operations (that I don't use) failing
>> 
>> The situation instead was that I had ALL services stop responding throwing 
a 
>> TTransportException ...
>> 
>> Thanks in advance
>> 
>> Carlo
>
>




sstable size change

2013-07-22 Thread Keith Wright
Hi all,

   I know there have been several threads recently on this but I wanted to make
sure I got a clear answer:  we are looking to increase our SSTable size for a
couple of our LCS tables as well as the chunk size (to match the SSD block size).
The largest table is at 500 GB across 6 nodes (RF 3, C* 1.2.4, vnodes).  I
wanted to get feedback on the best way to make this change with minimal load
impact on the cluster.  After I make the change, I understand that I need to
force the nodes to re-compact the tables.

Can this be done via upgradesstables or do I need to shut down the node, delete
the .json file, and restart, as some have suggested?

I assume I can do this one node at a time?

If I change the bloom filter size, I assume I will need to force compaction 
again?  Using the same methodology?

Thank you


Re: Auto Discovery of Hosts by Clients

2013-07-22 Thread Shahab Yunus
Thanks for you replies.

Regards,
Shahab


On Sun, Jul 21, 2013 at 4:49 PM, aaron morton wrote:

> Give the app the same nodes you have in the seed lists.
>
> Cheers
>
> -
> Aaron Morton
> Cassandra Consultant
> New Zealand
>
> @aaronmorton
> http://www.thelastpickle.com
>
> On 20/07/2013, at 9:32 AM, sankalp kohli  wrote:
>
> With Auto discovery, you can provide the DC you are local to and it will
> only use hosts from that.
>
>
> On Fri, Jul 19, 2013 at 2:08 PM, Shahab Yunus wrote:
>
>> Hello,
>>
>> I want my Thrift client(s) (using hector 1.1-3) to randomly connect to
>> any node in the Cassandra (1.2.4) cluster.
>>
>> 1- One way is that I pass in a comma-separated list of hosts and ports to
>> the CassandraHostConfigurator object.
>> 2- The other option is that I configure the auto-discovery of hosts
>> (through setAutoDiscoverHosts and related methods) on the
>> CassandraHostConfigurator object while passing only one pair of host/port.
>>
>> Is one way better than the other, or do both have their pros and cons depending
>> on the use case? In case of 1, it can become unwieldy if the cluster grows.
>> In case of 2, would I have to be extra careful while adding/removing nodes
>> (will it conflict with bootstrapping) or is it business as usual?
>>
>> I don't expect to have a multi-DC setup for near future but I believe
>> that would be one consideration.
>>
>> Is there any other method that I am missing? Is it dependent or varies
>> with the client API that I am using?
>>
>>
>> Thanks a lot.
>>
>> Regards,
>> Shahab
>>
>
>
>
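
A minimal sketch of option 2 above (host auto-discovery), assuming the Hector 1.1-x CassandraHostConfigurator API described in the question; the host names, port, and cluster name are placeholders:

    import me.prettyprint.cassandra.service.CassandraHostConfigurator;
    import me.prettyprint.hector.api.Cluster;
    import me.prettyprint.hector.api.factory.HFactory;

    public class HectorSetup {
        public static Cluster connect() {
            // Seed the client with the same nodes as the cluster's seed list,
            // per Aaron's suggestion; discovery fills in the rest of the ring.
            CassandraHostConfigurator config =
                    new CassandraHostConfigurator("seed1:9160,seed2:9160");
            config.setAutoDiscoverHosts(true);          // option 2 from the question
            config.setAutoDiscoveryDelayInSeconds(60);  // how often to refresh the host list
            config.setRetryDownedHosts(true);
            return HFactory.getOrCreateCluster("TestCluster", config);
        }
    }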


OPP seems completely unsupported in Cassandra 1.2.5

2013-07-22 Thread Vara Kumar
We were using version 0.7.6 and upgraded to 1.2.5 today. We were using OPP
(OrderPreservingPartitioner).

OPP throws an error when any node joins the cluster, and the cluster cannot be
brought up because of it. After digging a little deeper, we realized that the "peers"
column family is defined with key type "inet". It looks like many other
column families in the system keyspace have the same issue.

- I know that OPP is deprecated. Is OPP completely unsupported? Is that
stated in the upgrade instructions or somewhere? Did we miss it?
- I could not find any related discussion or JIRA records about a similar
issue.


Exception trace:
java.lang.RuntimeException: The provided key was not UTF8 encoded.
at
org.apache.cassandra.dht.OrderPreservingPartitioner.getToken(OrderPreservingPartitioner.java:172)
at
org.apache.cassandra.dht.OrderPreservingPartitioner.decorateKey(OrderPreservingPartitioner.java:44)
at org.apache.cassandra.db.Table.apply(Table.java:379)
at org.apache.cassandra.db.Table.apply(Table.java:353)
at org.apache.cassandra.db.RowMutation.apply(RowMutation.java:258)
at
org.apache.cassandra.cql3.statements.ModificationStatement.executeInternal(ModificationStatement.java:117)
at
org.apache.cassandra.cql3.QueryProcessor.processInternal(QueryProcessor.java:172)
at org.apache.cassandra.db.SystemTable.updatePeerInfo(SystemTable.java:258)
at
org.apache.cassandra.service.StorageService.onChange(StorageService.java:1231)
at
org.apache.cassandra.service.StorageService.onJoin(StorageService.java:1948)
at
org.apache.cassandra.gms.Gossiper.handleMajorStateChange(Gossiper.java:823)
at org.apache.cassandra.gms.Gossiper.applyStateLocally(Gossiper.java:901)
at
org.apache.cassandra.gms.GossipDigestAck2VerbHandler.doVerb(GossipDigestAck2VerbHandler.java:50)
at
org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:56)
at
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
Caused by: java.nio.charset.MalformedInputException: Input length = 1
at java.nio.charset.CoderResult.throwException(CoderResult.java:260)
at java.nio.charset.CharsetDecoder.decode(CharsetDecoder.java:781)
at org.apache.cassandra.utils.ByteBufferUtil.string(ByteBufferUtil.java:167)
at org.apache.cassandra.utils.ByteBufferUtil.string(ByteBufferUtil.java:124)
at
org.apache.cassandra.dht.OrderPreservingPartitioner.getToken(OrderPreservingPartitioner.java:168)


Cassandra Out of Memory on startup while reading cache

2013-07-22 Thread Jason Tyler
Hello,

Since upgrading from 1.1.9 to 1.2.6 over the last week, we've had two instances
where cassandra was unable to start, but kept trying to restart:

SNIP
 INFO [main] 2013-07-19 16:12:36,769 AutoSavingCache.java (line 140) reading 
saved cache /var/cassandra/caches/SyncCore-CommEvents-KeyCache-b.db
ERROR [main] 2013-07-19 16:12:36,966 CassandraDaemon.java (line 458) Exception 
encountered during startup
java.lang.OutOfMemoryError: Java heap space
at 
org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:394)
at 
org.apache.cassandra.utils.ByteBufferUtil.readWithLength(ByteBufferUtil.java:355)
at 
org.apache.cassandra.service.CacheService$KeyCacheSerializer.deserialize(CacheService.java:379)
at 
org.apache.cassandra.cache.AutoSavingCache.loadSaved(AutoSavingCache.java:145)
at 
org.apache.cassandra.db.ColumnFamilyStore.(ColumnFamilyStore.java:266)
at 
org.apache.cassandra.db.ColumnFamilyStore.createColumnFamilyStore(ColumnFamilyStore.java:382)
at 
org.apache.cassandra.db.ColumnFamilyStore.createColumnFamilyStore(ColumnFamilyStore.java:354)
at org.apache.cassandra.db.Table.initCf(Table.java:329)
at org.apache.cassandra.db.Table.(Table.java:272)
at org.apache.cassandra.db.Table.open(Table.java:109)
at org.apache.cassandra.db.Table.open(Table.java:87)
at 
org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:271)
at 
org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:441)
at 
org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:484)
 INFO [main] 2013-07-19 16:12:43,288 CassandraDaemon.java (line 118) Logging 
initialized
SNIP

This is new behavior with 1.2.6.

Stopping cassandra, moving the offending file, then starting cassandra does 
succeed.

Any config suggestions (key cache config?) to prevent this from happening?

THX


Cheers,

~Jason


Re: Re: CL1 and CLQ with 5 nodes cluster and 3 alives node

2013-07-22 Thread Nate McCall
Do you have a copy of the specific stack trace? Given the version and
CL behavior, one thing you may be experiencing is:
https://issues.apache.org/jira/browse/CASSANDRA-4578

On Mon, Jul 22, 2013 at 7:15 AM, cbert...@libero.it  wrote:
> Hi Aaron, thanks for your help.
>
>>If you have more than 500Million rows you may want to check the
> bloom_filter_fp_chance, the old default was 0.000744 and the new (post 1.)
> number is > 0.01 for sized tiered.
>
> I really don't think I have more than 500 million rows ... any smart way to
> count rows number inside the ks?
>
>>> Now a question -- why with 2 nodes offline all my application stop
> providing
>>> the service, even when a Consistency Level One read is invoked?
>
>>What error did the client get and what client are you using ?
>>it also depends on if/how the node fails. The later versions try to shut down
> when there is an OOM, not sure what 1.0 does.
>
> The exception was a TTransportException -- I am using Pelops client.
>
>>Is the node went into a zombie state the clients may have been timing out.
> The should then move onto to another node.
>>If it had started shutting down the client should have gotten some immediate
> errors.
>
> It didn't shut down, it was more like in a zombie state,
> One more question: I'm experiencing some wrong counters (which are very
> important in my platform since the are used to keep user-points and generate
> the TopX users) --could it be related with this problem? The problem is that 
> in
> some users (not all) the counter column increased its value.
>
> After such a crash in 1.0 is there any best-practice to follow? (nodetool or
> something?)
>
> Cheers,
> Carlo
>
>>
>>Cheers
>>
>>
>>-
>>Aaron Morton
>>Cassandra Consultant
>>New Zealand
>>
>>@aaronmorton
>>http://www.thelastpickle.com
>>
>>On 19/07/2013, at 5:02 PM, cbert...@libero.it wrote:
>>
>>> Hi all,
>>> I'm experiencing some problems after 3 years of cassandra in production
> (from
>>> 0.6 to 1.0.6) -- for 2 times in 3 weeks 2 nodes crashed with OutOfMemory
>>> Exception.
>>> In the log I can read the warn about the few heap available ... now I'm
>>> increasing a little bit my RAM, my Java Heap (1/4 of the RAM) and reducing
> the
>>> size of rows and memtables thresholds. Other tips?
>>>
>>> Now a question -- why with 2 nodes offline all my application stop
> providing
>>> the service, even when a Consistency Level One read is invoked?
>>> I'd expected this behaviour:
>>>
>>> CL1 operations keep working
>>> more than 80% of CLQ operations working (nodes offline where 2 and 5 in a
>>> clockwise key distribution only writes to fifth node should impact to node
> 2)
>>> most of all CLALL operations (that I don't use) failing
>>>
>>> The situation instead was that I had ALL services stop responding throwing
> a
>>> TTransportException ...
>>>
>>> Thanks in advance
>>>
>>> Carlo
>>
>>
>
>


Re: Cassandra Out of Memory on startup while reading cache

2013-07-22 Thread Janne Jalkanen

Sounds like this: https://issues.apache.org/jira/browse/CASSANDRA-5706, which 
is fixed in 1.2.7.

/Janne

On 22 Jul 2013, at 20:40, Jason Tyler  wrote:

> Hello,
> 
> Since upgrading from 1.1.9 to 1.2.6 over the last week, we've had two 
> instances where cassandra was unable, but kept trying to restart:
> 
> SNIP
>  INFO [main] 2013-07-19 16:12:36,769 AutoSavingCache.java (line 140) reading 
> saved cache /var/cassandra/caches/SyncCore-CommEvents-KeyCache-b.db
> ERROR [main] 2013-07-19 16:12:36,966 CassandraDaemon.java (line 458) 
> Exception encountered during startup
> java.lang.OutOfMemoryError: Java heap space
> at 
> org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:394)
> at 
> org.apache.cassandra.utils.ByteBufferUtil.readWithLength(ByteBufferUtil.java:355)
> at 
> org.apache.cassandra.service.CacheService$KeyCacheSerializer.deserialize(CacheService.java:379)
> at 
> org.apache.cassandra.cache.AutoSavingCache.loadSaved(AutoSavingCache.java:145)
> at 
> org.apache.cassandra.db.ColumnFamilyStore.(ColumnFamilyStore.java:266)
> at 
> org.apache.cassandra.db.ColumnFamilyStore.createColumnFamilyStore(ColumnFamilyStore.java:382)
> at 
> org.apache.cassandra.db.ColumnFamilyStore.createColumnFamilyStore(ColumnFamilyStore.java:354)
> at org.apache.cassandra.db.Table.initCf(Table.java:329)
> at org.apache.cassandra.db.Table.(Table.java:272)
> at org.apache.cassandra.db.Table.open(Table.java:109)
> at org.apache.cassandra.db.Table.open(Table.java:87)
> at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:271)
> at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:441)
> at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:484)
>  INFO [main] 2013-07-19 16:12:43,288 CassandraDaemon.java (line 118) Logging 
> initialized
> SNIP
> 
> This is new behavior with 1.2.6.  
> 
> Stopping cassandra, moving the offending file, then starting cassandra does 
> succeed.  
> 
> Any config suggestions (key cache config?) to prevent this from happening?
> 
> THX
> 
> 
> Cheers,
> 
> ~Jason



Re: sstable size change

2013-07-22 Thread Janne Jalkanen

I don't think upgradesstables is enough, since it's more of a "change this file
to a new format but don't try to merge sstables and compact" kind of thing.

Deleting the .json file is probably the only way, but someone more familiar
with Cassandra LCS might be able to tell whether manually editing the json file
so that you drop all sstables by a level might work. Since they would overflow the
new level, they would compact soon, but the impact might be less drastic than
just deleting the .json file (which takes everything to L0)...

/Janne

On 22 Jul 2013, at 16:02, Keith Wright  wrote:

> Hi all,
> 
>I know there has been several threads recently on this but I wanted to 
> make sure I got a clear answer:  we are looking to increase our SSTable size 
> for a couple of our LCS tables as well as chunk size (to match the SSD block 
> size).   The largest table is at 500 GB across 6 nodes (RF 3, C* 1.2.4 
> VNodes).  I wanted to get feedback on the best way to make this change with 
> minimal load impact on the cluster.  After I make the change, I understand 
> that I need to force the nodes to re-compact the tables.  
> 
> Can this be done via upgrade sstables or do I need to shutdown the node, 
> delete the .json file, and restart as some have suggested?  
> 
> I assume I can do this one node at a time?
> 
> If I change the bloom filter size, I assume I will need to force compaction 
> again?  Using the same methodology?
> 
> Thank you



Cassandra 2.0 : Ant build issue

2013-07-22 Thread Soumava Ghosh
Hi,

I'm working on Mac OS 10.8 and trying to build the cassandra trunk using
ant. I am getting the error below. I can see a related bug that fixed a
similar issue for Debian
(https://issues.apache.org/jira/browse/CASSANDRA-5688), but I can still
repro this on Mac.

Thanks,
Soumava

soumava$ git describe
cassandra-2.0.0-beta1-100-ge0eacd2
...
soumava$ ant
...
gen-cql3-grammar:
 [echo] Building Grammar
/Users/soumava/Documents/src/git-cassandra/src/java/org/apache/cassandra/cql3/Cql.g
 ...

build-project:
 [echo] apache-cassandra:
/Users/soumava/Documents/src/git-cassandra/build.xml
[javac] Compiling 17 source files to
/Users/soumava/Documents/src/git-cassandra/build/classes/thrift
[javac] javac: invalid target release: 1.7
[javac] Usage: javac  
[javac] use -help for a list of possible options

BUILD FAILED
/Users/soumava/Documents/src/git-cassandra/build.xml:636: Compile failed;
see the compiler error output for details.

Total time: 7 seconds


Re: sstable size change

2013-07-22 Thread Andrew Bialecki
My understanding is that deleting the .json metadata file is the only way
currently. If you search the user list archives, there are folks who are
building tools to force compaction and rebuild sstables with the new size.
I believe there's been a bit of talk of potentially including those tools
as a part of a future release.

Also, to answer your question about bloom filters, those are handled
differently and if you run upgradesstables after altering the BF FP ratio,
that will rebuild the BFs for each sstable.


On Mon, Jul 22, 2013 at 2:49 PM, Janne Jalkanen wrote:

>
> I don't think upgradesstables is enough, since it's more of a "change this
> file to a new format but don't try to merge sstables and compact" -thing.
>
> Deleting the .json -file is probably the only way, but someone more
> familiar with cassandra LCS might be able to tell whether manually editing
> the json file so that you drop all sstables a level might work? Since they
> would overflow the new level, they would compact soon, but the impact might
> be less drastic than just deleting the .json file (which takes everything
> to L0)...
>
> /Janne
>
> On 22 Jul 2013, at 16:02, Keith Wright  wrote:
>
> Hi all,
>
>I know there has been several threads recently on this but I wanted to
> make sure I got a clear answer:  we are looking to increase our SSTable
> size for a couple of our LCS tables as well as chunk size (to match the SSD
> block size).   The largest table is at 500 GB across 6 nodes (RF 3, C*
> 1.2.4 VNodes).  I wanted to get feedback on the best way to make this
> change with minimal load impact on the cluster.  After I make the change, I
> understand that I need to force the nodes to re-compact the tables.
>
> Can this be done via upgrade sstables or do I need to shutdown the node,
> delete the .json file, and restart as some have suggested?
>
> I assume I can do this one node at a time?
>
> If I change the bloom filter size, I assume I will need to force
> compaction again?  Using the same methodology?
>
> Thank you
>
>
>


Re: Cassandra 2.0 : Ant build issue

2013-07-22 Thread Blair Zajac


On 07/22/2013 12:16 PM, Soumava Ghosh wrote:

Hi,

I'm working on a Mac OS 10.8 and trying to build the cassandra trunk
using ant. I am getting the error as below. I can see a related bug that
fixed a similar issue for debian
(https://issues.apache.org/jira/browse/CASSANDRA-5688), but I can still
repro this on Mac.


I did that fix for Linux and it bumped the required java on the box from 
1.6 to 1.7 since Cassandra 2.0 targets JDK 1.7 and later.  You'll need 
to put Oracle's JDK 1.7 on your Mac and set it as the default java.


http://www.oracle.com/technetwork/java/javase/downloads/jdk7-downloads-1880260.html

Blair



Re: Cassandra 2.0 : Ant build issue

2013-07-22 Thread Soumava Ghosh
I'm using 1.7.0_21 (not 25, though)..

soumava$ java -version
java version "1.7.0_21"
Java(TM) SE Runtime Environment (build 1.7.0_21-b12)
Java HotSpot(TM) 64-Bit Server VM (build 23.21-b01, mixed mode)

Thanks,
Soumava


On Mon, Jul 22, 2013 at 1:06 PM, Andrew Cobley wrote:

>  Are you using JDK 1.6.  If so you'll need to get the 1.7 jdk (Java SE
> 7u25) from oracle to do the compile.  See my message thread earlier today
> subject "Cassandra 2 vs Java 1.6 for a few more details.
>
>  Andy
>
>
>  On 22 Jul 2013, at 20:16, Soumava Ghosh  wrote:
>
>  Hi,
>
>  I'm working on a Mac OS 10.8 and trying to build the cassandra trunk
> using ant. I am getting the error as below. I can see a related bug that
> fixed a similar issue for debian (
> https://issues.apache.org/jira/browse/CASSANDRA-5688), but I can still
> repro this on Mac.
>
>  Thanks,
> Soumava
>
>  soumava$ git describe
> cassandra-2.0.0-beta1-100-ge0eacd2
> ...
> soumava$ ant
> ...
>  gen-cql3-grammar:
>  [echo] Building Grammar
> /Users/soumava/Documents/src/git-cassandra/src/java/org/apache/cassandra/cql3/Cql.g
>  ...
>
>  build-project:
>  [echo] apache-cassandra:
> /Users/soumava/Documents/src/git-cassandra/build.xml
> [javac] Compiling 17 source files to
> /Users/soumava/Documents/src/git-cassandra/build/classes/thrift
> [javac] javac: invalid target release: 1.7
> [javac] Usage: javac  
> [javac] use -help for a list of possible options
>
>  BUILD FAILED
> /Users/soumava/Documents/src/git-cassandra/build.xml:636: Compile failed;
> see the compiler error output for details.
>
>  Total time: 7 seconds
>
>
>
> The University of Dundee is a registered Scottish Charity, No: SC015096
>


Re: Cassandra 2.0 : Ant build issue

2013-07-22 Thread Andrew Cobley
Are you using JDK 1.6? If so you'll need to get the 1.7 JDK (Java SE 7u25)
from Oracle to do the compile. See my message thread earlier today, subject
"Cassandra 2 vs Java 1.6", for a few more details.

Andy


On 22 Jul 2013, at 20:16, Soumava Ghosh <soum...@cs.utexas.edu> wrote:

Hi,

I'm working on a Mac OS 10.8 and trying to build the cassandra trunk using ant. 
I am getting the error as below. I can see a related bug that fixed a similar 
issue for debian (https://issues.apache.org/jira/browse/CASSANDRA-5688), but I 
can still repro this on Mac.

Thanks,
Soumava

soumava$ git describe
cassandra-2.0.0-beta1-100-ge0eacd2
...
soumava$ ant
...
gen-cql3-grammar:
 [echo] Building Grammar 
/Users/soumava/Documents/src/git-cassandra/src/java/org/apache/cassandra/cql3/Cql.g
  ...

build-project:
 [echo] apache-cassandra: 
/Users/soumava/Documents/src/git-cassandra/build.xml
[javac] Compiling 17 source files to 
/Users/soumava/Documents/src/git-cassandra/build/classes/thrift
[javac] javac: invalid target release: 1.7
[javac] Usage: javac  
[javac] use -help for a list of possible options

BUILD FAILED
/Users/soumava/Documents/src/git-cassandra/build.xml:636: Compile failed; see 
the compiler error output for details.

Total time: 7 seconds



The University of Dundee is a registered Scottish Charity, No: SC015096


Re: Cassandra 2.0 : Ant build issue

2013-07-22 Thread Soumava Ghosh
Downloaded 1.7.0_25 and it still produces the following result:

soumava$ git describe
cassandra-2.0.0-beta1-100-ge0eacd2

soumava$ ant
...
build-project:
 [echo] apache-cassandra:
/Users/soumava/Documents/src/git-cassandra/build.xml
[javac] Compiling 17 source files to
/Users/soumava/Documents/src/git-cassandra/build/classes/thrift
[javac] javac: invalid target release: 1.7
[javac] Usage: javac  
[javac] use -help for a list of possible options

BUILD FAILED
/Users/soumava/Documents/src/git-cassandra/build.xml:636: Compile failed;
see the compiler error output for details.

Total time: 1 second

soumava$ java -version
java version "1.7.0_25"
Java(TM) SE Runtime Environment (build 1.7.0_25-b15)
Java HotSpot(TM) 64-Bit Server VM (build 23.25-b01, mixed mode)

Thanks,
Soumava



On Mon, Jul 22, 2013 at 1:09 PM, Soumava Ghosh wrote:

> I'm using 1.7.0_21 (not 25 though)..
>
> soumava$ java -version
> java version "1.7.0_21"
> Java(TM) SE Runtime Environment (build 1.7.0_21-b12)
> Java HotSpot(TM) 64-Bit Server VM (build 23.21-b01, mixed mode)
>
> Thanks,
> Soumava


RE: Cassandra 2.0 : Ant build issue

2013-07-22 Thread Andrew Cobley
What version do you get with

javac -version

?
Andy


From: soum...@utexas.edu [soum...@utexas.edu] on behalf of Soumava Ghosh 
[soum...@cs.utexas.edu]
Sent: 22 July 2013 21:09
To: user@cassandra.apache.org
Subject: Re: Cassandra 2.0 : Ant build issue

I'm using 1.7.0_21 (not 25 though)..

soumava$ java -version
java version "1.7.0_21"
Java(TM) SE Runtime Environment (build 1.7.0_21-b12)
Java HotSpot(TM) 64-Bit Server VM (build 23.21-b01, mixed mode)

Thanks,
Soumava




Re: Cassandra 2.0 : Ant build issue

2013-07-22 Thread Blair Zajac

On 07/22/2013 01:18 PM, Soumava Ghosh wrote:

Downloaded 1.7.0_25 and it still produces the following result:

soumava$ git describe
cassandra-2.0.0-beta1-100-ge0eacd2

soumava$ ant
...
build-project:
  [echo] apache-cassandra:
/Users/soumava/Documents/src/git-cassandra/build.xml
 [javac] Compiling 17 source files to
/Users/soumava/Documents/src/git-cassandra/build/classes/thrift
 [javac] javac: invalid target release: 1.7
 [javac] Usage: javac  
 [javac] use -help for a list of possible options


What does `javac -version` print?

Blair




Re: Cassandra 2.0 : Ant build issue

2013-07-22 Thread Soumava Ghosh
There you go:

soumava$ javac -version
javac 1.7.0_25

Thanks,
Soumava



On Mon, Jul 22, 2013 at 1:19 PM, Andrew Cobley wrote:

>  What version do you get with
>
> javac -version
>
> ?
> Andy


RE: Cassandra 2.0 : Ant build issue

2013-07-22 Thread Andrew Cobley
Hmm,

I'm not sure then; I built 2.0 beta 1 earlier today with 1.7.0_25. One last
thing: what have you got JAVA_HOME set to?

Andy


From: soum...@utexas.edu [soum...@utexas.edu] on behalf of Soumava Ghosh 
[soum...@cs.utexas.edu]
Sent: 22 July 2013 21:21
To: user
Subject: Re: Cassandra 2.0 : Ant build issue

There you go:

soumava$ javac -version
javac 1.7.0_25

Thanks,
Soumava





Re: Cassandra 2.0 : Ant build issue

2013-07-22 Thread Soumava Ghosh
Thanks Andrew!

JAVA_HOME was the issue. It was not set, and I think that's why the build
was somehow defaulting to /Library/Java/Home which was a 1.6 JDK. It should
have been /Library/Java/JavaVirtualMachines/jdk1.7.0_25.jdk/Contents/Home.
Setting JAVA_HOME to this path unblocked the build.
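
For anyone hitting the same thing, a minimal sketch of making this stick on
Mac OS (assuming a bash login shell; the -v value is just the major version
wanted). The /usr/libexec/java_home helper resolves the JDK path, so it does
not have to be hard-coded:

$ /usr/libexec/java_home -v 1.7
/Library/Java/JavaVirtualMachines/jdk1.7.0_25.jdk/Contents/Home
$ echo 'export JAVA_HOME=$(/usr/libexec/java_home -v 1.7)' >> ~/.bash_profile
$ source ~/.bash_profile && echo $JAVA_HOME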

If I may ask, isn't the Java setup supposed to update the environment
variable?


On Mon, Jul 22, 2013 at 1:25 PM, Andrew Cobley wrote:

>  Hmm,
>
> I'm not sure then, I built 2.0 beta 1 earlier today with 1.7.0_25.  One
> last thing, what have you got JAVA_HOME set to ?
>
> Andy


Re: Cassandra 2.0 : Ant build issue

2013-07-22 Thread Blair Zajac

On 07/22/2013 01:31 PM, Soumava Ghosh wrote:

Thanks Andrew!

JAVA_HOME was the issue. It was not set, and I think that's why the
build was somehow defaulting to /Library/Java/Home which was a 1.6 JDK.
It should have
been /Library/Java/JavaVirtualMachines/jdk1.7.0_25.jdk/Contents/Home.
Setting JAVA_HOME to this path unblocked the build.

If I may ask, isn't the Java setup supposed to update the environment
variable?


It looks like it's ant doing this. In ant 1.9.0's shell script wrapper,
it'll use the OS's Java if JAVA_HOME isn't set (see below).


Blair


# OS specific support.  $var _must_ be set to either true or false.
cygwin=false;
darwin=false;
mingw=false;
case "`uname`" in
  CYGWIN*) cygwin=true ;;
  Darwin*) darwin=true
   if [ -z "$JAVA_HOME" ] ; then
   if [ -x '/usr/libexec/java_home' ] ; then
   JAVA_HOME=`/usr/libexec/java_home`
   elif [ -d 
"/System/Library/Frameworks/JavaVM.framework/Versions/CurrentJDK/Home" 
]; then


JAVA_HOME=/System/Library/Frameworks/JavaVM.framework/Versions/CurrentJDK/Home
   fi
   fi
   ;;
  MINGW*) mingw=true ;;
esac




RE: Cassandra 2.0 : Ant build issue

2013-07-22 Thread Andrew Cobley
>> If I may ask, isn't the Java setup supposed to update the environment variable?

Sadly I don't think it does!

Andy




Re: sstable size change

2013-07-22 Thread sankalp kohli
You can remove the .json file, and all sstables will then be treated as being
in L0. Since you have a lot of data, the compaction will take a very long
time. See the comment below, taken directly from the Cassandra code. If you
choose to do this, you might want to increase the rate of compaction by the
usual means. If you are on spinning disks, it could be a very big problem.
While that compaction runs, read performance will be impacted.

Unless there is a very urgent need to change the sstable size, I would change
the size and let it take its own course organically.



// LevelDB gives each level a score of how much data it contains vs its ideal amount, and
// compacts the level with the highest score. But this falls apart spectacularly once you
// get behind.  Consider this set of levels:
// L0: 988 [ideal: 4]
// L1: 117 [ideal: 10]
// L2: 12  [ideal: 100]
//
// The problem is that L0 has a much higher score (almost 250) than L1 (11), so what we'll
// do is compact a batch of MAX_COMPACTING_L0 sstables with all 117 L1 sstables, and put the
// result (say, 120 sstables) in L1. Then we'll compact the next batch of MAX_COMPACTING_L0,
// and so forth.  So we spend most of our i/o rewriting the L1 data with each batch.
//
// If we could just do *all* L0 a single time with L1, that would be ideal.  But we can't
// -- see the javadoc for MAX_COMPACTING_L0.
//
// LevelDB's way around this is to simply block writes if L0 compaction falls behind.
// We don't have that luxury.
//
// So instead, we
// 1) force compacting higher levels first, which minimizes the i/o needed to compact
//    optimially which gives us a long term win, and
// 2) if L0 falls behind, we will size-tiered compact it to reduce read overhead until
//    we can catch up on the higher levels.
//
// This isn't a magic wand -- if you are consistently writing too fast for LCS to keep
// up, you're still screwed.  But if instead you have intermittent bursts of activity,
// it can help a lot.
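
For reference, a rough sketch of the two steps being discussed, with
placeholder keyspace/table names and purely illustrative numbers; the "usual
means" of speeding compaction up is the throughput cap in nodetool:

cqlsh> ALTER TABLE myks.mytable
   ... WITH compaction = {'class': 'LeveledCompactionStrategy', 'sstable_size_in_mb': 256};

$ nodetool setcompactionthroughput 64   # temporarily raise the MB/s cap (0 removes it)
$ nodetool compactionstats              # watch the L0 backlog drain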


On Mon, Jul 22, 2013 at 12:51 PM, Andrew Bialecki  wrote:

> My understanding is deleting the .json metadata file is the only way
> currently. If you search the user list archives, there are folks who are
> building tools to force compaction and rebuild sstables with the new size.
> I believe there's been a bit of talk of potentially including those tools
> as a part of a future release.
>
> Also, to answer your question about bloom filters, those are handled
> differently and if you run upgradesstables after altering the BF FP ratio,
> that will rebuild the BFs for each sstable.
>
>
> On Mon, Jul 22, 2013 at 2:49 PM, Janne Jalkanen 
> wrote:
>
>>
>> I don't think upgradesstables is enough, since it's more of a "change
>> this file to a new format but don't try to merge sstables and compact"
>> -thing.
>>
>> Deleting the .json -file is probably the only way, but someone more
>> familiar with cassandra LCS might be able to tell whether manually editing
>> the json file so that you drop all sstables a level might work? Since they
>> would overflow the new level, they would compact soon, but the impact might
>> be less drastic than just deleting the .json file (which takes everything
>> to L0)...
>>
>> /Janne
>>
>> On 22 Jul 2013, at 16:02, Keith Wright  wrote:
>>
>> Hi all,
>>
>> I know there have been several threads recently on this, but I wanted to
>> make sure I got a clear answer: we are looking to increase our SSTable
>> size for a couple of our LCS tables as well as chunk size (to match the SSD
>> block size).   The largest table is at 500 GB across 6 nodes (RF 3, C*
>> 1.2.4 VNodes).  I wanted to get feedback on the best way to make this
>> change with minimal load impact on the cluster.  After I make the change, I
>> understand that I need to force the nodes to re-compact the tables.
>>
>> Can this be done via upgrade sstables or do I need to shutdown the node,
>> delete the .json file, and restart as some have suggested?
>>
>> I assume I can do this one node at a time?
>>
>> If I change the bloom filter size, I assume I will need to force
>> compaction again?  Using the same methodology?
>>
>> Thank you
>>
>>
>>
>


cassandra 1.2.6 -> Start key's token sorts after end token

2013-07-22 Thread Marcelo Elias Del Valle
Hello,

I am trying to figure out what might be causing this error. I am using
Cassandra 1.2.6 (tried with 1.2.3 as well) and I am trying to read data
from Cassandra on Hadoop using the column family input format. I also got
the same error using pure Astyanax in a test.
I am using Murmur3Partitioner, and I created the keyspace using
Cassandra 1.2.6; there is nothing from prior versions. I created the
keyspace with SimpleStrategy and replication factor 1.
Here is the exception I am getting:
2013-07-22 21:53:05,824 WARN org.apache.hadoop.mapred.Child (main): Error
running child
java.lang.RuntimeException: InvalidRequestException(why:Start key's token
sorts after end token)
at
org.apache.cassandra.hadoop.ColumnFamilyRecordReader$WideRowIterator.maybeInit(ColumnFamilyRecordReader.java:453)
at
org.apache.cassandra.hadoop.ColumnFamilyRecordReader$WideRowIterator.computeNext(ColumnFamilyRecordReader.java:459)
at
org.apache.cassandra.hadoop.ColumnFamilyRecordReader$WideRowIterator.computeNext(ColumnFamilyRecordReader.java:406)
at
com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
at
com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
at
org.apache.cassandra.hadoop.ColumnFamilyRecordReader.getProgress(ColumnFamilyRecordReader.java:103)
at
org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.getProgress(MapTask.java:522)
at
org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:547)
at org.apache.hadoop.mapreduce.MapContext.nextKeyValue(MapContext.java:67)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:143)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:771)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:375)
at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1132)
at org.apache.hadoop.mapred.Child.main(Child.java:249)
Caused by: InvalidRequestException(why:Start key's token sorts after end
token)
at
org.apache.cassandra.thrift.Cassandra$get_paged_slice_result.read(Cassandra.java:14168)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
at
org.apache.cassandra.thrift.Cassandra$Client.recv_get_paged_slice(Cassandra.java:769)
at
org.apache.cassandra.thrift.Cassandra$Client.get_paged_slice(Cassandra.java:753)
at
org.apache.cassandra.hadoop.ColumnFamilyRecordReader$WideRowIterator.maybeInit(ColumnFamilyRecordReader.java:438)
... 16 more
2013-07-22 21:53:05,828 INFO org.apache.hadoop.mapred.Task (main): Runnning
cleanup for the task

 Any hint?

Best regards,
-- 
Marcelo Elias Del Valle
http://mvalle.com - @mvallebr


how to find cassandra version using pycassa

2013-07-22 Thread F Q
I see only the API version and the schema version available through the API. Any ideas?

Feng


Re: how to find cassandra version using pycassa

2013-07-22 Thread Nate McCall
Cassandra's version itself is not available from the API.

However, as you mention, describe_version provides the API version
from which you could deduce the approximate version of Cassandra. You
would just have to maintain this mapping yourself.
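
As an aside, outside of pycassa: if the cluster is on 1.2 or later, the server
also reports its own version in the system keyspace, so a plain CQL query
avoids maintaining that mapping. A quick sketch:

$ cqlsh
cqlsh> SELECT release_version FROM system.local;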

On Mon, Jul 22, 2013 at 6:37 PM, F Q  wrote:
> I see only api version and schema version  API available. Any idea
>
> Feng


Re: memtable overhead

2013-07-22 Thread Darren Smythe
The way we've gone about our data models has resulted in lots of column
families, and I'm just looking for guidelines on how much space each column
family adds.

TIA


On Sun, Jul 21, 2013 at 11:19 PM, Darren Smythe wrote:

> Hi,
>
> How much overhead (in heap MB) does an empty memtable use? If I have many
> column families that aren't written to often, how much memory do these take
> up?
>
> TIA
>
> -- Darren
>


Re: memtable overhead

2013-07-22 Thread Michał Michalski
Not sure how up-to-date this info is, but from some discussions that
happened here a long time ago I remember that a minimum of 1 MB per
memtable needs to be allocated.


The other constraint here is memtable_total_space_in_mb setting in 
cassandra.yaml, which you might wish to tune when having a lot of CFs.
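
For reference, it is a single line in cassandra.yaml; the value below is
purely illustrative, and when it is left commented out, Cassandra derives
the limit from the heap size instead (a third of the heap in the 1.2
defaults):

memtable_total_space_in_mb: 2048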


M.

On 23.07.2013 07:12, Darren Smythe wrote:

The way weve gone about our data models has resulted in lots of column
families and just looking for guidelines about how much space each column
table adds.

TIA

