Select with filtering

2014-04-25 Thread Mikhail Mazursky
Hello all,

I have the following schema:

CREATE TABLE my_table (
a varchar,
b varchar,
c int,
d varchar,
e uuid,
PRIMARY KEY ((a, b), c, d)
)

SELECT * FROM my_table WHERE a=? AND b=? AND e=? ALLOW FILTERING

The query above gives me the following exception message:

com.datastax.driver.core.exceptions.InvalidQueryException: No indexed
columns present in by-columns clause with Equal operator

SELECT * FROM my_table WHERE a=? AND b=?
works fine, and I see no reason why the original query should not be able to do
such filtering.

If I add a secondary index on the "e" column, then the query works, but I do
not want to do that.
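
A sketch of the secondary-index workaround mentioned above (the index name is
illustrative; with an indexed equality predicate the query becomes valid):

```sql
-- Illustrative only: a secondary index on e makes the filtered query accepted
CREATE INDEX my_table_e_idx ON my_table (e);

SELECT * FROM my_table WHERE a=? AND b=? AND e=?;
```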

Cassandra 2.0.5
Driver 2.0.1

Is that a bug/not implemented feature? Or maybe I'm doing something wrong?

Kind regards,
Mikhail.


RE: Select with filtering

2014-04-25 Thread Paco Trujillo
Hi Mikhail

It is not a bug or an unimplemented feature, and you are not doing anything
wrong. As the documentation explains, you can only filter on a key column or a
column that has a secondary index created on it:

http://www.datastax.com/documentation/cql/3.1/cql/cql_reference/select_r.html

From: Mikhail Mazursky [mailto:ash...@gmail.com]
Sent: Friday 25 April 2014 11:01
To: user@cassandra.apache.org
Subject: Select with filtering



Recommended Approach for Config Changes

2014-04-25 Thread Phil Burress
If I wanted to make a configuration change to a single node in a cluster,
what is the recommended approach for doing that? Is it ok to just stop that
instance, make the change and then restart it?


Re: Bootstrap Timing

2014-04-25 Thread Phil Burress
Just a follow-up on this for any interested parties. Ultimately we've
determined that the bootstrap/join process is broken in Cassandra. We ended
up creating an entirely new cluster and migrating the data.


On Mon, Apr 21, 2014 at 10:32 AM, Phil Burress wrote:

> The new node has managed to stay up without dying for about 24 hours
> now... but it still is in JOINING state. A new concern has popped up. Disk
> usage is at 500GB on the new node. The three original nodes have about 40GB
> each. Any ideas why this is happening?
>
>
> On Sat, Apr 19, 2014 at 9:19 PM, Phil Burress wrote:
>
>> Thank you all for your advice and good info. The node has died a couple
>> of times with out of memory errors. I've restarted each time but it starts
>> re-running compaction and then dies again.
>>
>> Is there a better way to do this?
>> On Apr 18, 2014 6:06 PM, "Steven A Robenalt" 
>> wrote:
>>
>>> That's what I'd be doing, but I wouldn't expect it to run for 3 days
>>> this time. My guess is that whatever was going wrong with the bootstrap
>>> when you had 3 nodes starting at once was interfering with the completion
>>> of the 1 remaining node of those 3. A clean bootstrap of a single node
>>> should complete eventually, and I would think it'll be a lot less than 3
>>> days. Our database is much smaller than yours at the moment, so I can't
>>> really guide you on how long it should take, but I'd think that others on
>>> the list with similar database sizes might be able to give you a better
>>> idea.
>>>
>>> Steve
>>>
>>>
>>>
>>> On Fri, Apr 18, 2014 at 1:43 PM, Phil Burress 
>>> wrote:
>>>
 First, I just stopped 2 of the nodes and left one running. But this
 morning, I stopped that third node, cleared out the data, restarted and let
 it rejoin again. It appears streaming is done (according to netstats),
 right now it appears to be running compaction and building secondary index
 (according to compactionstats). Just sit and wait I guess?


 On Fri, Apr 18, 2014 at 2:23 PM, Steven A Robenalt <
 srobe...@stanford.edu> wrote:

> Looking back through this email chain, it looks like Phil said he
> wasn't using vnodes.
>
> For the record, we are using vnodes since we brought up our first
> cluster, and have not seen any issues with bootstrapping new nodes either
> to replace existing nodes, or to grow/shrink the cluster. We did adhere to
> the caveats that new nodes should not be seed nodes, and that we should
> allow each node to join the cluster completely before making any other
> changes.
>
> Phil, when you dropped to adding just the single node to your cluster,
> did you start over with the newly added node (blowing away the database
> created on the previous startup), or did you shut down the other 2 added
> nodes and leave the remaining one in progress to continue?
>
> Steve
>
>
> On Fri, Apr 18, 2014 at 10:40 AM, Robert Coli wrote:
>
>> On Fri, Apr 18, 2014 at 5:05 AM, Phil Burress <
>> philburress...@gmail.com> wrote:
>>
>>> nodetool netstats shows 84 files. They are all at 100%. Nothing
>>> showing in Pending or Active for Read Repair Stats.
>>>
>>> I'm assuming this means it's done. But it still shows "JOINING". Is
>>> there an undocumented step I'm missing here? This whole process seems
>>> broken to me.
>>>
>>
>> Lately it seems like a lot more people than usual are :
>>
>> 1) using vnodes
>> 2) unable to bootstrap new nodes
>>
>> If I were you, I would likely file a JIRA detailing your negative
>> experience with this core functionality.
>>
>> =Rob
>>
>>
>>
>
>
>
> --
> Steve Robenalt
> Software Architect
>  HighWire | Stanford University
> 425 Broadway St, Redwood City, CA 94063
>
> srobe...@stanford.edu
> http://highwire.stanford.edu
>
>
>
>
>
>

>>>
>>>
>>> --
>>> Steve Robenalt
>>> Software Architect
>>> HighWire | Stanford University
>>> 425 Broadway St, Redwood City, CA 94063
>>>
>>> srobe...@stanford.edu
>>> http://highwire.stanford.edu
>>>
>>>
>>>
>>>
>>>
>>>
>


Re: Recommended Approach for Config Changes

2014-04-25 Thread Chris Lohfink
Yes.

Some changes can take effect without a restart (e.g. compaction throughput and
other settings adjustable via JMX). There are also config changes you can't
really make without a big to-do, like switching the snitch.
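
For instance, compaction throughput is one of the settings that can be changed
on a live node (a sketch; the value is in MB/s and purely illustrative):

```shell
# Adjust compaction throughput at runtime via the JMX-backed nodetool command;
# no restart is required for this kind of setting.
nodetool setcompactionthroughput 16
```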

---
Chris




Re: Bootstrap Timing

2014-04-25 Thread James Rothering
What version of C* is this?




Re: Bootstrap Timing

2014-04-25 Thread Phil Burress
Cassandra 2.0.6




Re: Bootstrap Timing

2014-04-25 Thread Steven A Robenalt
Interesting. I did our 2.0.3 -> 2.0.5 upgrade by bootstrapping/joining each
node into our cluster, one at a time, then retiring the old nodes one at a
time. Maybe something specific to the 2.0.6 release?

Good to hear that you've gotten through it anyway.

Steve




Re: Recommended Approach for Config Changes

2014-04-25 Thread Phil Burress
Thanks. I made a change to a single node and it took almost an hour to
rejoin the cluster (go from DN to UP in nodetool status). The cluster is
pretty much idle right now and has a very small dataset. Is that normal?




Re: Recommended Approach for Config Changes

2014-04-25 Thread Tyler Hobbs

Not unless it had to replay a lot of commitlogs on startup.  If you look at
your logs and see that that's the case, you may want to run 'nodetool
drain' before stopping the node.
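
A typical drain-then-restart sequence might look like this (a sketch; service
names and config paths vary by install):

```shell
nodetool drain                # flush memtables; node stops accepting writes
sudo service cassandra stop   # stop the instance
# edit conf/cassandra.yaml as needed
sudo service cassandra start  # restart; little or no commitlog replay remains
```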


-- 
Tyler Hobbs
DataStax 


Re: Recommended Approach for Config Changes

2014-04-25 Thread Jon Haddad
You might want to take a peek at what’s happening in the process via strace -p 
or tcpdump.  I can’t remember ever waiting an hour for a node to rejoin.







Hadoop, CqlInputFormat, datastax java driver and uppercase in Keyspace names

2014-04-25 Thread Maxime Nay
Hi,

We have a keyspace starting with an upper-case character: Visitors.
We are trying to run a MapReduce job on one of the column families of this
keyspace.

To specify the keyspace it seems we have to use:
org.apache.cassandra.hadoop.ConfigHelper.setInputColumnFamily(conf, keyspace, columnFamily);


If we do:
ConfigHelper.setInputColumnFamily(conf, "Visitors", columnFamily); we get:

com.datastax.driver.core.exceptions.InvalidQueryException: Keyspace
'visitors' does not exist
at
com.datastax.driver.core.exceptions.InvalidQueryException.copy(InvalidQueryException.java:35)
at
com.datastax.driver.core.DefaultResultSetFuture.extractCauseFromExecutionException(DefaultResultSetFuture.java:256)
at
com.datastax.driver.core.SessionManager.setKeyspace(SessionManager.java:335)

...

And if we do:
ConfigHelper.setInputColumnFamily(conf, "\"Visitors\"", columnFamily); we
get:
Exception in thread "main" java.lang.RuntimeException:
InvalidRequestException(why:No such keyspace: "Visitors")
at
org.apache.cassandra.hadoop.AbstractColumnFamilyInputFormat.getRangeMap(AbstractColumnFamilyInputFormat.java:339)
at
org.apache.cassandra.hadoop.AbstractColumnFamilyInputFormat.getSplits(AbstractColumnFamilyInputFormat.java:125)
at org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:962)
at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:979)
...

This is working just fine if the keyspace is lowercase.
And it was working just fine with Cassandra 2.0.6. But with Cassandra
2.0.7, and the addition of datastax's java driver in the dependencies, I am
getting this error.

Any idea how I could fix this?

Thanks!
Maxime
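
For background, CQL folds unquoted identifiers to lower case while quoting
preserves case, which is why the two layers can disagree on the name; a sketch
of the difference (replication settings are illustrative):

```sql
-- Unquoted: CQL lower-cases the name, creating keyspace "visitors"
CREATE KEYSPACE visitors
  WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};

-- Quoted: the mixed-case name "Visitors" is preserved
CREATE KEYSPACE "Visitors"
  WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};
```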


Re: Select with filtering

2014-04-25 Thread Mikhail Mazursky
Hello Paco,

thanks for response.

IMHO this is an unimplemented feature in this case. Instead of fetching the
whole wide row by partition key and filtering it client-side, the filtering
could be done server-side. In my particular case that would be more efficient
than adding a secondary index. Perhaps, when there is a secondary index and the
whole partition key is specified, C* could use statistics about the data to
decide which approach makes the query more selective.

What do the core developers think? Should I file an issue in JIRA?





Re: Select with filtering

2014-04-25 Thread Peter Lin

Other people have expressed an interest, and there's an existing JIRA ticket
for this type of feature.

Unfortunately it hasn't gotten much traction and the tickets are basically dead.

Sent from my iPhone



: Read a negative frame size (-2113929216)!

2014-04-25 Thread Vivek Mishra
This is what I am getting with Cassandra 2.0.7 with Thrift.


Caused by: org.apache.thrift.transport.TTransportException: Read a negative
frame size (-2113929216)!
at
org.apache.thrift.transport.TFramedTransport.readFrame(TFramedTransport.java:133)
at
org.apache.thrift.transport.TFramedTransport.read(TFramedTransport.java:101)
at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
at
org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:362)
at
org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:284)
at
org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:191)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)

Any pointer/suggestions?

-Vivek
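
One hedged observation: -2113929216 is 0x82000000 read as a signed big-endian
32-bit integer, and 0x82 happens to be the first byte of a CQL native-protocol
v2 response, a pattern consistent with a Thrift client reading native-protocol
bytes (for example, a port/protocol mismatch). A quick check:

```python
import struct

# Interpret the first four bytes of a CQL native-protocol v2 response
# (which begin with 0x82) as a Thrift frame length: it comes out as
# exactly the negative frame size reported in the stack trace.
frame_len = struct.unpack('>i', b'\x82\x00\x00\x00')[0]
print(frame_len)  # -2113929216
```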


Re: : Read a negative frame size (-2113929216)!

2014-04-25 Thread Chris Lohfink
Did you send an enormous write or batch write and it wrapped?  Or is your 
client trying to use non-framed transport?

Chris




Re: : Read a negative frame size (-2113929216)!

2014-04-25 Thread Vivek Mishra
It's a simple CQL3 query to create a keyspace.

-Vivek




Re: : Read a negative frame size (-2113929216)!

2014-04-25 Thread Chris Lohfink
What client are you using?




Re: : Read a negative frame size (-2113929216)!

2014-04-25 Thread Vivek Mishra
datastax java driver 2.0.1




On Sat, Apr 26, 2014 at 1:35 AM, Chris Lohfink wrote:

> what client are you using?
>


Re: : Read a negative frame size (-2113929216)!

2014-04-25 Thread Alex Popescu
Can you share the relevant code snippet that leads to this exception?


On Fri, Apr 25, 2014 at 4:47 PM, Vivek Mishra  wrote:

> datastax java driver 2.0.1
>


-- 

:- a)


Alex Popescu
Sen. Product Manager @ DataStax
@al3xandru


Re: : Read a negative frame size (-2113929216)!

2014-04-25 Thread Benedict Elliott Smith
Vivek,

The error you are seeing is a thrift error, but you say you are using the
Java driver which does not operate over thrift: are you perhaps trying to
connect the datastax driver to the thrift protocol port? The two protocols
are not compatible, you must connect to the native_transport_port (by
default 9042)
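The specific value in the error message is consistent with this diagnosis. Interpreted as four big-endian bytes, -2113929216 is `0x82 0x00 0x00 0x00`, and 0x82 is the header byte of a CQL native protocol v2 response (version 2 with the response direction bit set). A Thrift TFramedTransport reading its 4-byte frame length from a native-protocol reply would see exactly this "negative frame size" — so one of the two clients is talking to the other protocol's port. A small sketch of the decode (not the driver's actual code):

```java
import java.nio.ByteBuffer;

public class FrameSizeDecode {
    public static void main(String[] args) {
        // First four bytes of a CQL native protocol v2 response:
        // 0x82 = protocol version 2 with the response direction bit set.
        byte[] header = {(byte) 0x82, 0x00, 0x00, 0x00};

        // TFramedTransport reads these four bytes as a big-endian frame length.
        int frameSize = ByteBuffer.wrap(header).getInt();

        System.out.println(frameSize); // prints -2113929216, the value in the error
    }
}
```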


On 26 April 2014 00:47, Vivek Mishra  wrote:

> datastax java driver 2.0.1


Re: : Read a negative frame size (-2113929216)!

2014-04-25 Thread Vivek Mishra
// host/port here must point at the Thrift rpc_port (9160 by default)
TSocket socket = new TSocket(host, Integer.parseInt(port));
TTransport transport = new TFramedTransport(socket);
TProtocol protocol = new TBinaryProtocol(transport, true, true);
cassandra_client = new Cassandra.Client(protocol);
transport.open();

cassandra_client.execute_cql3_query(
        ByteBuffer.wrap(queryBuilder.toString().getBytes(Constants.CHARSET_UTF8)),
        Compression.NONE,
        ConsistencyLevel.ONE);
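For comparison, a minimal connection-configuration sketch of the same statement over the DataStax Java driver, which must target the native transport port (9042 by default), not the Thrift rpc_port (9160 by default). The contact point and keyspace name are placeholders:

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

// Connect over the CQL native protocol (native_transport_port, 9042 by default),
// not the Thrift rpc_port (9160 by default).
Cluster cluster = Cluster.builder()
        .addContactPoint("127.0.0.1")
        .withPort(9042)
        .build();
Session session = cluster.connect();
session.execute("CREATE KEYSPACE IF NOT EXISTS ks "
        + "WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}");
cluster.close();
```

Keeping the two clients on their respective ports is what matters; both can coexist in one JVM.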



On Sat, Apr 26, 2014 at 5:19 AM, Alex Popescu  wrote:

> Can you share the relevant code snippet that leads to this exception?
>


Re: : Read a negative frame size (-2113929216)!

2014-04-25 Thread Vivek Mishra
Yes, I know. But I am not sure why it is failing: with only the Thrift jar and
cassandra-thrift on the classpath it does not fail, but as soon as I add the
DataStax jar to the classpath it starts failing. The point is that even with
both on the classpath, switching between Thrift and the DataStax driver should work.

-Vivek




Re: : Read a negative frame size (-2113929216)!

2014-04-25 Thread Vivek Mishra
Just to add: it works fine with Cassandra 1.x and the DataStax driver 1.x.

-Vivek

