Thrift frame settings not being picked up

2012-11-07 Thread Dean Pullen

Hi all,

I'm getting some frame-size issues, similar to this:

11:16:06.456 WARN  m.p.c.connection.HConnectionManager - Exception:
me.prettyprint.hector.api.exceptions.HectorTransportException: 
org.apache.thrift.transport.TTransportException: Frame size (19822670) 
larger than max length (16384000)!
at 
me.prettyprint.cassandra.service.ExceptionsTranslatorImpl.translate(ExceptionsTranslatorImpl.java:33) 
~[hector-core-1.0-5.jar:na]



However, I've updated the cassandra.yaml to read:
thrift_framed_transport_size_in_mb: 25
thrift_max_message_length_in_mb: 30


I'm not sure why it's still stuck at 16MB. I've obviously restarted the 
node etc., and even cleared out any pre-existing data.



For info, I've tried updating everything to the latest versions:

cassandra 1.1.6
thrift 0.9.0
hector 1.0-5
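
One thing worth checking: 16384000 bytes matches the default maximum frame
length of Thrift's client-side TFramedTransport, so the limit may be enforced
by the client rather than by cassandra.yaml. A minimal sketch of raising it on
a raw Thrift connection (host, port, and the 25MB limit are illustrative;
Hector would need its equivalent client-side setting):

import org.apache.cassandra.thrift.Cassandra;
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.transport.TFramedTransport;
import org.apache.thrift.transport.TSocket;

public class FrameSizeCheck {
    public static void main(String[] args) throws Exception {
        TSocket socket = new TSocket("127.0.0.1", 9160);
        // TFramedTransport caps incoming frames at 16384000 bytes unless a
        // larger limit is passed explicitly.
        TFramedTransport transport = new TFramedTransport(socket, 25 * 1024 * 1024);
        Cassandra.Client client = new Cassandra.Client(new TBinaryProtocol(transport));
        transport.open();
        System.out.println(client.describe_version());
        transport.close();
    }
}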


Many thanks,

Dean


Re: Strange delay in query

2012-11-07 Thread André Cruz
On Nov 7, 2012, at 2:12 AM, Chuan-Heng Hsiao  wrote:

> I assume you are using cassandra-cli and connecting to some specific node.
> 
> You can check the following steps:
> 
> 1. Can you still reproduce this issue? (If not, it is maybe a system/node issue.)

Yes. I can reproduce this issue on all 3 nodes. Also, I have a replication 
factor of 3.


> 2. What's the result when you query without a limit?


This row has 600k columns. I issued a count, and after some 10 seconds got:

[disco@Disco] count NamespaceRevision[3cd88d97-ffde-44ca-8ae9-5336caaebc4e];
609054 columns


> 3. What's the result after doing nodetool repair -pr on that
> particular column family and that node?

I already issued a "nodetool repair" on all nodes, and nothing changed. Would 
your command be any different?


> btw, there seems to be some minor bug in the 1.1.5 cassandra-cli (but
> not in 1.1.6).

This error also happens on my application that uses pycassa, so I don't think 
this is the same bug.
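
For what it's worth, the count can also be reproduced from a Java client to
take both tools out of the picture. A minimal Hector sketch; the keyspace name
(Disco), UUID row keys, and string column names are assumptions read off the
cli session above:

import java.util.UUID;
import me.prettyprint.cassandra.serializers.StringSerializer;
import me.prettyprint.cassandra.serializers.UUIDSerializer;
import me.prettyprint.hector.api.Cluster;
import me.prettyprint.hector.api.Keyspace;
import me.prettyprint.hector.api.factory.HFactory;

public class CountCheck {
    public static void main(String[] args) {
        Cluster cluster = HFactory.getOrCreateCluster("cluster", "127.0.0.1:9160");
        Keyspace ks = HFactory.createKeyspace("Disco", cluster);
        // Count every column in the row: full name range, no practical cap.
        int count = HFactory.createCountQuery(ks, UUIDSerializer.get(), StringSerializer.get())
                .setColumnFamily("NamespaceRevision")
                .setKey(UUID.fromString("3cd88d97-ffde-44ca-8ae9-5336caaebc4e"))
                .setRange(null, null, Integer.MAX_VALUE)
                .execute().get();
        System.out.println(count + " columns");
    }
}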


Thanks for the help!

André

Re: Questions around the heap

2012-11-07 Thread Bryan
What are your bloom filter settings on your CFs? Maybe look here: 
http://www.datastax.com/docs/1.1/operations/tuning#tuning-bloomfilters
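
For the heap-monitoring question quoted below, one simple option is to read the
JVM's standard MemoryMXBean through Cassandra's JMX port. A minimal sketch,
assuming the default JMX port 7199 on localhost (nodetool info reports similar
heap numbers from the command line):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;
import javax.management.MBeanServerConnection;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class HeapProbe {
    public static void main(String[] args) throws Exception {
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://127.0.0.1:7199/jmxrmi");
        JMXConnector jmxc = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection mbsc = jmxc.getMBeanServerConnection();
            // Proxy the remote JVM's built-in memory MBean.
            MemoryMXBean memory = ManagementFactory.newPlatformMXBeanProxy(
                    mbsc, ManagementFactory.MEMORY_MXBEAN_NAME, MemoryMXBean.class);
            MemoryUsage heap = memory.getHeapMemoryUsage();
            System.out.printf("heap used: %d MB, max: %d MB%n",
                    heap.getUsed() >> 20, heap.getMax() >> 20);
        } finally {
            jmxc.close();
        }
    }
}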



On Nov 7, 2012, at 4:56 AM, Alain RODRIGUEZ wrote:

> Hi,
> 
> We just had an issue in production that we finally solved by upgrading 
> hardware and increasing the heap.
> 
> Now we have 3 xLarge servers from AWS (15G RAM, 4 CPU - 8 cores). We added 
> them and then removed the old ones.
> 
> With the full default configuration, the 0.75 threshold of the 4G heap was 
> being reached continuously, so I had to increase the heap to 8G:
> 
> Memtable  : 2G (manually configured)
> Key cache : 0.1G (min(5% of heap (in MB), 100MB))
> System : 1G (more or less, from the DataStax docs)
> 
> It should use about 3G, but it actually uses between 4 and 6G.
> 
> So here are my questions:
> 
> How can we know how the heap is being used, and monitor it?
> Why is that much memory being used in the heap of my new servers?
> 
> All configurations not specified are the defaults from Cassandra 1.1.2.
> 
> Here is what happened to us before, and why we changed our hardware. If you 
> have any clue about what happened, we would be glad to learn and maybe go 
> back to our old hardware.
> 
> ---------- User experience ----------
> 
> We had a Cassandra 1.1.2 two-node cluster with RF=2 and CL.ONE (reads and 
> writes) running on 2 m1.Large AWS instances (7.5G RAM, 2 CPU - 4 cores, 
> dedicated to Cassandra only).
> 
> cassandra.yaml was configured with the 1.1.2 default options, and in 
> cassandra-env.sh I configured a 4G heap with a 200M "new size".
> 
> This is the heap that was supposed to be used:
> 
> Memtable  : 1.4G (1/3 of the heap)
> Key cache : 0.1G (min(5% of heap (in MB), 100MB))
> System : 1G (more or less, from the DataStax docs)
> 
> So in theory we are around 2.5G max, out of 3G usable (the 0.75 threshold of 
> the heap before flushing memtables because of pressure).
> 
> I thought this was OK given the DataStax documentation:
> 
> "Regardless of how much RAM your hardware has, you should keep the JVM heap 
> size constrained by the following formula and allow the operating system's 
> file cache to do the rest:
> (memtable_total_space_in_mb) + 1GB + (cache_size_estimate)"
> 
> After adding a third node and changing the RF from 2 to 3 (to allow using 
> CL.QUORUM and still be able to restart a node whenever we want), things went 
> really bad, even if I still don't get how any of these operations could 
> possibly affect the heap needed.
> 
> All 3 nodes reached the 0.75 heap threshold (I tried to increase it to 
> 0.85, but that was still reached), and they never came down. So my cluster 
> started flushing a lot, and the load increased because of unceasing 
> compactions. This unexpected load produced latency that broke our 
> service for a while. Even with the service down, Cassandra was unable to 
> recover.
> 



documentation on PlayOrm released

2012-11-07 Thread Hiller, Dean
The first set of documentation on PlayOrm is now released.  It is also still 
growing as we have a dedicated person working on more documentation.  Check it 
out when you have a chance.

Later,
Dean


Re: logging servers? any interesting in one for cassandra?

2012-11-07 Thread Brian O'Neill

Thanks Dean.  We'll definitely take a look.  (probably in January)

-brian

---
Brian O'Neill
Lead Architect, Software Development
Health Market Science
The Science of Better Results
2700 Horizon Drive • King of Prussia, PA • 19406
M: 215.588.6024 • @boneill42   •
healthmarketscience.com


On 11/6/12 11:19 AM, "Hiller, Dean"  wrote:

>Sure, in our playing around, we have an awesome logback configuration for
>development time only that shows warnings and severes in red in Eclipse and
>lets you click on every single log line, taking you right to the code that
>logged it… (thought you might enjoy it)...
>
>https://github.com/deanhiller/playorm/blob/master/input/javasrc/logback.xml
>
>
>The java appender is here (called CassandraAppender):
>https://github.com/deanhiller/playorm/tree/master/input/javasrc/com/alvazan/play/logging
>
>
>The AsyncAppender there is different than logback's in that it allows
>bursting, but once it reaches the limit it essentially becomes synchronous
>again, which allows us to not drop logs like logback's does while still
>allowing bursts of performance.
>
>The CircularBufferAppender is an in-memory buffer that flushes all logs of
>level X and above to a child appender when a warning or severe happens,
>where X is configurable.
>
>We have only tested out the CassandraAppender at this point.  Right now
>you have to call CassandraAppender.setFactory to set the
>NoSqlEntityManager factory.  It creates LogEvent rows as well
>as an index on the session, and partitions by the first two characters of
>the web session id so there is an index per partition.  This allows us to
>look at a single web session of a user.  The only thing I don't like
>is that we have to do a read when updating the index to be able to delete old
>values in the index (ick), but I couldn't figure any other way around that.
>
>Also, if you have high event rates, there is an MDCLevelFilter, so you can
>tag the MDC with something like user=__program__ and ignore all logs for
>that user unless they are warning logs, which we use to keep the logs from
>getting huge.
>
>Later,
>Dean
>
>
>On 11/6/12 6:32 AM, "Brian O'Neill"  wrote:
>
>>Nice Dean…
>>
>>I'm not so sure we would run the server, but we'd definitely be interested
>>in the logback adaptor.
>>(We would then just access the data via Virgil (over REST), with a thin
>>javascript UI)
>>
>>Let me/us know if you end up putting it out there.  We intend to centralize
>>logging sometime over the next few months.
>>
>>-brian
>>
>>---
>>Brian O'Neill
>>Lead Architect, Software Development
>>Health Market Science
>>The Science of Better Results
>>2700 Horizon Drive • King of Prussia, PA • 19406
>>M: 215.588.6024 • @boneill42   •
>>healthmarketscience.com
>>
>>
>>On 11/1/12 10:33 AM, "Hiller, Dean"  wrote:
>>
>>>2 questions
>>>
>>> 1.  What are people using for logging servers for their web tier
>>>logging?
>>> 2.  Would anyone be interested in a new logging server (any programming
>>>language) for the web tier to log to your existing cassandra (it uses up disk
>>>space in proportion to the number of web servers and just has a rolling
>>>window of logs along with a window of threshold dumps)?
>>>
>>>Context for second question: I like fewer systems since it means less
>>>maintenance/operations cost, and so yesterday I quickly wrote up some
>>>logback appenders which support (SLF4J/log4j/jdk/commons libraries) and send
>>>the logs from our client tier into cassandra.  It is simply a rolling
>>>window of logs so the space used in cassandra is proportional to the
>>>number of web servers I have (currently, I have 4 web servers).  I am
>>>also thinking about adding warning type logging such that on warning,
>>>the
>>

can't start cqlsh on new Amazon node

2012-11-07 Thread Tamar Fraenkel
Hi!
I installed a new cluster using the DataStax AMI with --release 1.0.11, so I
have Cassandra 1.0.11 installed.
The nodes have python-cql 1.0.10-1 and Python 2.6.

The cluster works well, BUT when I try to connect with cqlsh I get:

cqlsh --debug --cqlversion=2 localhost 9160
Using CQL driver: <module 'cql' from '/usr/lib/pymodules/python2.6/cql/__init__.pyc'>
Using thrift lib: <module 'thrift' from '/usr/lib/pymodules/python2.6/thrift/__init__.pyc'>
Connection error: Invalid method name: 'set_cql_version'

This is the same if I choose cqlversion=3.

Any idea how to solve this?

Thanks,

Tamar Fraenkel
Senior Software Engineer, TOK Media


ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956

Re: Questions around the heap

2012-11-07 Thread Alain RODRIGUEZ
I have to say that I have no idea how to tune them.

I discovered the existence of bloom filters a few months ago, and even after
reading http://wiki.apache.org/cassandra/ArchitectureOverview#line-132 and
http://spyced.blogspot.com/2009/01/all-you-ever-wanted-to-know-about.html I
am not sure what the impacts (positive and negative) of tuning the bloom
filters would be.

From my reading I understand that with a bloom_filter_fp_chance > 0 I
introduce a chance of getting a false positive from an SSTable, eventually
inducing more latency while answering queries but using less memory. Is
that right?
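
For reference, the standard Bloom filter sizing formula makes that tradeoff
concrete: for a target false-positive probability p, a filter needs roughly

    m/n = -ln(p) / (ln 2)^2   bits per element

so p = 0.01 costs about 9.6 bits per element while p = 0.1 costs about 4.8;
tolerating ten times more false positives roughly halves the filter memory.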

"What are your bloom filter settings on your CFs?"

They are at the default (0, which seems to mean fully enabled:
http://www.datastax.com/docs/1.1/configuration/storage_configuration#bloom-filter-fp-chance)

Can't they grow indefinitely, or is there a threshold?

Is there a way to "explore" the heap, to be sure that bloom filters are
causing this intensive memory use, before tuning them?

From http://www.datastax.com/docs/1.1/operations/tuning#tuning-bloomfilters :

"For example, to run an analytics application that heavily scans a
particular column family, you would want to inhibit or disable the Bloom
filter on the column family by setting it high"

Why would I do that? Won't it slow down the analytics?
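
On the "explore the heap" question above: nodetool cfstats prints a per-CF
"Bloom Filter Space Used" figure, and the same number should be readable over
JMX. A minimal sketch, assuming the per-column-family MBeans expose a
BloomFilterDiskSpaceUsed attribute (as they appear to in 1.1) and the default
JMX port:

import java.util.Set;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class BloomFilterSizes {
    public static void main(String[] args) throws Exception {
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://127.0.0.1:7199/jmxrmi");
        JMXConnector jmxc = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection mbsc = jmxc.getMBeanServerConnection();
            // One MBean per column family; sum the reported filter sizes.
            Set<ObjectName> names = mbsc.queryNames(new ObjectName(
                    "org.apache.cassandra.db:type=ColumnFamilies,*"), null);
            long total = 0;
            for (ObjectName name : names) {
                long size = (Long) mbsc.getAttribute(name, "BloomFilterDiskSpaceUsed");
                System.out.println(name.getKeyProperty("columnfamily") + " = " + size + " bytes");
                total += size;
            }
            System.out.println("total bloom filter space: " + total + " bytes");
        } finally {
            jmxc.close();
        }
    }
}

A heap dump inspected with a tool like jhat or MAT is the more direct way to
attribute heap usage, but the per-CF numbers above are usually enough to see
whether bloom filters dominate.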

Alain


2012/11/7 Bryan 

> What are your bloom filter settings on your CFs? Maybe look here:
> http://www.datastax.com/docs/1.1/operations/tuning#tuning-bloomfilters
>
>
>
> On Nov 7, 2012, at 4:56 AM, Alain RODRIGUEZ wrote:
>
> Hi,
>
> We just had an issue in production that we finally solved by upgrading
> hardware and increasing the heap.
>
> Now we have 3 xLarge servers from AWS (15G RAM, 4 CPU - 8 cores). We added
> them and then removed the old ones.
>
> With the full default configuration, the 0.75 threshold of the 4G heap was
> being reached continuously, so I had to increase the heap to 8G:
>
> Memtable  : 2G (manually configured)
> Key cache : 0.1G (min(5% of heap (in MB), 100MB))
> System : 1G (more or less, from the DataStax docs)
>
> It should use about 3G, but it actually uses between 4 and 6G.
>
> So here are my questions:
>
> How can we know how the heap is being used, and monitor it?
> Why is that much memory being used in the heap of my new servers?
>
> All configurations not specified are the defaults from Cassandra 1.1.2.
>
> Here is what happened to us before, and why we changed our hardware. If you
> have any clue about what happened, we would be glad to learn and maybe go
> back to our old hardware.
>
> ---------- User experience ----------
>
> We had a Cassandra 1.1.2 two-node cluster with RF=2 and CL.ONE (reads and
> writes) running on 2 m1.Large AWS instances (7.5G RAM, 2 CPU - 4 cores,
> dedicated to Cassandra only).
>
> cassandra.yaml was configured with the 1.1.2 default options, and in
> cassandra-env.sh I configured a 4G heap with a 200M "new size".
>
> This is the heap that was supposed to be used:
>
> Memtable  : 1.4G (1/3 of the heap)
> Key cache : 0.1G (min(5% of heap (in MB), 100MB))
> System : 1G (more or less, from the DataStax docs)
>
> So in theory we are around 2.5G max, out of 3G usable (the 0.75 threshold of
> the heap before flushing memtables because of pressure).
>
> I thought this was OK given the DataStax documentation:
>
> "Regardless of how much RAM your hardware has, you should keep the JVM
> heap size constrained by the following formula and allow the operating
> system's file cache to do the rest:
> (memtable_total_space_in_mb) + 1GB + (cache_size_estimate)"
>
> After adding a third node and changing the RF from 2 to 3 (to allow using
> CL.QUORUM and still be able to restart a node whenever we want), things
> went really bad, even if I still don't get how any of these operations
> could possibly affect the heap needed.
>
> All 3 nodes reached the 0.75 heap threshold (I tried to increase it to
> 0.85, but that was still reached), and they never came down. So my cluster
> started flushing a lot, and the load increased because of unceasing
> compactions. This unexpected load produced latency that broke our service
> for a while. Even with the service down, Cassandra was unable to recover.
>
>
>


RE: documentation on PlayOrm released

2012-11-07 Thread Huang, Roger
Dean,
What's the URL?
-Roger


-Original Message-
From: Hiller, Dean [mailto:dean.hil...@nrel.gov] 
Sent: Wednesday, November 07, 2012 7:43 AM
To: user@cassandra.apache.org
Subject: documentation on PlayOrm released

The first set of documentation on PlayOrm is now released.  It is also still 
growing as we have a dedicated person working on more documentation.  Check it 
out when you have a chance.

Later,
Dean


Re: documentation on PlayOrm released

2012-11-07 Thread Hiller, Dean
My bad.  It is on the github PlayOrm wiki.  The specific link is

https://github.com/deanhiller/playorm/wiki


Later,
Dean



Re: Questions around the heap

2012-11-07 Thread Hiller, Dean
+1, I am interested in this answer as well.

From: Alain RODRIGUEZ <arodr...@gmail.com>
Reply-To: "user@cassandra.apache.org" <user@cassandra.apache.org>
Date: Wednesday, November 7, 2012 9:45 AM
To: "user@cassandra.apache.org" <user@cassandra.apache.org>
Subject: Re: Questions around the heap

"For example, to run an analytics application that heavily scans a particular 
column family, you would want to inhibit or disable the Bloom filter on the 
column family by setting it high"


Define an MBean using the Collectd JMX plugin

2012-11-07 Thread Eugen Paraschiv
Hi,
I'm in the process of monitoring Cassandra via CollectD, and I'm running
into some problems with a particular MBean definition in collectd:


ObjectName "org.apache.cassandra.concurrent:type=ROW-READ-STAGE"
InstancePrefix "cassandra_row_read_stage"

Type "cassandra_stage"
Attribute "ActiveCount"
Attribute "PendingTasks"
Attribute "CompletedTasks"


The problem is:
GenericJMXConfMBean: No MBean matched the ObjectName
org.apache.cassandra.concurrent:type=ROW-READ-STAGE

Is there anything that jumps out with that definition?
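
One quick way to narrow it down is to list the MBeans the node actually
registers, since stage names have changed across Cassandra versions
(ROW-READ-STAGE is a very old name; newer versions register stages such as
ReadStage). A minimal JMX sketch, assuming the default JMX port 7199:

import java.util.Set;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class ListStageMBeans {
    public static void main(String[] args) throws Exception {
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://127.0.0.1:7199/jmxrmi");
        JMXConnector jmxc = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection mbsc = jmxc.getMBeanServerConnection();
            // Wildcard queries print every registered name in each domain,
            // giving the exact type= value for the collectd ObjectName.
            for (String pattern : new String[] {
                    "org.apache.cassandra.concurrent:*",
                    "org.apache.cassandra.request:*" }) {
                Set<ObjectName> names = mbsc.queryNames(new ObjectName(pattern), null);
                for (ObjectName name : names)
                    System.out.println(name);
            }
        } finally {
            jmxc.close();
        }
    }
}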
Any help is appreciated.
Thanks.
Eugen.

-- 
Eugen Paraschiv
Senior Java Programmer, Optaros
Mobile: +40728896170
Blog: www.baeldung.com
Twitter: https://twitter.com/baeldung


Re: Service killed by signal 9

2012-11-07 Thread Tristan Seligmann
On Wed, Nov 7, 2012 at 9:46 PM, Marcelo Elias Del Valle wrote:
> Service killed by signal 9

Signal 9 is SIGKILL. Assuming that you're not killing the process
yourself, I guess the most likely cause of this is the OOM killer. If
you check /var/log/kern.log or dmesg you should see a message
confirming this.
-- 
mithrandi, i Ainil en-Balandor, a faer Ambar


Re: Service killed by signal 9

2012-11-07 Thread Marcelo Elias Del Valle
Yes, indeed:
Nov  7 20:02:44 ip-10-243-15-139 kernel: [ 4992.839419] Out of memory: Kill
process 14183 (jsvc) score 914 or sacrifice child
Nov  7 20:02:44 ip-10-243-15-139 kernel: [ 4992.839439] Killed process
14183 (jsvc) total-vm:1181220kB, anon-rss:539164kB, file-rss:12180kB

Thanks a lot, at least I can move forward now! :D


2012/11/7 Tristan Seligmann 

> On Wed, Nov 7, 2012 at 9:46 PM, Marcelo Elias Del Valle wrote:
> > Service killed by signal 9
>
> Signal 9 is SIGKILL. Assuming that you're not killing the process
> yourself, I guess the most likely cause of this is the OOM killer. If
> you check /var/log/kern.log or dmesg you should see a message
> confirming this.
> --
> mithrandi, i Ainil en-Balandor, a faer Ambar
>



-- 
Marcelo Elias Del Valle
http://mvalle.com - @mvallebr


composite column validation_class question

2012-11-07 Thread Wei Zhu
Hi All,
I am trying to design my schema using composite columns. One thing I am a bit 
confused about is how to define the validation_class for a composite column, 
or is there a way to define it?
For a composite column I might insert different value types depending on the 
column name. For example, I will insert a date for the column "created":

set user[1]['7:1:100:created'] = 1351728000; 

and insert a String for the description:

set user[1]['7:1:100:desc'] = 'my description'; 

I don't see a way to define a validation_class for a composite column. Am I right?
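
As far as I know, at the Thrift level a column family has only a single
default_validation_class (plus optional per-name column metadata), so a common
approach is to leave validation as BytesType and pick the value serializer per
column on the client side. A minimal Hector sketch; the cluster, keyspace, and
column family names are hypothetical, and a comparator of
CompositeType(Int32, Int32, Int32, UTF8) is assumed:

import me.prettyprint.cassandra.serializers.CompositeSerializer;
import me.prettyprint.cassandra.serializers.IntegerSerializer;
import me.prettyprint.cassandra.serializers.LongSerializer;
import me.prettyprint.cassandra.serializers.StringSerializer;
import me.prettyprint.hector.api.Cluster;
import me.prettyprint.hector.api.Keyspace;
import me.prettyprint.hector.api.beans.Composite;
import me.prettyprint.hector.api.factory.HFactory;
import me.prettyprint.hector.api.mutation.Mutator;

public class CompositeValues {
    public static void main(String[] args) {
        Cluster cluster = HFactory.getOrCreateCluster("cluster", "127.0.0.1:9160");
        Keyspace ks = HFactory.createKeyspace("MyKeyspace", cluster);
        Mutator<Integer> m = HFactory.createMutator(ks, IntegerSerializer.get());

        // 7:1:100:created holds a long timestamp.
        Composite created = new Composite();
        created.addComponent(7, IntegerSerializer.get());
        created.addComponent(1, IntegerSerializer.get());
        created.addComponent(100, IntegerSerializer.get());
        created.addComponent("created", StringSerializer.get());
        m.addInsertion(1, "user", HFactory.createColumn(
                created, 1351728000L, CompositeSerializer.get(), LongSerializer.get()));

        // 7:1:100:desc holds a string; a different value serializer is
        // chosen at write time for this column.
        Composite desc = new Composite();
        desc.addComponent(7, IntegerSerializer.get());
        desc.addComponent(1, IntegerSerializer.get());
        desc.addComponent(100, IntegerSerializer.get());
        desc.addComponent("desc", StringSerializer.get());
        m.addInsertion(1, "user", HFactory.createColumn(
                desc, "my description", CompositeSerializer.get(), StringSerializer.get()));

        m.execute();
    }
}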

Thanks.
-Wei

problem encrypting keys and data

2012-11-07 Thread Brian Tarbox
We have a requirement to store our data encrypted.
Our encryption system turns our various strings into byte arrays.  So far
so good.

The problem is that the bytes in our byte arrays are sometimes
negative... but when we look at them in the cassandra-cli (or try
to programmatically retrieve them) the bytes are all positive, so we of
course don't find the expected data.

We have tried Byte encoding and UTF8 encoding without luck.  In looking at
the Byte validator in particular I see nothing that ought to care about the
sign of the bytes, but perhaps I'm missing something.

Any suggestions would be appreciated, thanks.

Brian Tarbox


Re: problem encrypting keys and data

2012-11-07 Thread Andrey Ilinykh
Honestly, I don't understand what encoding you are talking about. Just
write/read data as a byte array. You will read back exactly what you write.
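
A minimal Hector round trip along those lines; the cluster, keyspace, column
family, and key names are made up, and BytesArraySerializer keeps the value out
of any string encoding:

import me.prettyprint.cassandra.serializers.BytesArraySerializer;
import me.prettyprint.cassandra.serializers.StringSerializer;
import me.prettyprint.hector.api.Cluster;
import me.prettyprint.hector.api.Keyspace;
import me.prettyprint.hector.api.factory.HFactory;
import me.prettyprint.hector.api.mutation.Mutator;

public class RawBytesRoundTrip {
    public static void main(String[] args) {
        Cluster cluster = HFactory.getOrCreateCluster("cluster", "127.0.0.1:9160");
        Keyspace ks = HFactory.createKeyspace("MyKeyspace", cluster);
        byte[] ciphertext = { (byte) 0xDE, (byte) 0xAD, 0x42, (byte) 0x90 };

        // Write the encrypted value as raw bytes, no string encoding involved.
        Mutator<String> mutator = HFactory.createMutator(ks, StringSerializer.get());
        mutator.insert("row1", "MyCF", HFactory.createColumn(
                "secret", ciphertext, StringSerializer.get(), BytesArraySerializer.get()));

        // Read it back: the byte values, "negative" ones included, are unchanged.
        byte[] readBack = HFactory.createColumnQuery(ks, StringSerializer.get(),
                        StringSerializer.get(), BytesArraySerializer.get())
                .setColumnFamily("MyCF").setKey("row1").setName("secret")
                .execute().get().getValue();
        System.out.println(java.util.Arrays.equals(ciphertext, readBack));
    }
}

Note that cassandra-cli displays column values as hex, which may be why the
bytes all look "positive" there: hex digits carry no sign.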

Thank you,
  Andrey


On Wed, Nov 7, 2012 at 1:43 PM, Brian Tarbox wrote:

> We have a requirement to store our data encrypted.
> Our encryption system turns our various strings into byte arrays.  So far
> so good.
>
> The problem is that the bytes in our byte arrays are sometimes
> negative... but when we look at them in the cassandra-cli (or try
> to programmatically retrieve them) the bytes are all positive, so we of
> course don't find the expected data.
>
> We have tried Byte encoding and UTF8 encoding without luck.  In looking at
> the Byte validator in particular I see nothing that ought to care about the
> sign of the bytes, but perhaps I'm missing something.
>
> Any suggestions would be appreciated, thanks.
>
> Brian Tarbox
>


Re: How to replace a dead *seed* node while keeping quorum

2012-11-07 Thread Ron Siemens

I have an update on this.  I witnessed the same split-ring problem, this time 
while doing a rolling upgrade from 1.1.4 to 1.1.6, and found an easier 
workaround than modifying configs and restarting: by explicitly specifying the 
same token on the command line using "-Dcassandra.replace_token=" when bringing 
up the new node, the problem wasn't exhibited.  Everything worked smoothly.

Ron

On Oct 10, 2012, at 12:38 PM, Ron Siemens wrote:

> 
> I witnessed the same behavior as reported by Edward and James.
> 
> Removing the host from its own seed list does not solve the problem.  
> Removing it from config of all nodes and restarting each, then restarting the 
> failed node worked.
> 
> Ron
> 
> On Sep 12, 2012, at 4:42 PM, Edward Sargisson wrote:
> 
>> I'm reposting my colleague's reply to Rob to the list (with James' 
>> permission) in case others are interested.
>> 
>> I'll add to James' post below to say I don't believe we saw the message that 
>> that slice of code would have printed.
>> 
>> "
>> Hey Rob,
>> 
>> Ed's AWOL right now and I'm not on u@c.a.o, but I can tell you that when 
>> I removed the downed seed node from its own list of seed nodes in 
>> cassandra.yaml that it didn't join the existing ring nor did it get any 
>> schemas or data from the existing ring; it felt like timeouts were 
>> happening. (IANA Cassandra wizard, so excuse my terminology impedance.)
>> 
>> Changing the machine's hostname and giving it a new IP, it behaved as 
>> expected; joining the ring, syncing both schema and associated data.
>> 
>> Downed node is 1.1.4, the rest of the ring is 1.1.2.
>> 
>> I'm in a situation where I can revert the IP/hostname change and retry 
>> the scenario as needed if you've got any ideas.
>> 
>> HTH,
>> 
>>JAmes"
>> 
>> Cheers,
>> Edward
>> 
>> On 12-09-12 03:53 PM, Rob Coli wrote:
>>> On Tue, Sep 11, 2012 at 4:21 PM, Edward Sargisson
>>>  wrote:
 If the downed node is a seed node then neither of the replace a dead node
 procedures work (-Dcassandra.replace_token and taking initial_token-1). The
 ring remains split.
 [...]
 In other words, if the host name is on the seeds list then it appears that
 the rest of the ring refuses to bootstrap it.
>>> Close, but not exactly...
>>> 
>>> "./src/java/org/apache/cassandra/service/StorageService.java" line 559 of 
>>> 3090
>>> "
>>> if (DatabaseDescriptor.isAutoBootstrap()
>>> &&
>>> DatabaseDescriptor.getSeeds().contains(FBUtilities.getBroadcastAddress())
>>> && !SystemTable.isBootstrapped())
>>> logger_.info("This node will not auto bootstrap because it
>>> is configured to be a seed node.");
>>> "
>>> 
>>> getSeeds asks your seed provider for a list of seeds. If you are using
>>> the SimpleSeedProvider, this basically turns the list from "seeds" in
>>> cassandra.yaml on the local node into a list of hosts.
>>> 
>>> So it isn't that the other nodes have this node in their seed list..
>>> it's that the node you are replacing has itself in its own seed list,
>>> and shouldn't. I understand that it can be tricky in conf management
>>> tools to make seed nodes' seed lists not contain themselves, but I
>>> believe it is currently necessary in this case.
>>> 
>>> FWIW, it's unclear to me (and Aaron Morton, whose curiosity was
>>> apparently equally piqued and is looking into it further..) why
>>> exactly seed nodes shouldn't bootstrap. It's possible that they only
>>> shouldn't bootstrap without being in "hibernate" mode, and that the
>>> code just hasn't been re-written post replace_token/hibernate to say
>>> that it's ok for seed nodes to bootstrap as long as they hibernate...
>>> 
>>> =Rob
>>> 
>> 
>> -- 
>> Edward Sargisson
>> senior java developer
>> Global Relay
>> 
>> edward.sargis...@globalrelay.net
>> 
>> 
>> 866.484.6630 
>> New York | Chicago | Vancouver  |  London  (+44.0800.032.9829)  |  Singapore 
>>  (+65.3158.1301)
>> 
> 



Re: problem encrypting keys and data

2012-11-07 Thread Hiller, Dean
Are you running into the issue of Java not having unsigned bytes at all? 
If so, you should use an int so that you can process an unsigned byte.  Anyways, 
just a thought.
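
A self-contained illustration of that point: the same bit pattern prints
differently depending on whether it is widened as Java's signed byte or masked
into an int.

public class SignedByteDemo {
    public static void main(String[] args) {
        byte b = (byte) 0xE8;          // bit pattern 1110 1000
        System.out.println(b);         // prints -24: Java bytes are signed
        System.out.println(b & 0xFF);  // prints 232: same bits, unsigned view
    }
}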

Dean

From: Brian Tarbox <tar...@cabotresearch.com>
Reply-To: "user@cassandra.apache.org" <user@cassandra.apache.org>
Date: Wednesday, November 7, 2012 3:43 PM
To: "user@cassandra.apache.org" <user@cassandra.apache.org>
Subject: problem encrypting keys and data

We have a requirement to store our data encrypted.
Our encryption system turns our various strings into byte arrays.  So far so 
good.

The problem is that the bytes in our byte arrays are sometimes negative... but 
when we look at them in the cassandra-cli (or try to programmatically retrieve 
them) the bytes are all positive, so we of course don't find the expected data.

We have tried Byte encoding and UTF8 encoding without luck.  In looking at the 
Byte validator in particular I see nothing that ought to care about the sign of 
the bytes, but perhaps I'm missing something.

Any suggestions would be appreciated, thanks.

Brian Tarbox


Re: can't start cqlsh on new Amazon node

2012-11-07 Thread Jason Wee
should it be --cql3 ?
http://www.datastax.com/docs/1.1/dml/using_cql#start-cql3


On Wed, Nov 7, 2012 at 11:16 PM, Tamar Fraenkel  wrote:

> Hi!
> I installed new cluster using DataStax AMI with --release 1.0.11, so I
> have cassandra 1.0.11 installed.
> Nodes have python-cql 1.0.10-1 and python2.6
>
> Cluster works well, BUT when I try to connect to the cqlsh I get:
> cqlsh --debug --cqlversion=2 localhost 9160
> Using CQL driver: <module 'cql' from '/usr/lib/pymodules/python2.6/cql/__init__.pyc'>
> Using thrift lib: <module 'thrift' from '/usr/lib/pymodules/python2.6/thrift/__init__.pyc'>
> Connection error: Invalid method name: 'set_cql_version'
>
> This is the same if I choose cqlversion=3.
>
> Any idea how to solve this?
>
> Thanks,
>
> Tamar Fraenkel
> Senior Software Engineer, TOK Media
>
>
> ta...@tok-media.com
> Tel:   +972 2 6409736
> Mob:  +972 54 8356490
> Fax:   +972 2 5612956
>
>
>
>

get_range_slice gets no rowcache support?

2012-11-07 Thread Manu Zhang
I've asked this question before, and after reading the source code I find
that get_range_slice doesn't consult the row cache before reading from the
memtable and SSTables. I just want to make sure I haven't overlooked
something. If my observation is correct, what's the reasoning here?


Re: can't start cqlsh on new Amazon node

2012-11-07 Thread Tamar Fraenkel
Nope...
Same error:

cqlsh --debug --cql3 localhost 9160
Using CQL driver: <module 'cql' from '/usr/lib/pymodules/python2.6/cql/__init__.pyc'>
Using thrift lib: <module 'thrift' from '/usr/lib/pymodules/python2.6/thrift/__init__.pyc'>
Connection error: Invalid method name: 'set_cql_version'

I believe it is some version mismatch, but this was the DataStax AMI, so I
thought everything should be coordinated, and I am not sure what to check for.

Thanks,

Tamar Fraenkel
Senior Software Engineer, TOK Media


ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956





On Thu, Nov 8, 2012 at 4:56 AM, Jason Wee  wrote:

> should it be --cql3 ?
> http://www.datastax.com/docs/1.1/dml/using_cql#start-cql3
>
>
>
> On Wed, Nov 7, 2012 at 11:16 PM, Tamar Fraenkel wrote:
>
>> Hi!
>> I installed a new cluster using the DataStax AMI with --release 1.0.11, so I
>> have Cassandra 1.0.11 installed.
>> The nodes have python-cql 1.0.10-1 and Python 2.6.
>>
>> The cluster works well, BUT when I try to connect with cqlsh I get:
>> cqlsh --debug --cqlversion=2 localhost 9160
>> Using CQL driver: <module 'cql' from '/usr/lib/pymodules/python2.6/cql/__init__.pyc'>
>> Using thrift lib: <module 'thrift' from '/usr/lib/pymodules/python2.6/thrift/__init__.pyc'>
>> Connection error: Invalid method name: 'set_cql_version'
>>
>> This is the same if I choose cqlversion=3.
>>
>> Any idea how to solve this?
>>
>> Thanks,
>>
>> Tamar Fraenkel
>> Senior Software Engineer, TOK Media
>>
>>
>> ta...@tok-media.com
>> Tel:   +972 2 6409736
>> Mob:  +972 54 8356490
>> Fax:   +972 2 5612956
>>
>>
>>
>>
>