Re: Problems using Thrift API in C

2011-08-01 Thread Aleksandrs Saveljevs
No, at least not at the default logging level. However, we have solved 
the problem by checking out the latest revision of Thrift from the 
official repository, so it seems that it was not Cassandra's problem.


On 07/29/2011 10:13 PM, ruslan usifov wrote:

Do you have any error messages in cassandra log?

2011/7/28 Aleksandrs Saveljevs <aleksandrs.savelj...@zabbix.com>

Dear all,

We are considering using Cassandra for storing gathered data in
Zabbix (see https://support.zabbix.com/browse/ZBXNEXT-844 for more details).
Because Zabbix is written in C, we are considering using Thrift API
in C, too.

However, we are running into problems trying to get even the basic
code to work. Consider the attached source code. This is essentially a
rewrite of the first part of the C++ example given at
http://wiki.apache.org/cassandra/ThriftExamples#C.2B-.2B- . If we
run it under strace, we see that it hangs on the call to recv() when
setting keyspace:

$ strace -s 64 ./test
...
socket(PF_INET, SOCK_STREAM, IPPROTO_IP) = 3
connect(3, {sa_family=AF_INET, sin_port=htons(9160),
sin_addr=inet_addr("127.0.0.1")}, 16) = 0
send(3,

"\0\0\0/\200\1\0\1\0\0\0\fset___keyspace\0\0\0\0\v\0\1\0\0\0\__vmy_keyspace\0",
47, 0) = 47
recv(3, ^C 

If we run the C++ example, it passes this step successfully. Does
anybody know where the problem is? We are using Thrift 0.6.1 and
Cassandra 0.8.1.

Also, what is the current state of Thrift API in C? Can it be
considered stable? Has anybody used it successfully? Any examples?

Thanks,
Aleksandrs




Re: Read latency is over 1 minute on a column family with 400,000 rows

2011-08-01 Thread aaron morton
Having 2056 live SSTables is very odd. Minor compaction should automatically 
reduce that number. What settings for min_compaction_threshold and 
max_compaction_threshold did you use when creating the CF?

You can check them with nodetool getcompactionthreshold. The default is min 4
and max 32. 

You can also set them using nodetool setcompactionthreshold (Note the value is 
not persisted across a restart). 
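
For example, against the column family discussed in this thread (host and
keyspace/CF names as used above, adjust for your cluster):

   bin/nodetool -h localhost getcompactionthreshold MyKeyspace Fingerprint
   bin/nodetool -h localhost setcompactionthreshold MyKeyspace Fingerprint 4 32

The second command restores the 4/32 defaults and, as noted, does not survive
a restart.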

To provoke a minor compaction, run nodetool flush. Minor compactions are 
preferred to major ones, as they keep SSTables within the compaction buckets, 
whereas a major compaction creates a single large file which will not be 
compacted again for a long time.  

Hope that helps.

-
Aaron Morton
Freelance Cassandra Developer
@aaronmorton
http://www.thelastpickle.com

On 1 Aug 2011, at 17:08, Teijo Holzer wrote:

> Looks like a broken node, just restart Cassandra on that node. Might want to 
> wait for the compaction to finish on the other nodes.
> 
> Also, don't forget to JMX gc() manually after the compaction has finished to 
> delete the files on each node.
> 
> On 01/08/11 16:29, myreasoner wrote:
>> On the node that the compaction returned almost immediately:
>> 
>> *woot@n50:~$ /opt/cassandra/bin/nodetool -h localhost compactionstats
>> pending tasks: 66*
>> 
>> However, messages shown on other nodes are:
>> compaction type: Major
>> keyspace: MyKeyspace
>> column family: Fingerprint
>> bytes compacted: 25505066421
>> bytes total: 45573108438
>> compaction progress: 55.97%
>> -
>> pending tasks: 1
>> 
>> 
>> 
> 



Re: Read latency is over 1 minute on a column family with 400,000 rows

2011-08-01 Thread Meler Wojciech
Upgrade to at least 0.8.1, as your version has broken compaction...

Regards
Wojtek Meler

- Reply message -
Od: "myreasoner" 
Data: pon., sie 1, 2011 04:23
Temat: Read latency is over 1 minute on a column family with 400,000 rows
Do: "cassandra-u...@incubator.apache.org" 

Hi, my read latency is really horrible and I can't figure out what went
wrong.  I'm running Cassandra 0.8.0 on a 5-machine cluster.  The Fingerprint
ColumnFamily has 400,000 rows, each row has about 4,000 Super columns, and
each super column has 1 to 4 columns.  One row looks like:

RowKey: 00c26f
=> (super_column=008002c161f008566a4931d6efeab128ef,
 (column=183e9d10-b5f0-11e0-b0f4-0025901867fb, value=,
timestamp=1311510352604000))
=> (super_column=008004c34cafa12e22acbf3c2aab9b15ef,
 (column=e6371bf6-b72c-11e0-b201-0025901867fb, value=,
timestamp=1311646419206000)
 (column=e6371c00-b72c-11e0-b201-0025901867fb, value=,
timestamp=1311646419206000))
=> (super_column=0080097ac5154a96ea8620784ea3b5b56f,
 (column=7691b846-b6fc-11e0-a703-003048f330bb, value=,
timestamp=1311625615955000))
...


On cassandra-cli, doing *get Fingerprint[rowkey][SuperColumnName]* usually
takes over 60 seconds to return, which makes reads almost unusable.  Is
there anything I can tune?

Here are the stats for a column family Fingerprint.
Column Family: Fingerprint
SSTable count: 2056
Space used (live): 16407493
Space used (total): 16407493
Memtable Columns Count: 177451
Memtable Data Size: 119948171
Memtable Switch Count: 366
Read Count: 5
*Read Latency: 74487.252 ms.*
Write Count: 30023
Write Latency: 1.602 ms.
Pending Tasks: 0
Key cache capacity: 20
Key cache size: 8157
Key cache hit rate: 0.0138263555929
Row cache: disabled
Compacted row minimum size: 104
Compacted row maximum size: 315852
Compacted row mean size: 33709






"WIRTUALNA POLSKA" Spolka Akcyjna z siedziba w Gdansku przy ul. Traugutta 115 
C, wpisana do Krajowego Rejestru Sadowego - Rejestru Przedsiebiorcow 
prowadzonego przez Sad Rejonowy Gdansk - Polnoc w Gdansku pod numerem KRS 
068548, o kapitale zakladowym 67.980.024,00 zlotych oplaconym w calosci 
oraz Numerze Identyfikacji Podatkowej 957-07-51-216.


Re: cassandra server disk full

2011-08-01 Thread Ryan King
On Fri, Jul 29, 2011 at 12:02 PM, Chris Burroughs
 wrote:
> On 07/25/2011 01:53 PM, Ryan King wrote:
>> Actually I was wrong: our patch will disable gossip and thrift but
>> leave the process running:
>>
>> https://issues.apache.org/jira/browse/CASSANDRA-2118
>>
>> If people are interested in that I can make sure it's up to date with
>> our latest version.
>
> Thanks Ryan.
>
> /me expresses interest.
>
> Zombie nodes when the file system does something "interesting" are not fun.

In our experience this only gets triggered on hardware failures that
would otherwise seriously degrade the performance or cause lots of
errors.

After the node's traffic coalesces, we get an alert which we can then deal with.

-ryan


Re: Read latency is over 1 minute on a column family with 400,000 rows

2011-08-01 Thread myreasoner
It was set to min 4 / max 32:

Current compaction thresholds for MyKeyspace/Fingerprint:
 min = 4,  max = 32

What could possibly cause cassandra to ignore these settings?



Re: Read latency is over 1 minute on a column family with 400,000 rows

2011-08-01 Thread Jonathan Ellis
Why do you think it's ignoring it?

In the output you pasted it said "I'm currently busy with a compaction
and I have a backlog of 66 more to get to after that."

On Mon, Aug 1, 2011 at 1:51 PM, myreasoner  wrote:
> It was set to min 4 / max 32:
>
> Current compaction thresholds for MyKeyspace/Fingerprint:
>  min = 4,  max = 32
>
> What could possibly cause cassandra to ignore these settings?
>
>



-- 
Jonathan Ellis
Project Chair, Apache Cassandra
co-founder of DataStax, the source for professional Cassandra support
http://www.datastax.com


Re: Read latency is over 1 minute on a column family with 400,000 rows

2011-08-01 Thread myreasoner
All compaction-related settings in the yaml were untouched.  The Fingerprint
column family was populated three days ago and the CPU/disk usage has been
pretty low.  I'd have thought Cassandra would silently start the compaction
thread on my behalf and try to honor the min/max thresholds, rather than
waiting for an explicit major compaction order from nodetool.

Anyway, I did a major compaction on all 5 nodes almost at the same time.  4
of them came back after a few hours, but one of the 5 nodes still has a lot
of pending ones:

cassandra/bin/nodetool -h localhost compactionstats
pending tasks: 76

And the load reported by uptime is very light:
 14:31:44 up 30 days, 22:11,  4 users,  load average: 0.29, 0.58, 0.58

Some reply suggested this is a broken compaction.  I will wait for a few
hours and restart that node if nothing changes.



Re: Read latency is over 1 minute on a column family with 400,000 rows

2011-08-01 Thread Jonathan Ellis
You really need to upgrade from 0.8.0 to fix that.  Restarting won't
help much (you'll get exactly one compaction against a given sstable
before it stops working again).

On Mon, Aug 1, 2011 at 2:34 PM, myreasoner  wrote:
> All compaction related settings in the yaml were untouched.  The fingerprint
> column family has been populated three days ago and the cpu/disk usage were
> pretty low.  I'd think Cassandra will silently start the compaction thread
> on my behalf and try to preserve the min/max thresholds, rather than waiting
> for a major compaction order from nodetool explicitly.
>
> Anyway, I did a major compaction on all 5 nodes almost at the same time.  4
> of them came back after a few hours, but one of the 5 nodes still has a lot
> of pending ones:
>
> cassandra/bin/nodetool -h localhost compactionstats
> pending tasks: 76
>
> And the uptime is very light.
>  14:31:44 up 30 days, 22:11,  4 users,  load average: 0.29, 0.58, 0.58
>
> Some reply suggested this is a broken compaction.  I will wait for a few
> hours and restart that node if nothing changes.
>
>



-- 
Jonathan Ellis
Project Chair, Apache Cassandra
co-founder of DataStax, the source for professional Cassandra support
http://www.datastax.com


Re: Read latency is over 1 minute on a column family with 400,000 rows

2011-08-01 Thread myreasoner
Thanks.  I will upgrade to 0.8.1 then.



Re: Cassandra bulk import confusion

2011-08-01 Thread aaron morton
In case you missed it, fresh off the press:
http://www.datastax.com/dev/blog/bulk-loading
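
(In short, the post describes the new sstableloader utility. A rough sketch of
its use, assuming the SSTables to stream sit in a directory named after the
target keyspace:

   bin/sstableloader /path/to/Test

The post has the details and caveats.)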

Cheers

-
Aaron Morton
Freelance Cassandra Developer
@aaronmorton
http://www.thelastpickle.com

On 30 Jul 2011, at 04:10, Jeff Schmidt wrote:

> Hello:
> 
> I'm relatively new to Cassandra, but I've been searching around, and it looks 
> like Cassandra 0.8.x has improved support for bulk importing of data.  I keep 
> finding references to the json2sstable command, and I've read about that on 
> the Datastax and Apache documentation pages.
> 
> There's a lot of detail here if you want it, otherwise please skip to the 
> end. json2sstable seems to run successfully, but I cannot see the data in the 
> new CF using the CLI.
> 
> My goal is to extract data from various sources, munge it together in some 
> manner, and then bulk load it into Cassandra.  That is as opposed to using 
> Hector to programmatically insert the data.  I'd like to deploy these files 
> to the cloud (Puppet) and then instruct Cassandra to bulk load them, and then 
> inform the application that new data exists.  This is for a periodic content 
> update of certain column families of curated, read-only data that occurs on 
> a monthly basis. I'm thinking of using JMX to signal the application to 
> switch to a new set of CFs and keep running w/o downtime.  At a later time, 
> I'll delete the old CFs.
> 
> I'm using Cassandra 0.8.2 and I'm just playing with this concept.  I created a 
> test CF using the CLI:
> 
> [default@Ingenuity] use Test;
> Authenticated to keyspace: Test
> [default@Test] create column family TestCF with comparator = UTF8Type and 
> column_metadata = [{column_name: nodeId, validation_class: UTF8Type}];
> 28991070-b9f9-11e0--242d50cf1fb5
> Waiting for schema agreement...
> ... schemas agree across the cluster
> [default@Test] update column family TestCF with 
> key_validation_class=UTF8Type; 
> 2af88440-b9f9-11e0--242d50cf1fb5
> Waiting for schema agreement...
> ... schemas agree across the cluster
> [default@Test] set TestCF['SID|123']['nodeId'] = 'ING:001';  
> Value inserted.
> [default@Test] set TestCF['EG|3030']['nodeId'] = 'ING:002';  
> Value inserted.
> [default@Test] set TestCF['EG|3031']['nodeId'] = 'ING:003'; 
> Value inserted.
> [default@Test] list TestCF;
> Using default limit of 100
> ---
> RowKey: EG|3030
> => (column=nodeId, value=ING:002, timestamp=1311954072252000)
> ---
> RowKey: EG|3031
> => (column=nodeId, value=ING:003, timestamp=1311954073631000)
> ---
> RowKey: SID|123
> => (column=nodeId, value=ING:001, timestamp=1311954072249000)
> 
> 3 Rows Returned.
> [default@Test] 
> 
> Now, cassandra.yaml is stock, except I changed it to place the data in a 
> non-default location:
> 
> # directories where Cassandra should store data on disk.
> data_file_directories:
> - /usr/local/ingenuity/isec/cassandra/datastore/data
> 
> # commit log
> commitlog_directory: /usr/local/ingenuity/isec/cassandra/datastore/commitlog
> 
> # saved caches
> saved_caches_directory: 
> /usr/local/ingenuity/isec/cassandra/datastore/saved_caches
> 
> In that data directory:
> 
> [imac:datastore/data/Test] jas% pwd
> /usr/local/ingenuity/isec/cassandra/datastore/data/Test
> [imac:datastore/data/Test] jas% ls
> [imac:datastore/data/Test] jas% 
> 
> There is nothing there.  Perhaps Cassandra has not yet felt the need to write 
> the SSTables.  So, since I need to reference an actual data file with 
> sstable2json, I ran nodetool flush:
> 
> [imac:isec/cassandra/apache-cassandra-0.8.2] jas% bin/nodetool -h localhost 
> flush Test TestCF
> [imac:isec/cassandra/apache-cassandra-0.8.2] jas% 
> 
> Now, I have files!
> 
> [imac:datastore/data/Test] jas% pwd
> /usr/local/ingenuity/isec/cassandra/datastore/data/Test
> [imac:datastore/data/Test] jas% ls
> TestCF-g-1-Data.db    TestCF-g-1-Index.db
> TestCF-g-1-Filter.db  TestCF-g-1-Statistics.db
> [imac:datastore/data/Test] jas% 
> 
> Given that, I'm able to run sstable2json and I can see I'm getting what's in 
> that CF:
> 
> [imac:isec/cassandra/apache-cassandra-0.8.2] jas%  bin/sstable2json 
> /usr/local/ingenuity/isec/cassandra/datastore/data/Test/TestCF-g-1-Data.db > 
> testcf.jason
> [imac:isec/cassandra/apache-cassandra-0.8.2] jas% cat testcf.jason 
> {
> "45477c33303330": [["nodeId","ING:002",1311954072252000]],
> "45477c33303331": [["nodeId","ING:003",1311954073631000]],
> "5349447c313233": [["nodeId","ING:001",1311954072249000]]
> }
> [imac:isec/cassandra/apache-cassandra-0.8.2] jas% 
> 
> Oops, okay, that file extension should be json not jason, but oh well... :)
> 
> Okay, so now I have data in the proper format for importing with 
> json2sstable.  Like I said, I want to import this data into a new CF. Let's 
> call it TestCF2 (in the same keyspace):
> 
> [default@Test] create column family TestCF2 with comparator = UTF8Type and 
> column_metadata = [{column_name: nodeId, validation_class: UTF8Type}];
> 4dcc44b
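
For reference, a hedged sketch of the json2sstable import step being set up
above, with the keyspace, CF name, and paths taken from earlier in the message
(the generation number in the target file name is an assumption):

   bin/json2sstable -K Test -c TestCF2 testcf.json \
       /usr/local/ingenuity/isec/cassandra/datastore/data/Test/TestCF2-g-1-Data.db

In this version Cassandra only discovers SSTables it did not write itself when
it starts, so a node restart (or the sstableloader tool from the post linked
above) is usually needed before the rows show up in the CLI.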

implications of using more keyspaces vs single keyspace?

2011-08-01 Thread Yang
For example, my data consists of "salary" and "office stationery list".

Let's say I use the same replication strategy for them; these 2 data sets have
different key ranges and key distributions.

Then is it better to use separate keyspaces for each of them, or a single one?

The factors I can think of for separate keyspaces:
- have to call set_keyspace() if your calls switch between datasets
- potential to change to a different replication factor in the future

any thoughts?

Thanks a lot
Yang


Schema Disagreement

2011-08-01 Thread Yi Yang
Dear all,

I keep running into schema disagreement problems while trying to create a 
column family like this, using cassandra-cli:

create column family sd
with column_type = 'Super' 
and key_validation_class = 'UUIDType'
and comparator = 'LongType'
and subcomparator = 'UTF8Type'
and column_metadata = [
{
column_name: 'time', 
validation_class : 'LongType'
},{
column_name: 'open', 
validation_class : 'FloatType'
},{
column_name: 'high', 
validation_class : 'FloatType'
},{
column_name: 'low', 
validation_class : 'FloatType'
},{
column_name: 'close', 
validation_class : 'FloatType'
},{
column_name: 'volumn', 
validation_class : 'LongType'
},{
column_name: 'splitopen', 
validation_class : 'FloatType'
},{
column_name: 'splithigh', 
validation_class : 'FloatType'
},{
column_name: 'splitlow', 
validation_class : 'FloatType'
},{
column_name: 'splitclose', 
validation_class : 'FloatType'
},{
column_name: 'splitvolume',
validation_class : 'LongType'
},{
column_name: 'splitclose',
validation_class : 'FloatType'
}
]
;

I've tried erasing everything and restarting Cassandra, but this still happens.
But when I remove the column_metadata section, there is no more disagreement
error.   Do you have any idea why this happens?

Environment: 2 VMs, using the same harddrive, Cassandra 0.8.1, Ubuntu 10.04
This is for testing only.   We'll move to dedicated servers later.

Best regards,
Yi


Re: Schema Disagreement

2011-08-01 Thread Dikang Gu
I thought the schema disagreement problem was already solved in 0.8.1...

One possible solution is to decommission the disagreeing node and rejoin it.
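
A rough sketch of that suggestion (the host below is a placeholder; whether
decommission/rejoin is the right call depends on your ring and data):

   bin/nodetool -h <disagreeing-node-address> decommission

Then clear that node's data directories before restarting it, so it rejoins
and pulls a fresh copy of the schema (this last step is an assumption based on
the suggestion above).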


On Tue, Aug 2, 2011 at 8:01 AM, Yi Yang  wrote:

> Dear all,
>
> I keep running into schema disagreement problems while trying to create
> a column family like this, using cassandra-cli:
>
> create column family sd
>with column_type = 'Super'
>and key_validation_class = 'UUIDType'
>and comparator = 'LongType'
>and subcomparator = 'UTF8Type'
>and column_metadata = [
>{
>column_name: 'time',
>validation_class : 'LongType'
>},{
>column_name: 'open',
>validation_class : 'FloatType'
>},{
>column_name: 'high',
>validation_class : 'FloatType'
>},{
>column_name: 'low',
>validation_class : 'FloatType'
>},{
>column_name: 'close',
>validation_class : 'FloatType'
>},{
>column_name: 'volumn',
>validation_class : 'LongType'
>},{
>column_name: 'splitopen',
>validation_class : 'FloatType'
>},{
>column_name: 'splithigh',
>validation_class : 'FloatType'
>},{
>column_name: 'splitlow',
>validation_class : 'FloatType'
>},{
>column_name: 'splitclose',
>validation_class : 'FloatType'
>},{
>column_name: 'splitvolume',
>validation_class : 'LongType'
>},{
>column_name: 'splitclose',
>validation_class : 'FloatType'
>}
>]
> ;
>
> I've tried to erase everything and restart Cassandra but this still
> happens.   But when I clear the column_metadata section this no more
> disagreement error.   Do you have any idea why this happens?
>
> Environment: 2 VMs, using the same harddrive, Cassandra 0.8.1, Ubuntu 10.04
> This is for testing only.   We'll move to dedicated servers later.
>
> Best regards,
> Yi
>



-- 
Dikang Gu

0086 - 18611140205


Re: implications of using more keyspaces vs single keyspace?

2011-08-01 Thread Edward Capriolo
On Mon, Aug 1, 2011 at 6:08 PM, Yang  wrote:

> for example my data consists of "salary", "office stationery list",
>
> let's say I do use the same replicationStrategy for  them, these 2
> data sets have
> different key ranges, key distributions,
>
> then is it better to use separate keyspaces for each of them? or use a
> single one?
>
> the factors I can think of:
> separate: have to call set_keyspace() if your calls switch between datasets
>potential to change to different replication factor in
> the future
>
> any thoughts?
>
> Thanks a lot
> Yang
>

Ah interesting question.

In the old days, operations like get() took the keyspace as the
first string argument. Now changing keyspace requires calling
setKeyspace(String), which is an extra RPC operation. If you want to interact
with two keyspaces you either need to keep two connection pools open, or you
have to make an RPC call every time you want to change keyspaces. While the
smaller signature for get() is nice, having the extra RPC call is not
good.

However, as you mentioned, you can only apply different replication factors at
the keyspace level. That is nice, especially if you find one column family is
not as important as another. Since a keyspace is a folder, you can also mount
a keyspace on a different physical device.

I still like one column family per keyspace, but having N connection pools
for N keyspaces complicates things.
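
As a concrete illustration (keyspace names made up), every use statement in
cassandra-cli is one of those set_keyspace round trips:

   [default@unknown] use salary_ks;
   Authenticated to keyspace: salary_ks
   [default@salary_ks] use office_ks;
   Authenticated to keyspace: office_ks

A Thrift or Hector client that alternates between two keyspaces pays the same
per-switch cost, which is why keeping a pool per keyspace is the usual
workaround.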


UnavailableException on first time setup

2011-08-01 Thread Mike Stults
I have just started with Cassandra, so maybe this is a simple configuration problem?

From a Java program, with the consistency level set to ANY:
Exception in thread "main" UnavailableException()
at 
org.apache.cassandra.thrift.Cassandra$batch_mutate_result.read(Cassandra.java:19053)
at 
org.apache.cassandra.thrift.Cassandra$Client.recv_batch_mutate(Cassandra.java:1035)
at 
org.apache.cassandra.thrift.Cassandra$Client.batch_mutate(Cassandra.java:1009)

From the command line, doing a simple "set", I get a null return.

I have
- untared 0.8.2
- changed conf/cassandra.yaml file parameters to point not to /var, but to a 
local area ../cassandra_ops
- set listen_address and rpc_address to empty
- started without being root

thanks,

Mike Stults








Re: Secondary index on composite columns?

2011-08-01 Thread Boris Yen
Hi Jonathan,

AFAIK, you might change the internal implementation of the super column family
by using composite columns. Does this mean that maybe secondary indexes
will be supported on super columns in the future? Will you use composite
columns to add more capability to the super column family, or are we still
advised not to use super column families when possible?

Regards
Boris

On Mon, Aug 1, 2011 at 10:25 AM, Jonathan Ellis  wrote:

> Sure, but it's still only useful for equality predicates.
>
> On Sun, Jul 31, 2011 at 8:50 PM, Boris Yen  wrote:
> > Hi,
> > I was wondering if anyone would know if secondary index can be enabled on
> > composite columns?
> > Regards
> > Boris
>
>
>
> --
> Jonathan Ellis
> Project Chair, Apache Cassandra
> co-founder of DataStax, the source for professional Cassandra support
> http://www.datastax.com
>
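
For context, a hedged cassandra-cli sketch of the kind of equality-only query a
secondary index supports (the column family, column, and value here are made
up):

   create column family Users with comparator = UTF8Type
       and column_metadata = [{column_name: state, validation_class: UTF8Type, index_type: KEYS}];
   get Users where state = 'TX';

Queries that use only range predicates on the indexed column are not supported,
which is what the "only useful for equality predicates" remark above refers to.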


cassandra compile

2011-08-01 Thread Donna Li
 

All:

I compile the Cassandra source code with ant. The following error is printed:

 

BUILD FAILED

E:\work\research\cassandra\apache-cassandra-0.6.3-src\apache-cassandra-0.6.3-src\build.xml:170:
taskdef A class needed by class com.thoughtworks.paranamer.ant.ParanamerGeneratorTask
cannot be found: com/thoughtworks/paranamer/generator/ParanamerGenerator

 

How to resolve the problem? Thanks!

 

 

Best Regards

Donna li



Re: Secondary index on composite columns?

2011-08-01 Thread Jonathan Ellis
To the best of my ability to predict the future, we would probably
enhance "native" composite columns with those features, but not expose
them in the old supercolumn API.

So again, if supercolumns work for you, we won't pull the rug out from
under you, but don't start using them expecting them to become
something more advanced.

On Mon, Aug 1, 2011 at 9:05 PM, Boris Yen  wrote:
> Hi Jonathan,
> AFAIK, you might change the internal implementation of super column family
> by using the composite column. Does this mean that maybe the secondary index
> will be supported on super columns in the future? will you use composite
> column to add more capability to super column family or we are still advised
> not to use super column family when possible?
> Regards
> Boris
>
> On Mon, Aug 1, 2011 at 10:25 AM, Jonathan Ellis  wrote:
>>
>> Sure, but it's still only useful for equality predicates.
>>
>> On Sun, Jul 31, 2011 at 8:50 PM, Boris Yen  wrote:
>> > Hi,
>> > I was wondering if anyone would know if secondary index can be enabled
>> > on
>> > composite columns?
>> > Regards
>> > Boris
>>
>>
>>
>> --
>> Jonathan Ellis
>> Project Chair, Apache Cassandra
>> co-founder of DataStax, the source for professional Cassandra support
>> http://www.datastax.com
>
>



-- 
Jonathan Ellis
Project Chair, Apache Cassandra
co-founder of DataStax, the source for professional Cassandra support
http://www.datastax.com


cassandra compile

2011-08-01 Thread Donna Li
All:

 I use ant to compile Cassandra. It fails on the ivy download with a 
"Connection refused" error, but I can fetch the file from IE. So I downloaded it 
and put it in the /build directory. When I compile again, it is blocked on the 
ivy dependency download. I downloaded apache-rat-0.6.jar, commons-logging-1.1.1.jar, 
junit-4.6.jar, and paranamer-ant-2.1.jar, put these files in /build/lib, and 
modified build/ivy.xml to prevent the download. There are still errors. I wonder 
why the download fails? If I download these files by hand, how do I build successfully?
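
If the ivy failure is caused by an HTTP proxy, which is only a guess given that
the browser can reach the files, the build's JVM can be pointed at the proxy;
the host and port below are placeholders:

   set ANT_OPTS=-Dhttp.proxyHost=your.proxy.host -Dhttp.proxyPort=8080
   ant

On Ant 1.7 and later, running "ant -autoproxy" to pick up the system proxy
settings is an alternative.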

 

check-avro-generate:

 

BUILD FAILED

E:\work\research\cassandra\apache-cassandra-0.6.3-src\apache-cassandra-0.6.3-src\build.xml:170:
taskdef A class needed by class com.thoughtworks.paranamer.ant.ParanamerGeneratorTask
cannot be found: com/thoughtworks/paranamer/generator/ParanamerGenerator

 

Best Regards

Donna li

 

 



From: Donna Li 
Sent: August 2, 2011 10:16
To: 'user@cassandra.apache.org'
Subject: cassandra compile

 

 

All:

I compile the Cassandra source code with ant. The following error is printed:

 

BUILD FAILED

E:\work\research\cassandra\apache-cassandra-0.6.3-src\apache-cassandra-0.6.3-src\build.xml:170:
taskdef A class needed by class com.thoughtworks.paranamer.ant.ParanamerGeneratorTask
cannot be found: com/thoughtworks/paranamer/generator/ParanamerGenerator

 

How to resolve the problem? Thanks!

 

 

Best Regards

Donna li