On 06/07/2013 06:02 PM, Mark Lewandowski wrote:
I'm currently trying to get Cassandra (1.2.5) and Pig (0.11.1) to play
nice together. I'm running a basic script:
rows = LOAD 'cassandra://keyspace/colfam' USING CassandraStorage();
dump rows;
This fails for my column family, which has ~100,000 rows.

I am interested to know whether the compaction directive is the key, because I
have the same symptoms on Ubuntu Server 12.04 64-bit with C* 1.2.4 and a CF
of more than half a million records of ~6,000 chars each.
I can only get back a maximum of 6,000 records in cqlsh, so if I query
SELECT COUNT(*) FROM A_CF LIMIT 6000; I

Hi,
We are seeing an issue where data that was written to the cluster is no longer
accessible after trying to expand the size of the cluster. I will try to
provide as much information as possible; I am just starting out with Cassandra
and I'm not entirely sure what data is relevant.
All Cassandra

I am seeing similar behavior: in my case I have 2 nodes in each
datacenter, and one node always has high latency (equal to the latency
between the two datacenters). When one of the datacenters is shut down, the
latency drops.
I am curious to know whether anyone else has these issues and, if so, how

Hi,
How should the bulk loader be modified to support composite columns?
Thanks,
Davide
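
If your loader is built on the SSTableSimpleUnsortedWriter API that ships with
Cassandra, one approach is to pass a CompositeType as the column comparator and
assemble each column name with CompositeType.Builder. A minimal sketch against
the 1.2-era API; the output path, the "ks"/"colfam" names, and the (text, long)
component types are assumptions:

import java.io.File;
import java.nio.ByteBuffer;
import java.util.Arrays;

import org.apache.cassandra.db.marshal.AbstractType;
import org.apache.cassandra.db.marshal.CompositeType;
import org.apache.cassandra.db.marshal.LongType;
import org.apache.cassandra.db.marshal.UTF8Type;
import org.apache.cassandra.dht.Murmur3Partitioner;
import org.apache.cassandra.io.sstable.SSTableSimpleUnsortedWriter;
import org.apache.cassandra.utils.ByteBufferUtil;

public class CompositeBulkLoader
{
    public static void main(String[] args) throws Exception
    {
        // Column names are (text, long) composites, e.g. ("event", 42).
        CompositeType comparator = CompositeType.getInstance(
            Arrays.<AbstractType<?>>asList(UTF8Type.instance, LongType.instance));

        // The output directory (conventionally <keyspace>/<colfam>) must exist.
        SSTableSimpleUnsortedWriter writer = new SSTableSimpleUnsortedWriter(
            new File("/tmp/ks/colfam"),   // hypothetical output path
            new Murmur3Partitioner(),
            "ks", "colfam",               // hypothetical keyspace / column family
            comparator,
            null,                         // no sub-comparator (not a super CF)
            64);                          // buffer size in MB before flushing

        writer.newRow(ByteBufferUtil.bytes("rowkey1"));

        // Build the composite column name one component at a time.
        CompositeType.Builder builder = new CompositeType.Builder(comparator);
        builder.add(ByteBufferUtil.bytes("event"));
        builder.add(LongType.instance.decompose(42L));
        ByteBuffer name = builder.build();

        writer.addColumn(name, ByteBufferUtil.bytes("value"),
                         System.currentTimeMillis() * 1000); // microsecond timestamp

        writer.close();
    }
}

The resulting SSTables can then be streamed into the cluster with sstableloader
as usual; only the comparator and the name-building step change for composites.
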
On 7 Jun 2013, at 10:56, Keith Wright wrote:
> Looking into it further, I believe your issue is that you did not define the
> table with compact storage. Without that, CQL3 will treat every column as a
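
For reference, a minimal sketch of what defining such a table with compact
storage could look like, issued here through the DataStax Java driver; the
contact point, keyspace, and column names are assumptions:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class CompactStorageTable
{
    public static void main(String[] args)
    {
        // Hypothetical contact point; adjust to your cluster.
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect();

        // WITH COMPACT STORAGE keeps the plain column layout that
        // thrift-based clients such as Pig's CassandraStorage expect.
        session.execute(
            "CREATE TABLE ks.colfam (" +
            "    key text PRIMARY KEY," +
            "    value text" +
            ") WITH COMPACT STORAGE");

        cluster.shutdown();
    }
}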