Huilang,
Since there hasn't been another reply yet, I'll throw out an idea that worked
for us as part of a test, though it doesn't seem like a "preferred" way since
it crosses code bases. We built the type using a straight Java type, then used
the DataStax v2 driver's DataType class ...
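A rough sketch of the mapping we leaned on, assuming the DataStax Java driver
2.x API (the class name and the types chosen below are just an illustration,
not our actual schema):

import com.datastax.driver.core.DataType;

public class TypeMappingSketch {
    public static void main(String[] args) {
        // Build driver-side descriptions of CQL types and inspect the plain
        // Java classes they map to.
        DataType text = DataType.text();                       // CQL text -> java.lang.String
        DataType longList = DataType.list(DataType.bigint());  // CQL list<bigint> -> java.util.List<Long>
        System.out.println(text.asJavaClass());                // the "straight Java type" behind a CQL type
        System.out.println(longList.asJavaClass());
    }
}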
This is a basic question, but having heard that advice before, I'm curious:
why is the minimum recommended replication factor three? Certainly it provides
additional redundancy, and, I believe, it is the minimum needed for Paxos. Are
there other reasons?
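For concreteness, the kind of setup I mean (the keyspace name below is made
up); if I have the quorum arithmetic right, RF = 3 means QUORUM needs 2 of 3
replicas, so one node can be down, while RF = 2 still needs 2 and a single
failure blocks quorum operations:

-- quorum = floor(RF / 2) + 1 = 2 when RF = 3
CREATE KEYSPACE example_ks
  WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3};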
On Jun 7, 2014 10:52 PM, Colin wrote:
To have any r...
... I should be thinking about for that sort of batch updating?
James Campbell
From: Aaron Morton
Sent: Thursday, June 5, 2014 5:26 AM
To: Cassandra User
Cc: charlie@gmail.com
Subject: Re: Consolidating records and TTL
As Tyler says, with atomic batches, which are ...
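As a rough illustration of the pattern under discussion (the table and column
names here are invented), consolidating raw records into one summary row
written with a TTL and deleting the originals could be a single logged batch:

BEGIN BATCH
  -- write the consolidated record with a TTL (86400 s = 1 day, as an example)
  INSERT INTO events_rollup (day, source, total)
    VALUES ('2014-06-05', 'sensor-1', 42) USING TTL 86400;
  -- remove the raw partition that was rolled up
  DELETE FROM events_raw WHERE day = '2014-06-05' AND source = 'sensor-1';
APPLY BATCH;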
Maciej,
In CQL3 "wide rows" are expected to be created using clustering columns. So
while the schema will have a relatively smaller number of named columns, the
effect is a wide row. For example:
CREATE TABLE keyspace.widerow (
  row_key text,
  wide_row_column text,
  data_column text,
  PRIMARY KEY (row_key, wide_row_column)
);
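To make the "wide row" effect concrete (values below are invented), every
insert that reuses the same row_key with a different wide_row_column lands in
the same partition, so one partition can hold a very large number of rows:

INSERT INTO keyspace.widerow (row_key, wide_row_column, data_column)
  VALUES ('user-1', '2014-05-16T10:00', 'first event');
INSERT INTO keyspace.widerow (row_key, wide_row_column, data_column)
  VALUES ('user-1', '2014-05-16T10:05', 'second event');

-- both rows share partition 'user-1', ordered by wide_row_column
SELECT wide_row_column, data_column
  FROM keyspace.widerow WHERE row_key = 'user-1';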
On Fri, May 16, 2014 at 10:29 AM, James Campbell
<ja...@breachintelligence.com> wrote:
Hi all-
What partition type is best/most commonly used for a multi-disk JBOD setup
running Cassandra on CentOS 64bit?
The DataStax production server guidelines recommend XFS for data partitions ...
Hi all-
What partition type is best/most commonly used for a multi-disk JBOD setup
running Cassandra on CentOS 64bit?
The DataStax production server guidelines recommend XFS for data partitions,
saying, "Because Cassandra can use almost half your disk space for a single
file, use XFS when using ..."
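For context, the layout I have in mind (with XFS, per the guidelines; device
names and mount points below are hypothetical): each data disk gets its own
filesystem and mount point, and cassandra.yaml lists them all.

# one filesystem per data disk
mkfs.xfs /dev/sdb
mkfs.xfs /dev/sdc
mount -o noatime /dev/sdb /var/lib/cassandra/data1
mount -o noatime /dev/sdc /var/lib/cassandra/data2

# cassandra.yaml
data_file_directories:
    - /var/lib/cassandra/data1
    - /var/lib/cassandra/data2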
Hi Cassandra Users-
I have a Hadoop job that uses the pattern in Cassandra 2.0.6's
hadoop_cql3_word_count example to load data from HDFS into Cassandra. Having
read about BulkOutputFormat as a way to potentially get a significant increase
in write throughput from Hadoop to Cassandra, I am considering ...
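If it helps frame the question, the wiring change I am considering looks
roughly like this (node address, keyspace, and table names are placeholders,
and I may have details wrong); as I understand it, BulkOutputFormat in 2.0.x
streams sstables to the cluster and expects ByteBuffer keys with
List<Mutation> (Thrift) values rather than the CQL-style output used in the
example:

import java.nio.ByteBuffer;
import java.util.List;
import org.apache.cassandra.hadoop.BulkOutputFormat;
import org.apache.cassandra.hadoop.ConfigHelper;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class BulkLoadJobSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "bulk-load-sketch");
        job.setOutputFormatClass(BulkOutputFormat.class);
        job.setOutputKeyClass(ByteBuffer.class);   // row key
        job.setOutputValueClass(List.class);       // List<Mutation> per key
        Configuration c = job.getConfiguration();
        ConfigHelper.setOutputInitialAddress(c, "10.0.0.1");  // placeholder seed node
        ConfigHelper.setOutputRpcPort(c, "9160");
        ConfigHelper.setOutputPartitioner(c, "org.apache.cassandra.dht.Murmur3Partitioner");
        ConfigHelper.setOutputColumnFamily(c, "my_keyspace", "my_table");
        // mapper/reducer, input format, and the actual mutation building omitted
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}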