IMO deleting is always better. It is better not to store the column at all if
there is no value associated with it.
-Naren
On Fri, Feb 10, 2012 at 12:15 PM, Drew Kutcharian wrote:
> Hi Everyone,
>
> Let's say I have the following object which I would like to save in
> Cassandra:
>
> class User {
> UUID id; /
It is good to have short column names. They save space all the way from
network transfer to in-memory usage to storage. It is also a good idea to
club immutable columns that are read together and store them as a single
column. We gained significant overall performance benefits with this.
-Naren
On Fri, F
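Naren's point about name overhead can be put into rough numbers. The sketch below is a back-of-envelope Python estimate, not Cassandra's exact on-disk format: the 15-byte fixed per-column overhead and the row count are assumptions for illustration; the grounded fact is simply that each column stores its own name, so a long name is paid once per column per row.

```python
# Back-of-envelope sketch (assumed overheads, NOT exact Cassandra on-disk
# numbers): every column stores its own name alongside its value.

def column_bytes(name: str, value_len: int, fixed_overhead: int = 15) -> int:
    """Approximate serialized size of one column: name + value + an assumed
    fixed per-column overhead (timestamp, flags, length fields)."""
    return len(name.encode("utf-8")) + value_len + fixed_overhead

rows = 100_000_000              # illustrative row count
long_name = "description"       # 11 bytes, repeated in every row
short_name = "d"                # 1 byte

saving = rows * (column_bytes(long_name, 50) - column_bytes(short_name, 50))
print(f"~{saving / 1024**3:.1f} GiB saved across {rows:,} rows")
```

The per-column constant is a guess, but it cancels out in the subtraction: only the 10-byte name difference matters, which is exactly the effect Naren describes.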
Either one works fine. Setting to "" may cause you fewer headaches, as
you won't have to deal with tombstones. Deleting a non-existent column
is fine.
-Jeremiah
On 02/10/2012 02:15 PM, Drew Kutcharian wrote:
Hi Everyone,
Let's say I have the following object which I would like to save in Cas
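The tombstone trade-off Jeremiah describes can be shown with a toy model. This is plain Python standing in for Cassandra's behavior, not driver code; the `TOMBSTONE` sentinel and the sample rows are invented for illustration:

```python
# Toy model of the trade-off: deleting writes a tombstone that lingers until
# it is compacted away, while writing "" keeps an ordinary live (empty) column.

TOMBSTONE = object()  # stand-in for a deletion marker

row_deleted = {"name": "drew", "description": TOMBSTONE}  # option 1: delete
row_emptied = {"name": "drew", "description": ""}         # option 2: set ""

def live_columns(row):
    """Columns a reader sees; tombstones are filtered but still stored."""
    return {k: v for k, v in row.items() if v is not TOMBSTONE}

print(live_columns(row_deleted))  # {'name': 'drew'}
print(live_columns(row_emptied))  # {'name': 'drew', 'description': ''}
```

In the real system the filtered-out marker must be retained and replayed until `gc_grace_seconds` passes, which is the extra bookkeeping the "" option avoids.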
I found the problem; it was my fault. I made an accidental change to my
cassandra.yaml file sometime between restarts and ended up pointing the
node's data directory at a different disk. Check your paths!
Todd
On 2/10/2012 10:22 AM, Todd Fast wrote:
My single-node cluster was working fine yeste
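For anyone hit by the same mistake, these are the cassandra.yaml directory settings worth re-checking after an unexpected restart. A minimal sketch; the paths shown are the common defaults, not the ones from this thread:

```yaml
# cassandra.yaml -- directory settings to verify after a restart
data_file_directories:
    - /var/lib/cassandra/data
commitlog_directory: /var/lib/cassandra/commitlog
saved_caches_directory: /var/lib/cassandra/saved_caches
```

If `data_file_directories` silently points at a different disk, the node starts cleanly but knows nothing about your keyspaces, which matches the symptom above.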
What are the implications of using short vs long column names? Is it better to
use short column names or longer ones?
I know for MongoDB you are better off using short field names
http://www.mongodb.org/display/DOCS/Optimizing+Storage+of+Small+Objects Does
this apply to Cassandra column names?
Hi Everyone,
Let's say I have the following object which I would like to save in Cassandra:
class User {
    UUID id;            // row key
    String name;        // columnKey: "name", columnValue: the name of the user
    String description; // columnKey: "description", columnValue: the description of the user
}
Descri
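One way to picture the model Drew is asking about: each User flattens into (row key, column name, column value) triples, so every stored cell repeats its column name. A toy Python illustration of that layout (not driver code; the UUID and values are invented):

```python
import uuid

# Toy flattening of the User object into Cassandra's
# (row key, column name, column value) triples.
user_id = uuid.uuid4()  # row key
columns = {
    "name": "Drew",                     # columnKey: "name"
    "description": "a Cassandra user",  # columnKey: "description"
}
triples = [(user_id, col, val) for col, val in sorted(columns.items())]
for _, col, val in triples:
    print(col, "->", val)
```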
Could anyone tell me how we can copy data from a Cassandra-Brisk cluster to a
Hadoop-HDFS cluster?
1) Is there a way to do hadoop distcp between clusters?
2) If a hive table is created on the Brisk cluster, will it be similar to the
HDFS file format? Can we run map reduce on the other cluster to transform hive
My single-node cluster was working fine yesterday. I ctrl+c'd it last
night, as I typically do, and restarted it this morning.
Now, inexplicably, it doesn't know anything about my keyspace. The SSTable
files are in the same directory as always and seem to be the
expected size. I can't seem to
Thanks. Setting the broadcast address to the external IP address and setting
the listen_address to 0.0.0.0 seems to have fixed it. Does that mean that all
other nodes, even those on the same local network, will communicate with that
node using its external IP address? It would be much better
All,
could somebody clue me in on why the code below doesn't work? My schema is:
create column family StockHistory
with comparator = 'CompositeType(LongType,UTF8Type)'
and default_validation_class = 'UTF8Type'
and key_validation_class = 'UTF8Type';
the time part works but i'm gettin
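Since the failing code is cut off, here is a sketch of how a `CompositeType(LongType,UTF8Type)` column name gets serialized, which is often where comparator mismatches show up. This reflects my understanding of the composite encoding (a two-byte big-endian component length, the component bytes, then an end-of-component byte); verify it against your client library before relying on it. The timestamp and text component are made up:

```python
import struct

def pack_composite(timestamp: int, text: str) -> bytes:
    """Pack a CompositeType(LongType, UTF8Type) column name as
    <2-byte length><component bytes><end-of-component byte> per component
    (assumed encoding -- check your client library)."""
    long_part = struct.pack(">q", timestamp)  # LongType: 8-byte big-endian long
    utf8_part = text.encode("utf-8")          # UTF8Type component
    out = b""
    for part in (long_part, utf8_part):
        out += struct.pack(">H", len(part)) + part + b"\x00"
    return out

name = pack_composite(1328900000000, "close")
print(len(name))  # (2 + 8 + 1) + (2 + 5 + 1) = 19 bytes
```

A common pitfall with this comparator is sending the long as a string or omitting the end-of-component byte, either of which the node rejects as a malformed column name.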
> Also, if there's hot spot is there any way out of it, other than restarting
> from scratch…
A cluster with a changed partitioner is like a mule with a spinning wheel. No
one knows how it changed and danged if it knows how to return your data.
(You cannot change it.)
By uniform I meant evenly
> How to do it ? Do I need to build a custom plugin/sink or can I configure an
> existing sink to write data in a custom way ?
This is a good starting point: https://github.com/thobbs/flume-cassandra-plugin
> 2 - My business process also use my Cassandra DB (without flume, directly via
> thrift),