* Graham Sanderson , 2015-07-13 18:21:08 Mon:
> > Is there a set of best practices for this kind of workload? We would
> > like to avoid interfering with reads as much as possible.
Ironically, in my experience the fastest ways to get data into C* are considered
“anti-patterns” by most (but I have no problem saturating multiple gigabit
network links when I really feel like inserting fast).
It’s been a while since I tried some of the newer approaches though (my fast
load code i
"user@cassandra.apache.org"
Date: Friday, May 31, 2013 9:01 AM
To: "user@cassandra.apache.org"
Subject: Re: Bulk loading into CQL3 Composite Columns
> )
>
> You can see the source for CompositeSerializer here:
> http://grepcode.com/file/repo1.maven.org/maven2/com.netflix.astyanax/astyanax/1.56.26/com/netflix/astyanax/serializers/CompositeSerializer.java
>
> Good luck!
>
> From: Daniel Morton
> Reply-To: "user@cassandra.apache.org"
Subject: Re: Bulk loading into CQL3 Composite Columns
Hi Keith... Thanks for the help.
I'm presently not importing the Hector library (which is where classes like
CompositeSerializer and StringSerializer come from, yes?), only the
cassandra-all Maven artifact. Is the behaviour of CompositeSerializer
much different from using a Builder from a Com
>
> ssTableWriter.addColumn(
> CompositeSerializer.get().toByteBuffer(columnComposite), null,
> System.currentTimeMillis() );
>
> From: Keith Wright
> Date: Thursday, May 30, 2013 3:32 PM
> To: "user@cassandra.apache.org"
> Subject: Re: Bulk loading into CQL3 Compo
To: "user@cassandra.apache.org"
Subject: Re: Bulk loading into CQL3 Composite Columns
You do not want to repeat the first item of your primary key again. If you
recall, in CQL3 a primary key as defined below indicates that the row key is
the first item (key) and then the column names are composites of val1,val2.
Although I don't see why you need val2 as part of the primary key
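The composite column-name layout Keith describes can be sketched in plain Java. This is a hypothetical, self-contained illustration of how Cassandra's CompositeType packs each component of a column name: a 2-byte big-endian length, the raw bytes, then a single end-of-component byte (0). The class and method names here are invented for the sketch; they are not Cassandra classes.

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;

// Hypothetical helper illustrating the on-disk layout CompositeType
// uses for non-compact CQL3 column names.
public class CompositeNameSketch {
    public static byte[] encode(String... components) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        for (String c : components) {
            byte[] bytes = c.getBytes(StandardCharsets.UTF_8);
            out.write((bytes.length >> 8) & 0xFF); // high byte of component length
            out.write(bytes.length & 0xFF);        // low byte of component length
            out.write(bytes, 0, bytes.length);     // the component value itself
            out.write(0);                          // end-of-component byte
        }
        return out.toByteArray();
    }

    public static void main(String[] args) {
        byte[] name = encode("val1", "val2");
        // 2 components * (2 length bytes + 4 value bytes + 1 EOC byte) = 14
        System.out.println(name.length);
    }
}
```

Encoding "val1" and "val2" together yields 7 bytes per component, 14 in total; a buffer of this shape is what CompositeSerializer-style code hands to sstableWriter.addColumn as the column name.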
CQL 3 tables that do not use compact storage use Composite Types, which
other code may not be expecting.
Take a look at the CQL 3 table definitions through cassandra-cli and you may
see the changes you need to make when creating the SSTables.
Cheers
-
Aaron Morton
Freelance Developer
Yes. See the example here http://www.datastax.com/dev/blog/bulk-loading
Cheers
-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
On 4/05/2012, at 2:49 AM, Oleg Proudnikov wrote:
> Hello, group
>
> Will the bulk loader preserve original column timestam
On Thu, Apr 5, 2012 at 10:58 AM, Benoit Perroud wrote:
> ERROR [Thread-23] 2012-04-05 09:58:12,252 AbstractCassandraDaemon.java
> (line 139) Fatal exception in thread Thread[Thread-23,5,main]
> java.lang.RuntimeException: Insufficient disk space to flush
> 7813594056494754913 bytes
> at
>
At that scale of data, and given that it's a batch job, I would go with the
bulk loading tool.
Cheers
-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
On 19/10/2011, at 3:32 AM, Mike Rapuano wrote:
> We are not currently live but testing with Cas
That is my understanding.
Cheers
-
Aaron Morton
Freelance Cassandra Developer
@aaronmorton
http://www.thelastpickle.com
On 18/08/2011, at 12:36 AM, Philippe wrote:
>
> What if the column is a counter ? Does it overwrite or increment ? Ie if
> the SST I am loading has the exact same setup but value 2, will my value
> change to 3 ?
>
> Counter columns only know how to increment (assuming no deletes), so you
> will get 3. See
> https://github.com/apache/cassandr
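The increment behaviour described above can be shown with a toy merge function. This is only a sketch of the reconciliation rule, not Cassandra's actual counter implementation (real counters track per-replica shards): counter cells merge by addition rather than by last-write-wins, so streaming an sstable holding 2 on top of a live 1 yields 3.

```java
public class CounterMergeSketch {
    // Counters reconcile by summing contributions (absent deletes),
    // not by overwriting, so loading "2" over a live "1" gives 3.
    static long merge(long existing, long incoming) {
        return existing + incoming;
    }

    public static void main(String[] args) {
        System.out.println(merge(1L, 2L)); // prints 3
    }
}
```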
> If I SSTLoad data into that KS & CF that has the same key, it will rely on
> timestamps stored in the SSTable to overwrite value "1" or not, right ?
yes.
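The "yes" above is last-write-wins reconciliation: when a streamed cell collides with a live one, the cell with the higher timestamp survives. A minimal sketch, assuming a hypothetical Cell type rather than Cassandra's real classes:

```java
public class ReconcileSketch {
    // Hypothetical stand-in for a column cell: a value plus its write timestamp.
    static final class Cell {
        final String value;
        final long timestamp;
        Cell(String value, long timestamp) {
            this.value = value;
            this.timestamp = timestamp;
        }
    }

    // Last-write-wins: keep whichever cell carries the newer timestamp.
    static Cell reconcile(Cell existing, Cell incoming) {
        return incoming.timestamp > existing.timestamp ? incoming : existing;
    }

    public static void main(String[] args) {
        Cell live = new Cell("1", 100L);
        Cell streamed = new Cell("2", 50L); // older timestamp loses
        System.out.println(reconcile(live, streamed).value); // prints 1
    }
}
```

This is why the loaded sstable's value only replaces the live "1" if it was written with a later timestamp.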
Hello Torsten,
I have been working with Cassandra for the last 4 weeks and am trying to
load a large amount of data. The data is in a CSV file. I am trying to use the
bulk loading technique but am not clear on the process. Could you please explain
the process for the bulk load?
Thanks,
Priyanka
-
Hello All,
I am trying to load huge amounts of data into Cassandra. I want to use bulk
loading with Hadoop.
I looked into the bulkloader utility in Java, but I am not sure how to provide
input to Hadoop and then load into Cassandra. Could someone please explain the
process?
Thank you.
Regards,
Pr
I looked at the thrift service implementation and got it working.
(Much faster import!)
Thanks!
On Mon, Jun 21, 2010 at 13:09, Oleg Anastasjev wrote:
> Torsten Curdt vafer.org> writes:
>
>>
>> First I tried with my one "cassandra -f" instance then I saw this
>> requires a separate IP. (Why?)
>
Torsten Curdt vafer.org> writes:
>
> First I tried with my one "cassandra -f" instance then I saw this
> requires a separate IP. (Why?)
This is because your import program becomes a special member of the Cassandra
cluster so that it can speak the internal protocol. And each member of a
Cassandra cluster
> You should be using the thrift API, or a wrapper around the thrift API. It
> looks like you're using internal cassandra classes.
The goal is to avoid the overhead of the Thrift API for a
bulk import.
> There is a Java wrapper called Hector, and there was another talked about on
> t
You should be using the thrift API, or a wrapper around the thrift API. It
looks like you're using internal cassandra classes.
There is a Java wrapper called Hector, and there was another talked about on
the mail list recently.
There is also a bulk import / export tool see
http://wiki.apache.o