is there a fast import data tool from oracle to cassandra besides java-coding?

2019-06-19 Thread Nimbus Lin
To Cassandra's pioneers: is there a fast data import tool from Oracle to Cassandra besides Java coding? I used the COPY FROM command to import Oracle's date into Cassandra's timestamp, but Cassandra's timestamp validation is too strict to permit a
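The strict-validation problem above usually comes down to date formatting: cqlsh's COPY accepts ISO-8601-style timestamps, so one fix is to reformat the Oracle dates in the CSV before loading. A minimal sketch in Python, assuming the export uses Oracle's default DD-MON-YY date format (adjust the format string to match your NLS_DATE_FORMAT):

```python
from datetime import datetime, timezone

def oracle_date_to_cassandra(value: str) -> str:
    """Convert an Oracle-style date string (assumed default DD-MON-YY
    format) into an ISO-8601 timestamp string for cqlsh COPY."""
    dt = datetime.strptime(value, "%d-%b-%y").replace(tzinfo=timezone.utc)
    # Emit 'yyyy-MM-dd HH:mm:ss+0000', a form Cassandra's timestamp
    # parser accepts
    return dt.strftime("%Y-%m-%d %H:%M:%S%z")

print(oracle_date_to_cassandra("19-JUN-19"))
```

Rewriting the offending column this way before COPY sidesteps the validator rather than fighting it.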

Bulk Import Question

2017-01-23 Thread Joe Olson
I am bulk importing a large number of sstables that I pre-generated using the bulk load process outlined at https://github.com/yukim/cassandra-bulkload-example I am using the 'sstableloader' utility to import them into a nine node Cassandra cluster. During the sstableloader ex

Bulk Import Question

2016-12-28 Thread Joe Olson
I'm following the example here for doing a bulk import into Cassandra: https://github.com/yukim/cassandra-bulkload-example Is there a way to get a number of rows written to a sstable set created via CQLSSTableWriter, without importing the sstable set into Cassandra? I'd like to do

Bulk Import Question

2016-11-23 Thread Joe Olson
I'm following the Cassandra bulk import example here: https://github.com/yukim/cassandra-bulkload-example Are the Cassandra data types inet, smallint, and tinyint supported by the bulk import CQLSSTableWriter ? I can't seem to get them to work...

Re: Import failure for use python cassandra-driver

2016-10-26 Thread Tyler Hobbs
_driver-3.7.0-cp27-none-linux_x86_64.whl/cassandra/cqlengine/connection.py", line 20, in from cassandra.cluster import Cluster, _NOT_SET, NoHostAvailable, UserTypeDoesNotExist ImportError: /home/jasonl/.pex/i

Re: Import failure for use python cassandra-driver

2016-10-26 Thread Stefano Ortolani
b312c7494296cdd2b/cassandra_driver-3.7.0-cp27-none-linux_x86_64.whl/cassandra/cqlengine/connection.py", line 20, in from cassandra.cluster import Cluster, _NOT_SET, NoHostAvailable, UserTypeDoesNotExist ImportError: /home/jasonl/.pex/install/ca

Re: Import failure for use python cassandra-driver

2016-10-26 Thread Zao Liu
The same happens on my Ubuntu boxes. File "/home/jasonl/.pex/install/cassandra_driver-3.7.0-cp27-none-linux_x86_64.whl.ebfb31ab99650d53ad134e0b312c7494296cdd2b/cassandra_driver-3.7.0-cp27-none-linux_x86_64.whl/cassandra/cqlengine/connection.py", line 20, in from cassandra.clus

Import failure for use python cassandra-driver

2016-10-26 Thread Zao Liu
could cause this. File "/Library/Python/2.7/site-packages/cassandra/cqlengine/connection.py", line 20, in from cassandra.cluster import Cluster, _NOT_SET, NoHostAvailable, UserTypeDoesNotExist ImportError: dlopen(/Library/Python/2.7/site-packages/cassandra/cluster.so, 2): S

Re: Sqoop Free Form Import Query Breaks off

2014-12-25 Thread Abraham Elmahrek
Glad it's working. I'm a bit concerned that it didn't work. So just for future reference, removing quotes might be necessary as well. -Abe On Thu, Dec 25, 2014 at 12:39 PM, Vineet Mishra wrote: > Hi Abe, > > Thanks for your quick suggestion, even I already tried this as well and > unfortunatel

Re: Sqoop Free Form Import Query Breaks off

2014-12-25 Thread Vineet Mishra
Hi Abe, Thanks for your quick suggestion. I had already tried this as well, and unfortunately it didn't work for my case either. At last I found a workaround for the problem: I came to understand that since it was working fine when given the same command through the command terminal and was breaking when

Re: Sqoop Free Form Import Query Breaks off

2014-12-25 Thread Abraham Elmahrek
Seems like exec parses the command with StringTokenizer [1]. This may have some difficulty parsing with quotes. Try using the array form of this command to explicitly define your String tokens [2]. Ref: 1. http://docs.oracle.com/javase/7/docs/api/java/lang/Runtime.html#exec(java.lang.String,%20jav
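The quoting pitfall Abraham describes is easy to reproduce outside Java: Runtime.exec(String) tokenizes on whitespace much like a naive split, which shreds a quoted free-form query; the array form of exec passes pre-split tokens instead. The same contrast in Python, with an illustrative sqoop command line:

```python
import shlex

cmd = 'sqoop import --query "SELECT * FROM t WHERE id > 5 AND $CONDITIONS"'

# Naive whitespace tokenizing (what StringTokenizer effectively does)
# shreds the quoted free-form query into many tokens:
naive = cmd.split()

# Quote-aware splitting keeps the quoted query as a single token,
# which is what the array form of exec lets you pass explicitly:
tokens = shlex.split(cmd)

print(naive)
print(tokens)
```

The fix on the Java side is the same idea: build the String[] yourself so the quoted query arrives as one argument.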

Re: Sqoop Free Form Import Query Breaks off

2014-12-25 Thread Vineet Mishra
ailing lists at the same time. > > Thank you and merry Christmas, > Jens > > > > On Thu, Dec 25, 2014 at 2:24 PM, Vineet Mishra > wrote: > >> Hi All, >> >> I am facing a issue while Sqoop(Sqoop version: 1.4.3-cdh4.7.0) Import, I >> am having a Java t

Re: Sqoop Free Form Import Query Breaks off

2014-12-25 Thread Jens Rantil
. Thank you and merry Christmas, Jens On Thu, Dec 25, 2014 at 2:24 PM, Vineet Mishra wrote: > Hi All, > I am facing a issue while Sqoop(Sqoop version: 1.4.3-cdh4.7.0) Import, I am > having a Java threaded code to import data from multiple databases running > at different servers. >

Sqoop Free Form Import Query Breaks off

2014-12-25 Thread Vineet Mishra
Hi All, I am facing an issue with Sqoop (version 1.4.3-cdh4.7.0) import. I have Java threaded code to import data from multiple databases running on different servers. Currently I execute the sqoop job via a Java Process, something like Runtime.getRuntime().exec("

Re: [Import csv to Cassandra] Taking too much time

2014-12-04 Thread Yuki Morishita
TableLoader to import csv ? > What is the best practice to use SStableLoader importing csv in Cassandra ? > > Best Regards! > Chao Yan > -- > My twitter:Andy Yan @yanchao727 > My Weibo:http://weibo.com/herewearenow > -- > > 2014-12-04 18:58 GMT+0

Re: [Import csv to Cassandra] Taking too much time

2014-12-04 Thread 严超
Thank you very much for your advice. Can you give me more advice for using SSTableLoader to import csv? What is the best practice for importing csv into Cassandra with SSTableLoader? Best Regards! Chao Yan -- My twitter: Andy Yan @yanchao727 <https://twitter.com/yanchao727>

Re: [Import csv to Cassandra] Taking too much time

2014-12-04 Thread Akshay Ballarpure
Hello Chao Yan, CSV data import using the COPY command in Cassandra is always painful for large files (say > 1 GB). The CQL tool is not built for such heavy operations; instead, try using SSTableLoader to import. Best Regards Akshay From: 严超 To: user@cassandra.apache.

[Import csv to Cassandra] Taking too much time

2014-12-04 Thread 严超
Hi, Everyone: I'm importing a CSV file into Cassandra, and it always gets the error "Request did not complete within rpc_timeout", after which I have to resume my cql COPY command again. The CSV file is 2.2 GB, and the import is taking a long time. How can I speed up the CSV file import? Is there

Re: CSV Import is taking huge time

2014-07-24 Thread Akshay Ballarpure
From: Tyler Hobbs To: user@cassandra.apache.org Date: 07/24/2014 02:07 AM Subject: Re: CSV Import is taking huge time See https://issues.apache

Re: CSV Import is taking huge time

2014-07-23 Thread Akshay Ballarpure
From: "Jack Krupansky" To: Date: 07/23/2014 06:39 PM Subject: Re: CSV Import is taking huge time Is it compute bound or I/O bound? What does your cluster look like? -- Jack Krupansky From: Akshay Ballarpure Sent: Wednesday, July 23, 20

Re: CSV Import is taking huge time

2014-07-23 Thread Tyler Hobbs
e? -- Jack Krupansky From: Akshay Ballarpure Sent: Wednesday, July 23, 2014 5:00 AM To: user@cassandra.apache.org Subject: CSV Import is taking huge time Hello, I am trying copy command in Cassandra to import CSV file in to DB, Import is

Re: CSV Import is taking huge time

2014-07-23 Thread Jack Krupansky
Is it compute bound or I/O bound? What does your cluster look like? -- Jack Krupansky From: Akshay Ballarpure Sent: Wednesday, July 23, 2014 5:00 AM To: user@cassandra.apache.org Subject: CSV Import is taking huge time Hello, I am trying copy command in Cassandra to import CSV file in to DB

CSV Import is taking huge time

2014-07-23 Thread Akshay Ballarpure
Hello, I am trying the COPY command in Cassandra to import a CSV file into the DB. The import is taking a huge amount of time; any suggestions to improve it? id,a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,q,r,s,t,u,v,w,x,y,z 100,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26

Re: Confirm with cqlsh of Cassandra-1.2.5, the behavior of the export/import

2013-06-21 Thread aaron morton
Dear everyone. I'm Hiroshi Kise. I will confirm with cqlsh of Cassandra-1.2.5, the behavior of the export / import of data. Using the Copy of cqlsh, the data included the "{" and "[" (= CollectionType) case, I think in the export / import proces

Confirm with cqlsh of Cassandra-1.2.5, the behavior of the export/import

2013-06-20 Thread hiroshi.kise.rk
Dear everyone. I'm Hiroshi Kise. I am verifying the export/import behavior of cqlsh in Cassandra-1.2.5. Using cqlsh's COPY, when the data includes "{" and "[" (i.e. CollectionType), I think data integrity is compromised in the export/import process. How

Re: Cassandra bulk import confusion

2013-04-19 Thread Viksit Gaur
On 30 Jul 2011, at 04:10, Jeff Schmidt wrote: Hello: I'm relatively new to Cassandra, but I've been searching around, and it looks like Cassandra 0.8.x has improved support for bulk importing of data. I keep finding references to the json2sstable command, and I've read about that o

Re: import

2012-04-01 Thread Maxim Potekhin
Since Python has a native csv module, it's trivial to achieve. I load lots of csv data into my database daily. Maxim On 3/27/2012 11:44 AM, R. Verlangen wrote: You can write your own script to parse the excel file (export as csv) and import it with batch inserts. Should be pretty easy i

Re: import

2012-03-27 Thread R. Verlangen
You can write your own script to parse the excel file (export as csv) and import it with batch inserts. Should be pretty easy if you have experience with those techniques. 2012/3/27 puneet loya > I want to import files from excel to cassandra? Is it possible?? > > Any tool that

import

2012-03-27 Thread puneet loya
I want to import files from Excel to Cassandra. Is it possible? Any tool that can help? What's the best way? Please reply :)

Re: import data into cassandra

2011-09-18 Thread Benoit Perroud

Re: import data into cassandra

2011-09-16 Thread nehalmehta
Hi, Is there a tool which imports data from large CSV files into Cassandra using the Thrift API (if in Java, that would be great)? Thanks, Nehal Mehta -- View this message in context: http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/import-data-into-cassandra-tp4627325p6801723

Import JSON sstable data

2011-09-02 Thread Zhong Li
Hi, I'm trying to upload sstable data to a Cassandra 0.8.4 cluster with the json2sstable tool. Each time, I have to restart the node after importing the new file and run repair for the column family, otherwise the new data will not show. Any thoughts? Thanks, Zhong Li

Re: Cassandra bulk import confusion

2011-08-01 Thread aaron morton
'm getting what's in that CF: [imac:isec/cassandra/apache-cassandra-0.8.2] jas% bin/sstable2json /usr/local/ingenuity/isec/cassandra/datastore/data/Test/TestCF-g-1-Data.db > testcf.jason [imac:isec/cassandra/apache-cassandra-0.8.2] jas% cat testcf.jason {

Cassandra bulk import confusion

2011-07-29 Thread Jeff Schmidt
73631000]], "5349447c313233": [["nodeId","ING:001",1311954072249000]] } [imac:isec/cassandra/apache-cassandra-0.8.2] jas% Oops, okay, that file extension should be json not jason, but oh well... :) Okay, so I now I have data in the proper format for importing with

Re: Best way to import data from Cassandra 0.6 to 0.8

2011-06-09 Thread aaron morton
that helps. - Aaron Morton Freelance Cassandra Developer @aaronmorton http://www.thelastpickle.com On 10 Jun 2011, at 03:13, JKnight JKnight wrote: > Dear all, > > Could you tell me the best way to import data from Cassandra 0.6 to 0.8? > Thank you very much. >

Best way to import data from Cassandra 0.6 to 0.8

2011-06-09 Thread JKnight JKnight
Dear all, Could you tell me the best way to import data from Cassandra 0.6 to 0.8? Thank you very much. -- Best regards, JKnight

Re: Import/Export of Schema Migrations

2011-05-16 Thread David Boxenhorn
What you describe below sounds like what I want to do. I think that the only additional thing I am requesting is to export the migrations from the dev cluster (since Cassandra already has a table that saves them - I just want that information!) so I can import it to the other clusters. This would

Re: Import/Export of Schema Migrations

2011-05-15 Thread aaron morton
g cluster, and >> eventually the production cluster. I don't want to do it by hand, because >> it's a painful and error-prone process. What I would like to do is export >> the last N migrations from the development cluster as a text file, with >> exactly the s

Re: Import/Export of Schema Migrations

2011-05-13 Thread David Boxenhorn
ng cluster, and > eventually the production cluster. I don't want to do it by hand, because > it's a painful and error-prone process. What I would like to do is export > the last N migrations from the development cluster as a text file, with > exactly the same format as the orig

Re: Import/Export of Schema Migrations

2011-05-13 Thread aaron morton
n cluster. I don't want to do it by hand, because > it's a painful and error-prone process. What I would like to do is export the > last N migrations from the development cluster as a text file, with exactly > the same format as the original text commands, and import them t

Import/Export of Schema Migrations

2011-05-12 Thread David Boxenhorn
hand, because it's a painful and error-prone process. What I would like to do is export the last N migrations from the development cluster as a text file, with exactly the same format as the original text commands, and import them to the staging and production clusters. I think the best place to do

Re: cassandra.yaml after a schema import

2010-08-23 Thread Peter Harrison
On Tue, Aug 24, 2010 at 12:28 AM, aaron morton wrote: > I think this may have been discussed before, but I cannot find any reference > to it. Just wanted to confirm how cassandra.yaml is used after the cluster is > initialised. > > Start a clean install of 0.7b1, use jconsol

Re: cassandra.yaml after a schema import

2010-08-23 Thread Aaron Morton
: > I think this may have been discussed before, but I cannot find any reference to it. Just wanted to confirm how cassandra.yaml is used after the cluster is initialised. > > Start a clean install of 0.7b1, use jconsole to import the schema from yaml. Drain the node and shut it down, t

Re: cassandra.yaml after a schema import

2010-08-23 Thread Sylvain Lebresne
). On Mon, Aug 23, 2010 at 2:28 PM, aaron morton wrote: > I think this may have been discussed before, but I cannot find any reference > to it. Just wanted to confirm how cassandra.yaml is used after the cluster is > initialised. > > Start a clean install of 0.7b1, use jconsole to im

cassandra.yaml after a schema import

2010-08-23 Thread aaron morton
I think this may have been discussed before, but I cannot find any reference to it. Just wanted to confirm how cassandra.yaml is used after the cluster is initialised. Start a clean install of 0.7b1, use jconsole to import the schema from yaml. Drain the node and shut it down, then remove the

Re: How to import data from MYSQL to Cassandra

2010-07-01 Thread Paul Prescod
addressable over the same network, you could just write a program that reads data from one, reorganizes it and writes it to the other. On Thu, Jul 1, 2010 at 1:33 PM, Rana Aich wrote: > Hi, > > Can someone please please throw some light how can I import the Data from > mysql into Cassa
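The "read, reorganize, write" program Paul describes is mostly a per-row reshaping step between the two data models. A hypothetical sketch of that step in Python, with illustrative column names (user_id, created) standing in for your actual schema:

```python
def mysql_row_to_cassandra(row: tuple, columns: list) -> dict:
    """Reshape one relational row (as fetched by a MySQL client) into
    the column-name -> value mapping a Cassandra insert would take.
    The column names below are illustrative, not a real schema."""
    record = dict(zip(columns, row))
    # Reorganize for the Cassandra data model, e.g. build a composite
    # row key out of two relational columns:
    record["row_key"] = f"{record['user_id']}:{record['created']}"
    return record
```

The surrounding loop is then just: fetch rows from MySQL, map each one through a function like this, and hand the result to the Cassandra client of your choice.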

Re: How to import data from MYSQL to Cassandra

2010-07-01 Thread Paul Brown
On Jul 1, 2010, at 1:33 PM, Rana Aich wrote: > Can someone please please throw some light how can I import the Data from > mysql into Cassandra cluster. > - Is there any tool available? > OR > - Do I have to write my own Client using Thrift that will read the export > file

How to import data from MYSQL to Cassandra

2010-07-01 Thread Rana Aich
Hi, Can someone please throw some light on how I can import data from MySQL into a Cassandra cluster. - Is there any tool available? OR - Do I have to write my own client using Thrift that will read the export file (*.sql) and insert the records into the database. Thanks raich

Re: Some questions about using Binary Memtable to import data.

2010-05-20 Thread Jonathan Ellis
On Wed, May 19, 2010 at 1:37 AM, Peng Guo wrote: > Thanks for you information. > > I look at some source code of the implement. There still some question: > > 1 How did I know that the binary write message send to endpoint success? It doesn't. It's fire-and-forget. If you look at the example it

Re: Some questions about using Binary Memtable to import data.

2010-05-19 Thread Peng Guo
wrote: > 1. yes > 2. yes > 3. compaction will slow down the load > 4. it will flush the memtable > > On Tue, May 18, 2010 at 12:24 AM, Peng Guo wrote: > > Hi All: > > > > I am trying to use Binary Memtable to import a large number of data. > > >

Re: Some questions about using Binary Memtable to import data.

2010-05-18 Thread Jonathan Ellis
1. yes 2. yes 3. compaction will slow down the load 4. it will flush the memtable On Tue, May 18, 2010 at 12:24 AM, Peng Guo wrote: > Hi All: > > I am trying to use Binary Memtable to import a large number of data. > > But after I look at the wiki > intro:http://wiki.ap

Some questions about using Binary Memtable to import data.

2010-05-18 Thread Peng Guo
Hi All: I am trying to use Binary Memtable to import a large amount of data. But after looking at the wiki intro: http://wiki.apache.org/cassandra/BinaryMemtable I have some questions about using BinaryMemtable: 1. Will the data be replicated automatically? 2. Can we modify the data that

Re: Distributed export and import into cassandra

2010-05-03 Thread Jonathan Ellis
sstable2json does this. (you'd want to perform nodetool compact first, so there is only one sstable for the CF you want.) On Mon, May 3, 2010 at 6:17 AM, Utku Can Topçu wrote: > Hey All, > > I have a simple sample use case, > The aim is to export the columns in a column family into flat files wi

Distributed export and import into cassandra

2010-05-03 Thread Utku Can Topçu
Hey All, I have a simple sample use case. The aim is to export the columns in a column family into flat files, with keys in the range from k1 to k2. Since all the nodes in the cluster are supposed to contain some portion of the data, is it possible to make each node dump its own local data v

Re: Import using cassandra 0.6.1

2010-04-21 Thread Sonny Heer
Gotcha. No i don't see anything particularly interesting in the log. Do i need to turn on higher logging in log4j? here it is after i killed the client: INFO [main] 2010-04-21 14:25:52,166 DatabaseDescriptor.java (line 229) Auto DiskAccessMode determined to be standard INFO [main] 2010-04-2

Re: Import using cassandra 0.6.1

2010-04-21 Thread Sonny Heer
What I mean by "as data is processed" is that the column size will grow in Cassandra, but my client isn't ever writing a large column size under a given row... Any idea what's going on here? On Wed, Apr 21, 2010 at 3:05 PM, Sonny Heer wrote: > What does OOM stand for? > > for a given insert the size

Re: Import using cassandra 0.6.1

2010-04-21 Thread Jonathan Ellis
On Wed, Apr 21, 2010 at 5:05 PM, Sonny Heer wrote: > What does OOM stand for? out of memory > for a given insert the size is small (meaning the a single insert > operation only has about a sentence of data)  although as the insert > process continues, the columns under a given row key could pote

Re: Import using cassandra 0.6.1

2010-04-21 Thread Sonny Heer
What does OOM stand for? for a given insert the size is small (meaning the a single insert operation only has about a sentence of data) although as the insert process continues, the columns under a given row key could potentially grow to be large. Is that what you mean? An operation entails: Re

Re: Import using cassandra 0.6.1

2010-04-21 Thread Jonathan Ellis
then that's not the problem. are you writing large rows that OOM during compaction? On Wed, Apr 21, 2010 at 4:34 PM, Sonny Heer wrote: > They are showing up as completed?  Is this correct: > > > Pool Name                    Active   Pending      Completed > STREAM-STAGE                      0  

Re: Import using cassandra 0.6.1

2010-04-21 Thread Sonny Heer
They are showing up as completed? Is this correct:

Pool Name                    Active   Pending      Completed
STREAM-STAGE                      0         0              0
RESPONSE-STAGE                    0         0              0
ROW-READ-STAGE                    0         0         517446
L

Re: Import using cassandra 0.6.1

2010-04-21 Thread Jonathan Ellis
you need to figure out where the memory is going. check tpstats, if the pending ops are large somewhere that means you're just generating insert ops faster than it can handle. On Wed, Apr 21, 2010 at 4:07 PM, Sonny Heer wrote: > note: I'm using the Thrift API to insert.  The commitLog directory
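The tpstats check Jonathan suggests is easy to automate: parse the nodetool output and flag any stage whose Pending count is large. A rough sketch, assuming the usual "Pool Name / Active / Pending / Completed" column layout:

```python
def pending_ops(tpstats_output: str, threshold: int = 100) -> dict:
    """Parse `nodetool tpstats`-style output and return the stages whose
    Pending count exceeds the threshold -- a sign you are generating
    insert ops faster than the node can handle them."""
    hot = {}
    for line in tpstats_output.splitlines()[1:]:  # skip the header row
        parts = line.split()
        # expect: stage-name, Active, Pending, Completed
        if len(parts) >= 4 and parts[-1].isdigit():
            stage, pending = parts[0], int(parts[-2])
            if pending > threshold:
                hot[stage] = pending
    return hot
```

Running this periodically during a bulk import gives an early warning to throttle the client before the node starts timing out.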

Re: Import using cassandra 0.6.1

2010-04-21 Thread Sonny Heer
note: I'm using the Thrift API to insert. The commitLog directory continues to grow. The heap size continues to grow as well. I decreased the MemtableSizeInMB setting, but noticed no changes. Any idea what is causing this, and/or what property I need to tweak to alleviate it? What is the "insert th

Re: Import using cassandra 0.6.1

2010-04-21 Thread Jonathan Ellis
http://wiki.apache.org/cassandra/FAQ#slows_down_after_lotso_inserts On Wed, Apr 21, 2010 at 12:02 PM, Sonny Heer wrote: > Currently running on a single node with intensive write operations. > > > After running for a while... > > Client starts outputting: > > TimedOutException() >        at > org

Import using cassandra 0.6.1

2010-04-21 Thread Sonny Heer
Currently running on a single node with intensive write operations. After running for a while... Client starts outputting: TimedOutException() at org.apache.cassandra.thrift.Cassandra$insert_result.read(Cassandra.java:12232) at org.apache.cassandra.thrift.Cassandra$Client.recv

Re: Heap sudden jump during import

2010-04-08 Thread Tatu Saloranta
On Wed, Apr 7, 2010 at 1:51 PM, Eric Evans wrote: On Tue, 2010-04-06 at 10:55 -0700, Tatu Saloranta wrote: On Tue, Apr 6, 2010 at 12:15 AM, JKnight JKnight wrote: When import, all data in json file will load in memory. So that, you can not

Re: Heap sudden jump during import

2010-04-07 Thread Eric Evans
On Tue, 2010-04-06 at 10:55 -0700, Tatu Saloranta wrote: > On Tue, Apr 6, 2010 at 12:15 AM, JKnight JKnight > wrote: > > When import, all data in json file will load in memory. So that, you > can not > > import large data. > > You need to export large sstable file to

Re: Heap sudden jump during import

2010-04-06 Thread Tatu Saloranta
On Tue, Apr 6, 2010 at 12:15 AM, JKnight JKnight wrote: > When import, all data in json file will load in memory. So that, you can not > import large data. > You need to export large sstable file to many small json files, and run > import. Why would you ever read the whole file in

Re: Heap sudden jump during import

2010-04-06 Thread JKnight JKnight
On import, all the data in the json file is loaded into memory, so you cannot import large data sets. You need to export the large sstable file to many small json files, and run the import on each. On Mon, Apr 5, 2010 at 5:26 PM, Jonathan Ellis wrote: > Usually sudden heap jumps involve compacting large r

Re: Heap sudden jump during import

2010-04-05 Thread Jonathan Ellis
; will freeze for long time (terrible latency, no response to nodetool that I > have to stop the import client ) before it comes back to normal . It's a > single node cluster with JVM maximum heap size of 3GB. So what could cause > this spike? What kind of tool can I use to find out

Re: Heap sudden jump during import

2010-04-03 Thread Weijun Li
umns (700bytes each) to Cassandra: the process ran smoothly for about 20mil then the heap usage suddenly jumped from 2GB to 3GB which is the up limit of JVM, --from this point Cassandra

Re: Heap sudden jump during import

2010-04-03 Thread Benoit Perroud
ing a test to write 30 million columns (700bytes each) to >> > Cassandra: >> > the process ran smoothly for about 20mil then the heap usage suddenly >> > jumped >> > from 2GB to 3GB which is the up limit of JVM, --from this point >> > Cassandra >>

Re: Heap sudden jump during import

2010-04-03 Thread Weijun Li
dra: > > the process ran smoothly for about 20mil then the heap usage suddenly > jumped > > from 2GB to 3GB which is the up limit of JVM, --from this point Cassandra > > will freeze for long time (terrible latency, no response to nodetool that > I > > have to stop th

Re: Heap sudden jump during import

2010-04-03 Thread Benoit Perroud
y for about 20mil then the heap usage suddenly jumped > from 2GB to 3GB which is the up limit of JVM, --from this point Cassandra > will freeze for long time (terrible latency, no response to nodetool that I > have to stop the import client ) before it comes back to normal . It's a >

Heap sudden jump during import

2010-04-02 Thread Weijun Li
etool that I have to stop the import client ) before it comes back to normal . It's a single node cluster with JVM maximum heap size of 3GB. So what could cause this spike? What kind of tool can I use to find out what are the objects that are filling the additional 1GB heap? I did a heap dump but