I think I'd rather wait until I'm able to upgrade the current cluster and
then do the migration.
Thanks!
On Thu, Jan 9, 2014 at 8:41 PM, Robert Coli wrote:
> On Thu, Jan 9, 2014 at 6:54 AM, Or Sher wrote:
>
>> I want to use sstableloader in order to load 1.0.9 data to a 2.0.*
>> cluster.
>
On Thu, Jan 9, 2014 at 6:54 AM, Or Sher wrote:
> I want to use sstableloader in order to load 1.0.9 data to a 2.0.* cluster.
> I know that the sstable format is incompatible between the two versions.
> What are my options?
>
http://www.palominodb.com/blog/2012/09/25/bulk-loading-options-cassandr
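FWIW, a minimal sketch of driving sstableloader from a script, assuming the sstables have already been brought to a format the target cluster can read (e.g. after an upgrade) and that the 2.0-era sstableloader accepts -d for the initial contact points; the paths and host below are placeholders, not anything from the thread:

    import subprocess

    # Hypothetical paths/hosts, for illustration only.
    sstable_dir = "/var/lib/cassandra/data/my_keyspace/my_table"  # laid out as <keyspace>/<table>
    contact_point = "10.0.0.1"

    # Stream the sstables in sstable_dir into the ring reachable via the given node.
    subprocess.run(
        ["sstableloader", "-d", contact_point, sstable_dir],
        check=True,  # raise if the loader exits non-zero
    )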
Thanks for the answers. It went quite well. Note what Aaron writes about sstable
names: I did the job before his mail and renamed one file incorrectly :-) That
caused some trouble (a lot of missing-file errors), which I think was to blame
for some counter CF being messed up. As it was not important…
Sounds about right, I've done similar things before.
Some notes…
* I would make sure repair has completed on the source cluster before making
changes. I just like to know the data is distributed. I would also do it once all
the moves are done.
* Rather than flush, take a snapshot and copy from the snapshot (a rough sketch
of that step follows below).
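A rough sketch of the snapshot step, assuming nodetool is on the PATH and that snapshots end up under .../snapshots/<tag> inside the data directory (the exact layout varies by version); the keyspace name, data path and tag are all placeholders:

    import glob
    import subprocess

    keyspace = "my_keyspace"               # placeholder
    data_dir = "/var/lib/cassandra/data"   # placeholder
    tag = "premigration"

    # Take a named snapshot instead of just flushing; hard links are cheap
    # and give a stable set of files to copy from.
    subprocess.run(["nodetool", "snapshot", "-t", tag, keyspace], check=True)

    # Find the snapshot directories to copy (the layout differs between
    # versions, hence the recursive pattern).
    snapshot_dirs = glob.glob(f"{data_dir}/{keyspace}/**/snapshots/{tag}", recursive=True)
    for d in snapshot_dirs:
        print("copy from:", d)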
To get it "correct", meaning consistent, it seems you will need to do
a repair no matter what, since the source cluster is taking writes
(and writing to the commit log) during this time. So to avoid filename
issues, just do the first copy and then repair. I am not sure if they
can have any filename.
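Along those lines, a minimal sketch of the "copy first, then repair" tail end on the destination cluster; it assumes nodetool refresh is available in the version in use to pick up the copied sstables without a restart, and the keyspace/table names are placeholders:

    import subprocess

    keyspace, table = "my_keyspace", "my_table"   # placeholders

    # Pick up sstables that were copied into the data directory.
    subprocess.run(["nodetool", "refresh", keyspace, table], check=True)

    # Repair afterwards so writes taken by the source during the copy are reconciled.
    subprocess.run(["nodetool", "repair", keyspace], check=True)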
When compacting it will use the path with the greatest free space. When
compaction completes successfully the files will lose their temporary status
and that will be their new home.
Aaron
On 18 Mar 2011, at 14:10, John Lewis wrote:
> data_file_directories makes it seem as though cassandra can use more than one
> location for sstable storage. […]
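Just to illustrate the selection Aaron describes above (this is not Cassandra's actual code), a toy sketch that picks whichever configured data directory currently has the most free space; the directory paths are made up:

    import shutil

    # Hypothetical data_file_directories entries.
    data_dirs = ["/mnt/disk1/cassandra/data", "/mnt/disk2/cassandra/data"]

    def pick_compaction_target(dirs):
        """Return the directory with the greatest free space right now."""
        return max(dirs, key=lambda d: shutil.disk_usage(d).free)

    print(pick_compaction_target(data_dirs))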
Thanks Maki :)
I copied the existing var folder to the new hard disk
and changed the path to the data directories in storage-config.xml.
I was then successfully able to connect with Cassandra and read the data that
had been moved to the new location.
On Fri, Mar 18, 2011 at 6:33 AM, Maki Watanabe wrote:
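In case it helps anyone doing the same move, a small sketch of the copy step, assuming Cassandra is stopped first; the paths are placeholders, and ownership may still need fixing by hand afterwards since copytree does not preserve it:

    import shutil

    old_data_dir = "/var/lib/cassandra"        # placeholder: existing var folder
    new_data_dir = "/mnt/newdisk/cassandra"    # placeholder: location on the new disk

    # Copy everything (data, commitlog, saved caches) to the new disk, then
    # point storage-config.xml / cassandra.yaml at new_data_dir and restart.
    shutil.copytree(old_data_dir, new_data_dir)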
data_file_directories makes it seem as though cassandra can use more than one
location for sstable storage. Does anyone know how it splits up the data
between partitions? I am trying to plan for just about every worst-case
scenario I can right now, and I want to know if I can change the config…
Refer to:
http://wiki.apache.org/cassandra/StorageConfiguration
You can specify the data directories with the following parameters in
storage-config.xml (or cassandra.yaml in 0.7+); see the small sketch below for
reading them back out of cassandra.yaml.
commitlog_directory : where the commit log will be written
data_file_directories : data (sstable) files
saved_caches_directory : saved key/row caches
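For 0.7+, a quick way to check what a node is configured with is to read cassandra.yaml directly; a sketch assuming PyYAML is installed and a typical file location:

    import yaml  # PyYAML, assumed installed

    # Common location on packaged installs; adjust as needed.
    with open("/etc/cassandra/cassandra.yaml") as f:
        conf = yaml.safe_load(f)

    print("commitlog_directory:   ", conf.get("commitlog_directory"))
    print("data_file_directories: ", conf.get("data_file_directories"))
    print("saved_caches_directory:", conf.get("saved_caches_directory"))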
FWIW, I'm working on migrating a large amount of data out of Oracle into my
test cluster. The data has been warehoused as CSV files on Amazon S3. Having
that in place lets me avoid putting extra load on the production service while
doing many repeated tests. I then parse the data using the Python csv module…
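Roughly what that parse-and-load step can look like; this sketch assumes the CSV has already been pulled down from S3, uses the DataStax cassandra-driver package rather than whatever client was actually used here, and all the names (keyspace, table, columns, file) are made up:

    import csv
    from cassandra.cluster import Cluster  # DataStax python driver, assumed installed

    cluster = Cluster(["127.0.0.1"])       # test cluster contact point (placeholder)
    session = cluster.connect("test_ks")   # placeholder keyspace

    insert = session.prepare(
        "INSERT INTO customers (id, name, created) VALUES (?, ?, ?)"  # placeholder table/columns
    )

    with open("customers.csv", newline="") as f:   # already downloaded from S3
        for row in csv.reader(f):
            cust_id, name, created = row
            session.execute(insert, (int(cust_id), name, created))

    cluster.shutdown()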
I'm afraid there is no short answer.
The long answer is,
1) Read about Cassandra data modeling at
http://wiki.apache.org/cassandra/ArticlesAndPresentations. It is not
as simple as "one table equals one columnfamily."
2) Write a program to read your data out of SQL Server and write it
into Cassandra (a bare-bones sketch follows below).
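A bare-bones sketch of step 2, assuming pyodbc for the SQL Server side and the DataStax cassandra-driver for the Cassandra side; the connection string, table and column names are all placeholders, and the real work is deciding on the target data model first (step 1):

    import pyodbc  # assumed installed, with a SQL Server ODBC driver present
    from cassandra.cluster import Cluster  # DataStax python driver, assumed installed

    # Placeholder connection details.
    mssql = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};SERVER=sqlhost;DATABASE=appdb;UID=user;PWD=secret"
    )
    session = Cluster(["127.0.0.1"]).connect("app_ks")

    insert = session.prepare(
        "INSERT INTO orders_by_customer (customer_id, order_id, total) VALUES (?, ?, ?)"
    )

    cursor = mssql.cursor()
    cursor.execute("SELECT customer_id, order_id, total FROM dbo.Orders")
    for customer_id, order_id, total in cursor.fetchall():
        session.execute(insert, (customer_id, order_id, total))

    mssql.close()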