This will work. I tried both; each gave a unique one-day bucket.
I just realized that if I sync all clients to one time zone, then the date will
remain the same for all of them.
A single-zone date effectively gives a materialized view per row.
On Mon, Apr 30, 2012 at 11:43 PM, samal wrote:
> hhmm. I will try both. thanks
>
>
> On Mon, Apr
I think it makes sense, and I would be happy if you can share the incremental
snapshot scripts.
Thanks!
*Tamar Fraenkel *
Senior Software Engineer, TOK Media
ta...@tok-media.com
Tel: +972 2 6409736
Mob: +972 54 8356490
Fax: +972 2 5612956
On Tue, May 1, 2012 at 11:
Many thanks Aaron. I will post a support issue for them, but will keep the
snapshot + incremental backups + commit logs to recover from any failure
situation.
Hey,
There is a push to use Akamai IPA to accelerate traffic between our
Cassandra nodes. Ignoring all other complexities this introduces, is it
possible to use CNAMEs for broadcast addresses? I'm also assuming this
restricts us to using only the PropertyFileSnitch (since we are not
strictly in the
If you delete the commit logs you are rolling back to exactly what was in the
snapshot. When you take a snapshot it flushes the memtables first, so there is
nothing in the commit log that is not in the snapshot. Rolling back to a
snapshot is a rollback to that point in time.
If you want to resto
I would try to avoid 100s of MBs per row. It will take longer to compact and
repair.
10s is fine. Take a look at in_memory_compaction_limit and thrift_frame_size
in the yaml file for some guidance.
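For reference, a sketch of how those knobs might look in cassandra.yaml. The exact key names below are per the 1.0-era config and are an assumption; check the yaml shipped with your version:

```yaml
# Size limit for a row to be compacted in memory; rows larger than this
# use the slower two-pass on-disk compaction path.
in_memory_compaction_limit_in_mb: 64

# Frame size for the Thrift transport; a single request (and so the rows
# returned in one call) must fit within this.
thrift_framed_transport_size_in_mb: 15
```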
Cheers
-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpi
I would try copying instead of moving first, and then drop the old
CF or the no-longer-needed snapshot, if necessary, once everything is OK.
2012/5/1 Oleg Proudnikov :
> Benoit Perroud noisette.ch> writes:
>
>>
>> You can copy the sstables (renaming them accordingly) and
>> call nodetool refresh.
>>
Henrik Schröder gmail.com> writes:
> But what's the difference between doing an extra read from that
> One Big File, than doing an extra read from whatever SSTable
> happen to be largest in the course of automatic minor compaction?
There is this note regarding major compaction in the tuning gu
+1
On Tue, May 1, 2012 at 12:06 PM, Edward Capriolo wrote:
> Also there are some tickets in JIRA to impose a max sstable size and
> some other related optimizations that I think got stuck behind levelDB
> in coolness factor. Not every use case is good for leveled so adding
> more tools and optimi
Benoit Perroud noisette.ch> writes:
>
> You can copy the sstables (renaming them accordingly) and
> call nodetool refresh.
>
Thank you, Benoit.
In that case could I try snapshot+move&rename+refresh on a live system?
Regards,
Oleg
Also there are some tickets in JIRA to impose a max sstable size and
some other related optimizations that I think got stuck behind levelDB
in coolness factor. Not every use case is good for leveled so adding
more tools and optimizations of the Size Tiered tables would be
awesome.
On Tue, May 1, 2
On Tue, May 1, 2012 at 10:20 AM, Tim Wintle wrote:
> I believe that the general design for time-series schemas looks
> something like this (correct me if I'm wrong):
>
> (storing time series for X dimensions for Y different users)
>
> Row Keys: "{USER_ID}_{TIMESTAMP/BUCKETSIZE}"
> Columns: "{DIME
The point of NoSQL is flexibility; the point of an RDBMS is structure and
guarantees. IMHO the two patterns do overlap, but they have different USPs.
On Mon, Apr 30, 2012 at 3:51 AM, Maxim Potekhin wrote:
> About a year ago I started getting a strange feeling that
> the noSQL community is busy re-creating
I believe that the general design for time-series schemas looks
something like this (correct me if I'm wrong):
(storing time series for X dimensions for Y different users)
Row Keys: "{USER_ID}_{TIMESTAMP/BUCKETSIZE}"
Columns: "{DIMENSION_ID}_{TIMESTAMP%BUCKETSIZE}" -> {Counter}
But I've not fou
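The bucketing scheme above can be sketched as plain key and column-name construction. Python is used for illustration; the one-day BUCKET_SIZE and the helper names are assumptions, not from the original post:

```python
BUCKET_SIZE = 86400  # seconds per bucket: one day (an assumed value)

def row_key(user_id, ts):
    # Row key: all of one user's counters for one time bucket land in one row.
    return "%s_%d" % (user_id, ts // BUCKET_SIZE)

def column_name(dimension_id, ts):
    # Column name: the offset within the bucket keeps columns
    # time-ordered per dimension.
    return "%s_%d" % (dimension_id, ts % BUCKET_SIZE)

# Example: user 42, dimension "cpu", at 2012-05-01 00:00:00 UTC
print(row_key("42", 1335830400))       # -> 42_15461
print(column_name("cpu", 1335830400))  # -> cpu_0
```

Reads for a time range then become a row fetch per bucket, with a column slice inside each bucket.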
On Tue, May 1, 2012 at 4:31 AM, Henrik Schröder wrote:
> But what's the difference between doing an extra read from that One Big
> File, than doing an extra read from whatever SSTable happen to be largest in
> the course of automatic minor compaction?
The primary differences, as I understand it,
!! Without any guarantee. I know it works but I never used this in production !!
You can copy the sstables (renaming them accordingly) and call nodetool refresh.
Don't forget to create your column family CF2 beforehand.
2012/5/1 Oleg Proudnikov :
> Hello,
>
> Is it possible to create an exact repli
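The copy-and-rename step in the procedure above can be sketched in Python. The 'Keyspace-CF-version-generation-Component.db' filename layout and the clone_sstables helper are assumptions for illustration; you would still create CF2 and run nodetool refresh yourself afterwards:

```python
import os
import shutil

def clone_sstables(src_dir, dst_dir, keyspace, old_cf, new_cf):
    """Copy old_cf's sstable files into dst_dir, renamed for new_cf.

    Assumes 1.0-style file names such as 'KS-CF1-hc-1-Data.db'. Run
    'nodetool refresh <keyspace> <new_cf>' afterwards to load them.
    """
    prefix = "%s-%s-" % (keyspace, old_cf)
    copied = []
    for name in sorted(os.listdir(src_dir)):
        if name.startswith(prefix):
            # Swap only the CF component; keep generation and component.
            new_name = "%s-%s-%s" % (keyspace, new_cf, name[len(prefix):])
            shutil.copy2(os.path.join(src_dir, name),
                         os.path.join(dst_dir, new_name))
            copied.append(new_name)
    return copied
```

Copying (rather than moving) leaves the snapshot intact, so a failed refresh loses nothing.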
Hi,
I'm having problems in my Cassandra/Hadoop (1.0.8 + cdh3u3) cluster related to
how Cassandra splits the data to be processed by Hadoop.
I'm currently testing a map reduce job, starting from a CF of roughly 1500
rows, with
cassandra.input.split.size 10
cassandra.range.batch.size 1
but what
On Mon, Apr 30, 2012 at 6:48 PM, Jonathan Ellis wrote:
> On Mon, Apr 30, 2012 at 7:49 PM, Cord MacLeod wrote:
>> Hello group,
>>
>> I'm a new Cassandra and Java user so I'm still trying to get my head around
>> a few things. If you've disabled swap on a machine what is the reason to
>> use JNA
Hello,
Is it possible to create an exact replica of a CF by these steps?
1. Take a snapshot
2. Isolate sstables for CF1
3. Rename sstables into CF2
4. Bulk load renamed sstables into newly created CF2 within the same Keyspace
Or would you suggest using sstable2json instead?
Thank you very much,
I wonder if TieredMergePolicy [1] could be used in Cassandra for compaction?
1.
http://blog.mikemccandless.com/2011/02/visualizing-lucenes-segment-merges.html
On Tue, May 1, 2012 at 6:38 AM, Edward Capriolo wrote:
> Henrik,
>
> There are use cases where major compaction works well like yours an
Henrik,
There are use cases where major compaction works well, like yours and
mine: essentially cases with a high amount of churn. With lots of updates
and deletes we get a lot of benefit from forced tombstone removal, in the
form of less physical data.
However we end up with really big sstables that naturally
But what's the difference between doing an extra read from that One Big
File and doing an extra read from whatever SSTable happens to be largest
in the course of automatic minor compaction?
We have a pretty update-heavy application, and doing a major compaction can
remove up to 30% of the used di
https://issues.apache.org/jira/browse/CASSANDRA-4206
Regards,
Patrik
On Tue, May 1, 2012 at 03:46, Jonathan Ellis wrote:
> On Mon, Apr 30, 2012 at 2:11 PM, Patrik Modesto
> wrote:
>> I think the problem is somehow connected to an IntegerType secondary
>> index.
>
> Could be, but my money is on
Thank you Aaron.
That explanation cleared things up.
2012/4/30 aaron morton :
> Depends on your definition of significantly, there are a few things to
> consider.
>
> * Reading from SSTables for a request is a serial operation. Reading from 2
> SSTables will take twice as long as 1.
>
> * If the d
On another thought, I am writing a code/script for taking a backup of all the
nodes in a single DC, renaming the data files with some UID, and then merging
them. The storage, however, would happen on some storage medium (NAS, for
example) which would be in the same DC. This would help in data copying a non he