Check this out:
http://www.datastax.com/docs/1.0/install/upgrading#upgrading-between-minor-releases-of-cassandra-1-0-x
Cheers
On 11.03.2012 at 07:42, Tamar Fraenkel wrote:
Hi!
I want to experiment with upgrading. Does anyone have a good link on how to
upgrade Cassandra?
Thanks,
*Tamar Fraenkel*
After more than 9 hours, I restarted the node and reused the join
command (data + cache + commitlog had not been erased), and now the node
is in normal state in less than a second:
nodetool -h localhost ring
Address    DC    Rack    Status    State    Load    Owns    Token
Hi!
I need some advice:
I have a user CF, which has a UUID key which is my internal user id.
One of the columns is the facebook_id of the user (if it exists).
I need to have the reverse mapping from facebook_id to my UUID.
My intention is to add a CF for the mapping from Facebook Id to my id:
user_by_fbid =
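That dual-write pattern (update the forward CF and the reverse CF together on every write) can be sketched in Python, with plain dicts standing in for the two column families; all names here are illustrative, not from the actual schema:

```python
# Sketch of the proposed reverse mapping: whenever a user is written,
# both the forward CF (uuid -> user data) and the reverse CF
# (facebook_id -> uuid) are updated together. Dicts stand in for CFs.
import uuid

users = {}         # stands in for the "user" CF, keyed by internal UUID
user_by_fbid = {}  # stands in for the proposed "user_by_fbid" CF

def add_user(facebook_id=None):
    user_id = uuid.uuid4()
    users[user_id] = {"facebook_id": facebook_id}
    if facebook_id is not None:
        user_by_fbid[facebook_id] = user_id  # keep the reverse CF in sync
    return user_id

uid = add_user(facebook_id="100004321")
assert user_by_fbid["100004321"] == uid   # facebook_id -> internal UUID
```

The cost is one extra write per user; the benefit is a direct key lookup by facebook_id instead of a scan.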
Thanks! will check it in the following days :)
*Tamar Fraenkel *
Senior Software Engineer, TOK Media
ta...@tok-media.com
Tel: +972 2 6409736
Mob: +972 54 8356490
Fax: +972 2 5612956
On Sun, Mar 11, 2012 at 10:00 AM, Marcel Steinbach wrote:
> Check this out:
> http://www.datastax.com/docs/1.0/install/upgrading#upgrading-between-minor-releases-of-cassandra-1-0-x
Either you do that or you could think about using a secondary index on the
fb user name in your primary cf.
See http://www.datastax.com/docs/1.0/ddl/indexes
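For reference, such an index would look roughly like this in CQL; the CF and column names here are assumptions based on the thread, so see the DataStax page above for the authoritative syntax:

```sql
-- assumed CF/column names
CREATE INDEX users_fbid_idx ON users (facebook_id);

-- afterwards, users can be looked up by facebook id directly:
SELECT * FROM users WHERE facebook_id = '100004321';
```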
Cheers
On 11.03.2012 at 09:51, Tamar Fraenkel wrote:
Hi!
I need some advice:
I have a user CF, which has a UUID key which is my internal user id.
Hi!
Thanks for the response.
From what I read, secondary indices are good only for columns with few
possible values. Is this a good fit for my case? I have a unique
facebook id for every user.
Thanks
*Tamar Fraenkel *
Hi All,
I am using a TTL of 3 hours and GC grace 0 for a CF. I have a normal CF
that has records with a 3-hour TTL, and I don't send any delete requests.
I just wonder if using GC grace 0 will cause any problem except extra
memory/IO/network load. I know that gc grace is for not transferring
deleted records
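As a concrete illustration of that setup in the 1.0-era cassandra-cli (the CF name, row key, and column are placeholders, and the syntax is a sketch; 10800 seconds = 3 hours):

```
update column family events with gc_grace = 0;
set events['row-key']['col'] = 'value' with ttl = 10800;
```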
One thing you may want to look at is the meanRowSize from nodetool
cfstats and your compression block size. In our case the mean
compacted size is 560 bytes, and a 64KB block size caused high CPU
usage and a lot of short-lived memory. I have brought my block size
down to 16K.
The result tables are not no
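For reference, the block-size change described above is a per-CF setting; in the 1.0-era cassandra-cli it would look roughly like this (the CF name is a placeholder and the exact syntax should be checked against the docs):

```
update column family mycf
  with compression_options = {sstable_compression: SnappyCompressor,
                              chunk_length_kb: 16};
```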
Using cassandra-1.0.6 one node fails to start.
java.lang.OutOfMemoryError: Java heap space
	at org.apache.cassandra.utils.obs.OpenBitSet.<init>(OpenBitSet.java:104)
	at org.apache.cassandra.utils.obs.OpenBitSet.<init>(OpenBitSet.java:92)
	at org.apache.cassandra.utils.BloomFilterSerializ
> I am using TTL 3 hours and GC grace 0 for a CF. I have a normal CF that has
> records with TTL 3 hours and I dont send any delete request. I just wonder
> if using GC grace 0 will cause any problem except extra Memory/IO/network
> load. I know that gc grace is for not transferring deleted records
> How did this bloom filter get too big?
Bloom filters grow with the amount of row keys you have. It is natural
that they grow bigger over time. The question is whether there is
something "wrong" with this node (for example, lots of sstables and
disk space used due to compaction not running,
On Sun, 2012-03-11 at 15:06 -0700, Peter Schuller wrote:
> If it is legitimate use of memory, you *may*, depending on your
> workload, want to adjust target bloom filter false positive rates:
>
> https://issues.apache.org/jira/browse/CASSANDRA-3497
This particular cf has up to ~10 billion rows
> This particular cf has up to ~10 billion rows over 3 nodes. Each row is
With default settings, 143 million keys roughly gives you 2^31 bits of
bloom filter. Or put another way, you get about 1 GB of bloom filters
per 570 million keys, if I'm not mistaken. If you have 10 billion
rows, that should
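Those ratios can be sanity-checked with a quick back-of-the-envelope script; note that the ~15 bits/key figure below is derived from the numbers quoted in this thread, not an official Cassandra constant:

```python
# Rough bloom filter sizing from the figures quoted above:
# "143 million keys -> 2^31 bits" implies ~15 bits per row key.
BITS_PER_KEY = 2**31 / 143e6           # ~15.0

def bloom_bytes(row_keys):
    """Approximate bloom filter size in bytes for a given key count."""
    return row_keys * BITS_PER_KEY / 8

# Cross-check: ~570 million keys should come out near 1 GB.
print(bloom_bytes(570e6) / 1e9)        # ~1.07 GB

# 10 billion rows over 3 nodes at RF=1 -> ~3.33 billion keys per node:
print(bloom_bytes(10e9 / 3) / 2**30)   # ~5.8 GiB of bloom filter per node
```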
On Sun, 2012-03-11 at 15:36 -0700, Peter Schuller wrote:
> Are you doing RF=1?
That is correct. So are your calculations then :-)
> > very small, <1k. Data from this cf is only read via hadoop jobs in batch
> > reads of 16k rows at a time.
> [snip]
> > It's my understanding then for this use cas
I'm having difficulties with leveled compaction, it's not making fast
enough progress. I'm on a quad-core box and it only does one compaction
at a time. Cassandra version: 1.0.6. Here's nodetool compaction stats:
# nodetool -h localhost compactionstats
pending tasks: 2568
compaction type
> multithreaded_compaction: false
Set to true.
--
/ Peter Schuller (@scode, http://worldmodscode.wordpress.com)
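For context, both of the relevant knobs live in cassandra.yaml in the 1.0.x line; the values here are illustrative:

```yaml
# cassandra.yaml (Cassandra 1.0.x)
multithreaded_compaction: true
# concurrent_compactors defaults to the number of cores; it can be pinned:
concurrent_compactors: 4
```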
On 3/11/2012 9:17 PM, Peter Schuller wrote:
>> multithreaded_compaction: false
> Set to true.
I did try that. I didn't see it go any faster. The CPU load was lower,
which I assumed meant fewer bytes/sec being compressed
(SnappyCompressor). I didn't see multiple compactions in parallel.
Nodetool com