Done it. Now it generally runs OK, until one of the nodes gets stuck at
100% CPU and I need to reboot it.
The last lines in system.log just before that are:
INFO [OptionalTasks:1] 2012-03-13 07:36:43,850 MeteredFlusher.java (line 62) flushing high-traffic column family CFS(Keyspace='tok', ColumnFami
Thanks!
Better than mine, as it considered later additions of services!
I will update my code.
Thanks
*Tamar Fraenkel *
Senior Software Engineer, TOK Media
ta...@tok-media.com
Tel: +972 2 6409736
Mob: +972 54 8356490
Fax: +972 2 5612956
On Mon, Mar 12, 2012 at 11:
Just ignore it: https://issues.apache.org/jira/browse/CASSANDRA-3955
On Mon, Mar 12, 2012 at 9:31 PM, Roshan wrote:
> Hi
>
> I have upgraded our development Cassandra cluster (2 nodes) from 1.0.6 to
> the 1.0.8 version.
>
> After upgrading to 1.0.8, one node keeps trying to send hints every 10
>
> > > > It's my understanding then for this use case that bloom filters are of
> > > > little importance and that I can
OK. To summarise, in the hope that it may help others one day, these are
the actions that got us out of this situation:
1) upgrade to 1.0.7
2) set fp_ratio=0.99
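For reference, a minimal cassandra-cli sketch of step 2; the column family
name is hypothetical, and this assumes the per-CF bloom_filter_fp_chance
attribute that became available around 1.0.7:

    update column family MyCF with bloom_filter_fp_chance = 0.99;

Existing SSTables keep their old filters until they are rewritten by
compaction or scrub.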
It's hard to answer this question because there is a whole bunch of
operations which may cause disk usage to grow - repair, compaction, move,
etc. Any combination of these operations will only make things worse.
But let's assume that in your case the only operation increasing disk
usage was "move"
On Mon, Mar 12, 2012 at 4:44 AM, aaron morton wrote:
> I don't understand why I
> don't get multiple concurrent compactions running, that's what would
> make the biggest performance difference.
>
> concurrent_compactors
> Controls how many concurrent compactions to run; by default it's the number
Cassandra v1.0.8
once again: 4-node cluster, RF = 3.
On 12.03.2012 16:18, Rustam Aliyev wrote:
What version of Cassandra do you have?
On 12/03/2012 11:38, Vanger wrote:
We were aware of the compaction overhead, but still don't understand why
that should happen: node 'D' was in a stable condition,
What version of Cassandra do you have?
On 12/03/2012 11:38, Vanger wrote:
We were aware of the compaction overhead, but still don't understand why
that should happen: node 'D' was in a stable condition, had been working
for at least a month, had all the data for its token range and was
comfortable with that disk sp
We were aware of the compaction overhead, but still don't understand why
that should happen: node 'D' was in a stable condition, had been working
for at least a month, had all the data for its token range and was
comfortable with that disk space.
Why does the node suddenly need 2x more space for data it already has? Why
de
Hi,
If you use SizeTieredCompactionStrategy, you should have 2x the disk space
to be on the safe side. So if you want to store 2TB of data, you need a
partition of at least 4TB. LeveledCompactionStrategy is available in 1.x
and is supposed to require less free disk space (but comes at the price of
extra I/O).
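For illustration, switching a column family to leveled compaction from
cassandra-cli might look like the following; the CF name and sstable size
are assumptions, not recommendations:

    update column family MyCF
      with compaction_strategy = 'LeveledCompactionStrategy'
      and compaction_strategy_options = {sstable_size_in_mb: 10};

With leveled compaction the free-space requirement is typically on the
order of ten times sstable_size_in_mb per compaction, rather than a full
extra copy of the data.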
I don't know if it can help, but the only thing I see on the cluster's
nodes is:
==> /var/log/cassandra/output.log <==
INFO 10:57:28,530 InetAddress /10.0.1.70 is now dead.
when I try to join the node 10.0.1.70 to the cluster
On 3/12/12 11:27 AM, Cyril Scetbon wrote:
It's done.
Nothing new o
Modify this line in log4j-server.properties. It will normally be located in
/etc/cassandra:
https://github.com/apache/cassandra/blob/trunk/conf/log4j-server.properties#L21
Change INFO to DEBUG
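If your copy matches trunk, the edited line should end up looking roughly
like this (stock packaging assumed):

    # /etc/cassandra/log4j-server.properties
    log4j.rootLogger=DEBUG,stdout,R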
Cheers
-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
> I don't understand why I
> don't get multiple concurrent compactions running, that's what would
> make the biggest performance difference.
concurrent_compactors
Controls how many concurrent compactions to run; by default it's the number of
cores on the machine.
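To pin it explicitly you can set it in cassandra.yaml and restart the node;
the value below is purely illustrative:

    # cassandra.yaml
    concurrent_compactors: 4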
If you are not CPU bound check i
>>> It's my understanding then for this use case that bloom filters are of
>>> little importance and that I can
>>
Yes.
AFAIK there is only one position seek (that will use the bloom filter) at the
start of a get_range_slice request. After that the iterators step over the rows
in the -Data file
*We have a Cassandra 4-node cluster* with RF = 3 (nodes named from 'A' to
'D'); initial tokens:
*A (25%)*: 20543402371996174596346065790779111550
*B (25%)*: 63454860067234500516210522518260948578
*C (25%)*: 106715317233367107622067286720208938865
*D (25%)*: 1501411834604692317316873037158841057
An alternative would be to add another row to your user CF, specific to
Facebook ids. The column name would be the Facebook identifier and the
value would be your internal uuid.
Consider when you want to add another service like Twitter. Will you then
add another CF per service, or just another row specific now
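Sketched out (the CF name, ids and values here are hypothetical), those
lookup rows would sit alongside the normal user rows:

    Users CF:
      row 'facebook' : { '12345' : <internal uuid>, '67890' : <internal uuid> }
      row 'twitter'  : { '9876'  : <internal uuid> }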
In this case, where you know the query upfront, I add a custom secondary index
using another CF to support the query. It's a little easier here because the
data won't change.
UserLookupCF (using composite types for the key value)
row_key: e.g. "facebook:12345" or "twitter:12345"
col_name : e.g
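A cassandra-cli sketch of such a lookup CF; this assumes plain UTF8
"service:id" row keys rather than a true CompositeType, so adjust to
whichever you prefer:

    create column family UserLookup
      with key_validation_class = UTF8Type
      and comparator = UTF8Type
      and default_validation_class = UUIDType;

You would then store your internal uuid as the column value for each
external id.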
On 3/12/12 9:50 AM, aaron morton wrote:
It may be the case that the joining node does not have enough
information. But there is a default 30 second delay while the node
waits for the ring information to stabilise.
What version are you using?
1.0.7
Next time you add a new node can you try i
Thank you for the swift response.
Cem.
On Sun, Mar 11, 2012 at 11:03 PM, Peter Schuller
<peter.schul...@infidyne.com> wrote:
> > I am using TTL 3 hours and GC grace 0 for a CF. I have a normal CF that has
> > records with TTL 3 hours and I don't send any delete request. I just wonder
> > if
It may be the case that the joining node does not have enough information. But
there is a default 30 second delay while the node waits for the ring
information to stabilise.
What version are you using?
Next time you add a new node can you try it with logging set to DEBUG. If you
get the er
If it's a Hector thing you may have better luck on the Hector user group.
http://groups.google.com/group/hector-users
Cheers
-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
On 10/03/2012, at 8:33 AM, Daning Wang wrote:
> Thanks Maciej. We have defa