Hello.
For me " there are no dirty column families" in your message tells it's
possibly the same problem.
The issue is that column families that gets full row deletes only do not
get ANY SINGLE dirty byte accounted and so can't be picked by flusher.
Any ratio can't help simply because it is mu
Does any one have idea for this? Thanks~
2012/4/24 马超
> Hi all,
>
> I am having some trouble with Cassandra in my production environment:
>
> I built an RPC server which uses the Hector client to access
> Cassandra. Weird things happen nowadays: the latency of the RPC sometimes
> becomes very high (10 seconds~7
If you set the trace level for IncomingTcpConnection, the message "Version
is now ..." will be printed for every inter-node message received
by the node, including Gossip.
Under high traffic, enabling this log will saturate the IO of your log disk by itself.
You would be better off inspecting nodetool tpstats.
Thank you Aaron.
On Mon, Apr 23, 2012 at 2:39 PM, aaron morton wrote:
> No.
>
> CounterColumnType only works with column values, which are not sorted.
> Sorting counters while they are being updated is potentially very
> expensive.
>
> You have a few options:
>
> 1) If the list of counters is sho
Hi,
Is it possible to perform a search on supercolumn metadata?
Supercolumns do not accept secondary indexes, so that is not possible?
--
Juan Ezquerro LLanes
From: mdione@orange.com [mailto:mdione@orange.com]
> [default@avatars] describe HBX_FILE;
> ColumnFamily: HBX_FILE
> Key Validation Class: org.apache.cassandra.db.marshal.BytesType
> Default column value validator:
> org.apache.cassandra.db.marshal.BytesType
> Columns s
I read a while ago that a "compaction" would rebuild the index. You can
trigger this by running "repair" with nodetool.
2012/4/24
> From: mdione@orange.com [mailto:mdione@orange.com]
> > [default@avatars] describe HBX_FILE;
> > ColumnFamily: HBX_FILE
> > Key Validation Class
At least for TimeUUIDs, this email I sent to client-dev@ a couple of weeks
ago should help to explain things:
http://www.mail-archive.com/client-dev@cassandra.apache.org/msg00125.html
Looking at the linked pycassa code might be the most useful thing.
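As a rough sketch of what that looks like in practice (assuming pycassa is
installed; the timestamp below is just an arbitrary example), the helper in
pycassa.util can be called like this:

    # Sketch only: pycassa's helper for turning a timestamp into a TimeUUID.
    from pycassa.util import convert_time_to_uuid

    ts = 1335222000  # example Unix timestamp in seconds (arbitrary)

    # Lowest-sorting TimeUUID for this timestamp (lowest_val=True is the default),
    # useful as the start of a column slice.
    low = convert_time_to_uuid(ts, lowest_val=True)

    # Highest-sorting TimeUUID for the same timestamp, useful as the end of a slice.
    high = convert_time_to_uuid(ts, lowest_val=False)

    print(low, high)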
On Tue, Apr 24, 2012 at 1:46 AM, Drew Kutchari
Hi,
I'm losing part of the key cache on restart with Cassandra 1.0.7.
For example:
- cfstats reports key cache size of 13,040,502 with capacity of 15,000,000.
- Cassandra log reports 12,955,585 of them have been saved on the last save
events.
- On restart Cassandra reads saved cache.
- c
The Cassandra team is very pleased to announce the release of Apache Cassandra
version 1.1.0. Cassandra 1.1.0 is a new major release for the Apache Cassandra
distributed database. This version adds numerous improvements[1,2], amongst
which:
- Schema updates have been reworked and conflict from
Thank you very much for your reply~
The full message looks like this:
DEBUG [Thread-6] 2012-04-24 21:04:11,024 IncomingTcpConnection.java (line
116) Version is now 3
During "blocking time" I only saw this message(*appendix shown) *and seems
everything is blocked.
I logged all cassandra calling time i
The tpstats shows there was no read request pending on this node. Maybe you
should have a look at the other nodes first.
But my suggestion is to upgrade hector to 1.0.x if possible. Hector 0.7 is
for cassandra 0.7. Just FYI, we had some issues when upgrading cassandra
from 0.8 to 1.0. And the prob
I will start with a hypothetical scenario to make my question easy to
comprehend.
Time 1: Client A updates Row 1 in CF C. N=3, W=1
Time 2: Client A reads Row1 in CF C. N=3, R=1
Can we expect the update to have been seen by all replicas, so that the read at
Time 2 replies consistently?
I presume Cassandra queues job messa
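For reference, the usual way to reason about this is the overlap condition
R + W > N: with N=3, W=1 and R=1 you get 1 + 1 = 2, which is not greater than
N = 3, so the read at Time 2 is not guaranteed to see the write; QUORUM on both
sides (W = R = 2) would guarantee it. A minimal pycassa sketch, with hypothetical
keyspace and column family names:

    # Sketch only: QUORUM reads and writes give R + W = 4 > N = 3, so a read that
    # follows a successful write is guaranteed to see it. Names are hypothetical.
    from pycassa.pool import ConnectionPool
    from pycassa.columnfamily import ColumnFamily
    from pycassa.cassandra.ttypes import ConsistencyLevel

    pool = ConnectionPool('MyKeyspace', ['localhost:9160'])
    cf = ColumnFamily(pool, 'C',
                      read_consistency_level=ConsistencyLevel.QUORUM,
                      write_consistency_level=ConsistencyLevel.QUORUM)

    cf.insert('row1', {'col': 'value'})   # Time 1: write at QUORUM
    print(cf.get('row1'))                 # Time 2: read at QUORUM sees the write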
On Tue, Apr 24, 2012 at 8:13 AM, wrote:
> Sent from my Galaxy S2
This won't work, even from a Galaxy S2.
Try http://wiki.apache.org/cassandra/FAQ#unsubscribe instead.
--
Eric Evans
Acunu | http://www.acunu.com | @acunu
OK, I will try to update Cassandra to 1.0.9 or 1.1.0 and update Hector
to 1.0.x.
Hope it can rescue me.
Thanks a lot~~
2012/4/24 Ji Cheng
> The tpstats shows there was no read request pending on this node. Maybe
> you should have a look at the other nodes first.
>
> But my suggestion is to
Thanks for the email and the comic reply.
Cheers
Jason
On Wed, Apr 25, 2012 at 12:41 AM, Eric Evans wrote:
> On Tue, Apr 24, 2012 at 8:13 AM, wrote:
>> Sent from my Galaxy S2
>
> This won't work, even from a Galaxy S2.
>
> Try http://wiki.apache.org/cassandra/FAQ#unsubscribe instead.
>
> --
>
Thanks Tyler. So have you actually tried this with Cassandra?
On Apr 24, 2012, at 5:44 AM, Tyler Hobbs wrote:
> At least for TimeUUIDs, this email I sent to client-dev@ a couple of weeks
> ago should help to explain things:
> http://www.mail-archive.com/client-dev@cassandra.apache.org/msg0012
Yes, I have tested it.
On Tue, Apr 24, 2012 at 12:08 PM, Drew Kutcharian wrote:
> Thanks Tyler. So have you actually tried this with Cassandra?
>
>
>
> On Apr 24, 2012, at 5:44 AM, Tyler Hobbs wrote:
>
> At least for TimeUUIDs, this email I sent to client-dev@ a couple of
> weeks ago should help
Thanks. So looking at the code, to get the lowest possible TimeUUID value using
your function I should just call convert_time_to_uuid(0) ?
On Apr 24, 2012, at 10:15 AM, Tyler Hobbs wrote:
> Yes, I have tested it.
>
> On Tue, Apr 24, 2012 at 12:08 PM, Drew Kutcharian wrote:
> Thanks Tyler. So
Oh, I just realized that you're asking about the lowest TimeUUID *overall*,
not just for a particular timestamp. Sorry.
The lowest possible TimeUUID is '00000000-0000-1000-8080-808080808080'.
The highest is 'ffffffff-ffff-1fff-bf7f-7f7f7f7f7f7f'.
On Tue, Apr 24, 2012 at 12:47 PM, Drew Kutcharian
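A small sketch of how those sentinel values might be used, e.g. as slice bounds
on a column family whose comparator is TimeUUIDType (the keyspace, column family
and row key below are hypothetical):

    # Sketch only: the sentinel TimeUUIDs quoted above, used as column slice bounds.
    import uuid
    from pycassa.pool import ConnectionPool
    from pycassa.columnfamily import ColumnFamily

    LOWEST_TIMEUUID = uuid.UUID('00000000-0000-1000-8080-808080808080')
    HIGHEST_TIMEUUID = uuid.UUID('ffffffff-ffff-1fff-bf7f-7f7f7f7f7f7f')

    pool = ConnectionPool('MyKeyspace', ['localhost:9160'])
    timeline = ColumnFamily(pool, 'Timeline')  # comparator assumed to be TimeUUIDType

    # Fetch columns for one row across the full TimeUUID range.
    columns = timeline.get('some_row_key',
                           column_start=LOWEST_TIMEUUID,
                           column_finish=HIGHEST_TIMEUUID)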
Hello Aaron,
it's probably the over-optimistic number of concurrent compactors that
was tripping the system.
I do not entirely understand what the correlation is here; maybe the
compactors were overloading the neighboring nodes, causing time-outs.
I tuned the concurrency down and aft
Nice, that's exactly what I was looking for.
On Apr 24, 2012, at 11:21 AM, Tyler Hobbs wrote:
> Oh, I just realized that you're asking about the lowest TimeUUID *overall*,
> not just for a particular timestamp. Sorry.
>
> The lowest possible TimeUUID is '00000000-0000-1000-8080-808080808080'.
Hi there,
we just noticed that cassandra is currently published with inconsistent
dependencies. The inconsistencies exist between the published pom and
the published distribution (tar.gz). I compared hashes of the libs of
several versions and the inconsistencies are different each time.
Howeve
I am running 1.0.8. I am adding a new data center to an existing cluster.
Following the steps outlined in another thread on the mailing list, things went
fine except for the last step, which is to run repair on all the nodes in
the new data center. Repair seems to be hanging indefinitely. There is n
I just followed the step outlined in this email thread to add a second data
center to my existing cluster. I am running 1.0.8. Each data center has a
replication factor of 2. I am using local quorum for read and write.
Everything went smoothly until I ran the last step, which is to run
nodetool
The secondary index will be built using the compaction features; you can check
the progress with nodetool compactionstats.
When they are built, the output from describe… will list the built indexes…
> Built indexes: []
Hope that helps.
-
Aaron Morton
Freelance Developer
@aaronmor
No, you cannot run secondary indexes on super columns.
Often a design that uses super columns can be expressed using CompositeColumns,
which would allow you to use secondary indexes.
Cheers
-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
On 24/04/
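As a rough illustration of Aaron's point above, a minimal pycassa sketch; the
keyspace and column family names are hypothetical, and the composite simply pairs
what would have been the super column name with the sub column name:

    # Sketch only: modelling a former super column family with a composite comparator.
    from pycassa.system_manager import SystemManager, UTF8_TYPE
    from pycassa.types import CompositeType, UTF8Type

    sys_mgr = SystemManager('localhost:9160')

    # Each column name becomes a (former_supercolumn_name, former_subcolumn_name) pair.
    sys_mgr.create_column_family(
        'MyKeyspace', 'UserProfiles',
        comparator_type=CompositeType(UTF8Type(), UTF8Type()),
        key_validation_class=UTF8_TYPE,
        default_validation_class=UTF8_TYPE)

    sys_mgr.close()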
I've not looked into the CASSANDRA-3721 ticket but…
If you reduce the yaml config setting commitlog_total_space_in_mb you can get
behaviour similar to the old memtable_flush_* setting that flushed every CF
after X minutes.
Not pretty but it may work in this case.
Cheers
-
Aar
> - Cassandra log reports 12,955,585 of them have been saved on the last save
> events.
Has there been much activity between saves?
Nothing jumps out. There is a setting for the max entries to store, but this
only applies to the row cache. Can you reproduce the issue in a dev environment?
When ru
I agree with your observations.
On the other hand, I found that ColumnFamily.size() doesn't calculate object
size correctly. It doesn't count the sizes of two Object fields and returns 0 if
there are no objects in the columns container.
I increased the initial size variable value to 24, which is the size of two
objects
Glad you've got it working properly. I tried to make the changes as "local"
as possible, so I changed only a single value calculation. But it's possible
your way is better and will be accepted by the Cassandra maintainers. Could you
attach your patch to the ticket? I'd like for any fix to be applied to the
t