not scale because of tombstone management. Use HornetQ,
> it's an amazingly fast broker, but it has quite slow persistence if you want to
> create queues significantly larger than your memory and use selectors to
> search for specific messages in them.
>
> My point is: for implementing a queue, a message broker is what you want.
>
>
>
--
Best regards,
Vitalii Tymchyshyn
not
dirty memtables. So, what are Cassandra's memory requirements? Is it 1% or
2% of disk data? Or maybe I am doing something wrong?
Best regards, Vitalii Tymchyshyn
On 03.01.12 20:58, aaron morton wrote:
The DynamicSnitch can result in fewer read operations being sent to a
node, but as long as
set to 4% to cut down memory needed.
I've raised index sampling, and the bloom filter setting seems not to be on
trunk yet. For me, memtables are what's eating the heap :(
Best regards, Vitalii Tymchyshyn.
for reads. Everything is a
lot smoother: no more timeouts.
I'd rather reduce the mutation thread pool with the concurrent_writes setting.
This will lower server load no matter how many clients are sending
batches, while at the same time you still get good batching.
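For reference, that knob lives in cassandra.yaml. A minimal sketch (the excerpt path and the value 16 are illustrative assumptions, not recommendations; the real file usually sits under conf/ or /etc/cassandra/):

```shell
# Write an illustrative cassandra.yaml excerpt and read the knob back.
cat > /tmp/cassandra-excerpt.yaml <<'EOF'
# Lower this to bound the mutation (write) thread pool:
concurrent_writes: 16
EOF
grep '^concurrent_writes' /tmp/cassandra-excerpt.yaml
# -> concurrent_writes: 16
```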
Best regards, Vitalii Tymchyshyn
f tables, but is it also bad to have hundreds of
column families in Cassandra? Thank you.
As far as I can see, this may raise memory requirements for you,
since you need to have an index and bloom filter for each column family
in memory.
--
Best regards,
Vitalii Tymchyshyn
. 2012 15:17, "Vitalii Tymchyshyn" <mailto:tiv...@gmail.com> wrote:
On 05.01.12 22:29, Philippe wrote:
Then I do have a question, what do people generally use as
the batch size?
I used to do batches of 500 to 2000, like you do.
After investiga
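The grouping itself is simple chunking; a toy sketch with xargs (the batch size 4 and the send-batch: label are stand-ins for the 500-2000 row batches and the real mutate call discussed above):

```shell
# Chunk a stream of row keys into fixed-size batches (illustrative only).
printf 'row%s\n' 1 2 3 4 5 6 7 8 9 10 | xargs -n 4 echo send-batch:
# -> send-batch: row1 row2 row3 row4
#    send-batch: row5 row6 row7 row8
#    send-batch: row9 row10
```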
Hello.
What's in the logs? It should output something like "Hey, you've got
most of your memory used. I am going to flush some of the memtables". Sorry,
I don't remember the exact wording, but it comes from the GC monitoring, so it
should be greppable by "GC".
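A sketch of that grepping, using the exact GCInspector lines quoted later in this thread (the log path /tmp/system.log is an assumption; real installs often log to /var/log/cassandra/system.log):

```shell
# Recreate two GCInspector lines from this thread and pull them out with grep.
cat > /tmp/system.log <<'EOF'
INFO [ScheduledTasks:1] 2012-01-25 15:00:21,693 GCInspector.java (line 123) GC for ConcurrentMarkSweep: 389 ms for 1 collections, 28748448 used; max is 8547991552
WARN [ScheduledTasks:1] 2012-01-22 12:53:42,804 GCInspector.java (line 146) Heap is 0.7 full...
EOF
grep -c 'GC' /tmp/system.log
# -> 2
```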
On 25.01.12 16:26, Matthew Trinneer wrote:
Hello
1 collections, 190838344 used; max is 8547991552
INFO [ScheduledTasks:1] 2012-01-25 15:00:21,693 GCInspector.java (line 123) GC for ConcurrentMarkSweep: 389 ms for 1 collections, 28748448 used; max is 8547991552
On 2012-01-25, at 10:09 AM, Vitalii Tymchyshyn wrote:
Hello.
What's
That's once in a few days, so I don't think it's too important. Especially
since 0.77 is much better than the 0.99 I've seen sometimes :)
On 26.01.12 02:49, aaron morton wrote:
You are running into GC issues.
WARN [ScheduledTasks:1] 2012-01-22 12:53:42,804 GCInspector.java (line 146) Heap is 0.7
Hello.
Any messages about GC earlier in the logs? The Cassandra server monitors
memory and starts complaining in advance if memory gets full.
Any chance you've got a full-key delete-only scenario for some column
families? Cassandra has a bug where it cannot flush such memtables.
I've filed a bug
Hello.
From my experience it's unwise to make many column families for the same
keys, because you will have bloom filters and row indexes multiplied. If
you have 5000, you should expect your heap requirements to be multiplied by
the same factor. Also check your cache sizes. The default AFAIR is 10 keys
per
Hello.
There is also a primary row index. Its space can be controlled with the
index_interval setting. I don't know if you can look up its memory usage
somewhere. If I were you, I'd take the jmap tool and examine the heap
histogram first, a heap dump second.
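A sketch of that jmap workflow (the process name CassandraDaemon and the dump path are assumptions; jmap ships with the JDK, and the guards make the sketch safe to run on a machine without Cassandra):

```shell
# Find the Cassandra JVM and inspect its heap: histogram first, dump second.
PID=$(pgrep -f CassandraDaemon | head -n 1)
if [ -z "$PID" ] || ! command -v jmap >/dev/null; then
  echo "no Cassandra JVM (or no JDK jmap) found"
else
  jmap -histo:live "$PID" | head -n 20                       # per-class live-object histogram
  jmap -dump:live,format=b,file=/tmp/cassandra.hprof "$PID"  # full heap dump for offline analysis
fi
```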
Best regards, Vitalii
To note: I still have the problem in my pre-1.1-beta custom build that
seems to have the fix. I am going to upgrade to 1.1 beta, check if the
problem goes away, and file a bug if it still exists.
BTW: It would be great for Cassandra to exit on any fatal errors, like
assertion problems
ing of one or two
memtables by Cassandra is not helping.
Question: How many memtables are flushed when
memtable_total_space_in_mb is exceeded? Is there any way to flush all
memtables when the threshold is reached?
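On the second question, nodetool can force flushes manually; a sketch (the keyspace name MyKeyspace is hypothetical, and this assumes a local node with default JMX settings; guarded so it is safe to run without Cassandra installed):

```shell
# Force memtable flushes via nodetool, for all keyspaces or just one.
if command -v nodetool >/dev/null; then
  nodetool flush              # no arguments: flush memtables for every keyspace
  nodetool flush MyKeyspace   # hypothetical keyspace: flush only its column families
else
  echo "nodetool not on PATH"
fi
```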
Thanks.
On Wed, Mar 21, 2012 at 8:56 AM, Vitalii Tymchyshyn wrote:
Hello.
There is also
Note that having tons of TCP connections is not good. We are using an async
client to issue multiple calls over a single connection at the same time. You
can do the same.
Best regards, Vitalii Tymchyshyn.
On 03.04.12 16:18, Jeff Williams wrote:
Ok, so you think the write speed is limited by the
).
What can I do to make Cassandra stop dying?
Why can't it free the memory?
Any ideas?
Thank you.
--
Best regards,
Vitalii Tymchyshyn
's
not here yet.
Best regards, Vitalii Tymchyshyn
On 13.06.12 02:27, crypto five wrote:
It would be really great to look at your slides. Do you have any plans
to share your presentation?
On Sat, Jun 9, 2012 at 1:14 AM, Vitalii Tymchyshyn <mailto:tiv...@gmail.com> wrote:
vel.
>
> In this case, it means that if there is a network split between the 2
> datacenters, it is impossible to get the quorum, and all connections will
> be rejected.
>
> Is there a reason why Cassandra uses the Quorum consistency level?
> Maybe a local_quorum consistency level (or a one consistency level) could
> do the job ?
>
> Regards
> Jean Armel
>
>
>
--
Best regards,
Vitalii Tymchyshyn
Well, a scheme just came to my mind that looks interesting, so I want
to share it:
1) Actions are introduced. Each action receives a unique id at the
coordinator node. A client can ask for a block of ids beforehand, to make
actions idempotent.
2) Actions are applied to a given row+column value. It's possib
We were successfully using a sync thrift client. With it we could send
multiple requests through a single connection and wait for the answers.
On 17 May 2013 02:51, "aaron morton" wrote:
> We don't have cursors in the RDBMS sense of things.
>
> If you are using thrift the recommendation is to use co
; -
> Aaron Morton
> Freelance Cassandra Consultant
> New Zealand
>
> @aaronmorton
> http://www.thelastpickle.com
>
> On 18/05/2013, at 9:57 PM, Vitalii Tymchyshyn wrote:
>
> We were successfully using a sync thrift client. With it we could send
> mu