On Tue, Feb 7, 2012 at 10:45 AM, aaron morton wrote:
> Just to ask the stupid question, have you tried setting it really high ?
> Like 50 ?
>
No, I have not. I moved to mmap_index_only as a stopgap solution.
Is it possible for there to be that many mmaps for about 300 db files?
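(For anyone checking this on their own node: on Linux the live mapping count can be read straight from /proc. A minimal sketch; the pid here is illustrative, substitute the Cassandra JVM's pid.)

```shell
# Count the memory mappings a process currently holds and compare
# against the per-process limit enforced by vm.max_map_count.
pid=$$                              # substitute the Cassandra JVM's pid
wc -l < "/proc/${pid}/maps"         # mappings currently held
cat /proc/sys/vm/max_map_count      # the limit (default 65536)
```

Each SSTable contributes several mappings (data and index components, and large files are mapped in multiple segments), so 300 db files can easily translate into far more than 300 maps.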
Just to ask the stupid question, have you tried setting it really high? Like 50?
Cheers
-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
On 7/02/2012, at 10:27 AM, Ajeet Grewal wrote:
Here are the last few lines of strace output from one of the threads.
There are a bunch of mmap system calls. Notice the mmap call a couple
of lines before the trace ends; could that last mmap call be failing?
== BEGIN STRACE ==
mmap(NULL, 2147487599, PROT_READ, MAP_SHARED, 37, 0xbb000) = 0x7709b54000
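(Side note on that length argument: 2147487599 bytes is just over 2 GiB, which is consistent with large SSTables being mapped in roughly 2 GiB segments, since Java's FileChannel.map caps a single mapping near Integer.MAX_VALUE bytes. A back-of-the-envelope sketch, with illustrative numbers, of how many maps a given load implies:)

```shell
# Rough number of ~2 GiB map segments needed for a given data size.
bytes=400000000000        # ~400 GB of load on one node, illustrative
segment=2147483648        # ~2 GiB per mapping
echo $(( (bytes + segment - 1) / segment ))   # ceil(bytes / segment), prints 187
```

Index components and smaller files each add their own maps on top of this, so the real count is higher still; the point is only that hundreds of maps from the data files alone is plausible.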
On Mon, Feb 6, 2012 at 11:50 AM, Ajeet Grewal wrote:
The number of fi
On Sat, Feb 4, 2012 at 7:03 AM, Jonathan Ellis wrote:
> Sounds like you need to increase sysctl vm.max_map_count
This did not work. I increased vm.max_map_count from 65536 to 131072.
I am still getting the same error.
ERROR [SSTableBatchOpen:4] 2012-02-06 11:43:50,463
AbstractCassandraDaemon.java
Sounds like you need to increase sysctl vm.max_map_count
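(For reference, a sketch of checking and raising that sysctl; the value 1048575 is only an example, and the write steps need root, so they are shown commented out.)

```shell
# Read the current per-process mapping limit.
cat /proc/sys/vm/max_map_count
# Raise it at runtime (as root), and persist it across reboots:
# sysctl -w vm.max_map_count=1048575
# echo 'vm.max_map_count = 1048575' >> /etc/sysctl.conf
```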
On Fri, Feb 3, 2012 at 7:27 PM, Ajeet Grewal wrote:
Hey guys,
I am getting an out of memory (mmap failed) error with Cassandra
1.0.2. The relevant log lines are pasted at
http://pastebin.com/UM28ZC1g.
Cassandra works fine until it reaches about 300-400 GB of load on one
instance (I have 12 nodes with RF=2). Then nodes start failing with such
errors. Th