On Mon, 18 Jun 2012 11:57:17 -0700, Gurpreet Singh wrote:
> Thanks for all the information Holger.
>
> Will do the JVM updates; kernel updates will be slow to come by. I see
> that with disk access mode standard, the performance is stable and better
> than in mmap mode, so I will probably stick to it.
Sorry, I was mistaken; here is the right string:
INFO [main] 2012-06-14 02:03:14,520 CLibrary.java (line 109) JNA
mlockall successful
2012/6/15 ruslan usifov :
> 2012/6/14 Gurpreet Singh :
2012/6/14 Gurpreet Singh :
> JNA is installed. swappiness was 0. vfs_cache_pressure was 100. 2 questions
> on this..
> 1. Is there a way to find out if mlockall really worked other than just the
> mlockall successful log message?
Yes, you should see something like this (from our test server):
INFO [
Upgrade Java (version 1.6.21 has memory leaks) to the latest, 1.6.32. It is
abnormal that on 80 GB of data you have 15 GB of index.
vfs_cache_pressure is used for inodes and dentries.
Also, to check whether you really have memory leaks, use the drop_caches sysctl.
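One way to apply the drop_caches suggestion above (a sketch assuming Linux procfs; the helper name is mine, not from the thread):

```python
# Sketch, not from the thread: flush the Linux page cache / dentries /
# inodes via the vm.drop_caches sysctl (requires root), then watch RES
# in top -- memory that stays resident afterwards is not reclaimable cache.
import os
import subprocess

def drop_caches(level=3):
    """level: 1 = pagecache, 2 = dentries and inodes, 3 = both."""
    if os.geteuid() != 0:
        return False  # writing this sysctl requires root; refuse quietly
    subprocess.run(["sync"], check=True)  # write dirty pages back first
    with open("/proc/sys/vm/drop_caches", "w") as f:
        f.write(f"{level}\n")
    return True
```

If the process's resident size does not shrink after this, the growth is in the heap or in locked/mmapped pages, not in the page cache.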
2012/6/14 Gurpreet Singh :
JNA is installed. swappiness was 0. vfs_cache_pressure was 100. Two questions
on this:
1. Is there a way to find out if mlockall really worked, other than the
"mlockall successful" log message?
2. Does Cassandra mlock only the JVM heap, or also the mmapped memory?
I disabled mmap completely, and th
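For question 1, one way to check (a sketch assuming Linux procfs; the helper name is mine): after "JNA mlockall successful", the VmLck field of /proc/&lt;pid&gt;/status should show the locked size in kB; zero means nothing is actually pinned.

```python
# Sketch: verify mlockall actually pinned memory by reading the VmLck
# field from /proc/<pid>/status (Linux procfs).
def locked_kb(pid="self"):
    try:
        with open(f"/proc/{pid}/status") as f:
            for line in f:
                if line.startswith("VmLck:"):
                    return int(line.split()[1])  # value is reported in kB
    except FileNotFoundError:
        pass  # not Linux, or the process no longer exists
    return None
```

Run it with the Cassandra pid (e.g. `locked_kb("12345")`) and compare against the heap size.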
I would check /etc/sysctl.conf and get the values of
/proc/sys/vm/swappiness and /proc/sys/vm/vfs_cache_pressure.
If you don't have JNA enabled (which Cassandra uses to fadvise) and
swappiness is at its default of 60, the Linux kernel will happily swap out
your heap for cache space. Set swappiness to 0.
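The running values can be read straight from procfs rather than trusting /etc/sysctl.conf (a sketch assuming Linux; the helper name is mine):

```python
# Sketch (assumes Linux procfs): read the two VM tunables mentioned
# above directly, so you see what the running kernel actually uses.
def vm_tunable(name):
    try:
        with open(f"/proc/sys/vm/{name}") as f:
            return int(f.read().strip())
    except FileNotFoundError:
        return None  # not Linux, or the tunable does not exist

swappiness = vm_tunable("swappiness")              # kernel default is 60
cache_pressure = vm_tunable("vfs_cache_pressure")  # kernel default is 100
```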
Hm, it's very strange. What amount of data do you have? Your Linux kernel
version? Java version?
PS: I can suggest switching disk_access_mode to standard in your case.
PS PS: also upgrade your Linux to the latest, and Java HotSpot to 1.6.32
(from the Oracle site).
2012/6/13 Gurpreet Singh :
Alright, here it goes again...
Even with mmap_index_only, once the RES memory hit 15 GB, the read
latency went berserk. This happens in 12 hours if disk access mode is mmap,
about 48 hours if it is mmap_index_only.
Only reads are happening, at 50 reads/second.
Row cache size: 730 MB; row cache hit ratio: 0.75.
Aaron, Ruslan,
I changed the disk access mode to mmap_index_only, and it has been stable
ever since, well at least for the past 20 hours. Previously, in about 10-12
hours, as soon as the resident memory was full, the client would start
timing out on all its reads. It looks fine for now, I am going to
2012/6/8 aaron morton :
> Ruslan,
> Why did you suggest changing the disk_access_mode ?
Because it causes problems out of nowhere; in any case, for me mmap
caused a similar problem and I haven't found any solution to resolve
it other than changing disk_access_mode :-((. For me it will also be
interesting to he
Ruslan,
Why did you suggest changing the disk_access_mode?
Gurpreet,
I would leave disk_access_mode at the default until you have a reason
to change it.
> > 8 core, 16 gb ram, 6 data disks raid0, no swap configured
Is swap disabled?
> Gradually,
> > the system cpu bec
Thanks Ruslan.
I will try the mmap_index_only.
Is there any guideline as to when to leave it to auto and when to use
mmap_index_only?
/G
On Fri, Jun 8, 2012 at 1:21 AM, ruslan usifov wrote:
disk_access_mode: mmap??
Set disk_access_mode: mmap_index_only in cassandra.yaml.
2012/6/8 Gurpreet Singh :
Hi,
I am testing cassandra 1.1 on a 1 node cluster:
8 core, 16 gb ram, 6 data disks raid0, no swap configured
cassandra 1.1.1
heap size: 8 gigs
key cache size in mb: 800 (used only 200mb till now)
memtable_total_space_in_mb: 2048
I am running a read workload, about 30 reads/second, no writes at all.