Sorry, how much disk space is actually used by the tables, indexes, etc.
involved in your queries? Or if that's a bit much to get, how much disk
space is occupied by your database in total?
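(A quick sketch of how those numbers can be pulled with psql, in case it helps;
the database and table names here are only placeholders, not anything from this
thread.)

# Total on-disk size of one database:
psql -d mydb -c "SELECT pg_size_pretty(pg_database_size('mydb'));"

# One table plus its indexes and TOAST data:
psql -d mydb -c "SELECT pg_size_pretty(pg_total_relation_size('public.mytable'));"

# Ten largest relations in the current database:
psql -d mydb -c "
  SELECT relname,
         pg_size_pretty(pg_total_relation_size(c.oid)) AS total_size
  FROM   pg_class c
  WHERE  relkind IN ('r', 'i')
  ORDER  BY pg_total_relation_size(c.oid) DESC
  LIMIT  10;"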
A simpler "overview" might be "numactl --hardware"
> It returns the following output:
>
> sh-4.3# numactl --hardware
> available: 2 nodes (0-1)
> node 0 cpus: 0 2 4 6 8 10 12 14 16 18 20 22 24 26 28 30
> node 0 size: 64385 MB
> node 0 free: 56487 MB
> node 1 cpus: 1 3 5 7 9 11 13 15 17 19 21 23
The other approaches (fixing the estimates, cost params, etc.) are the
right way to fix it. *However*, if you needed a quick fix for just this
report and can't find a way of setting it in Jaspersoft for just the report
(I don't think it will let you run multiple sql statements by default,
maybe
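(The suggestion above is cut off by the archive. A hedged sketch of one common
way to scope a planner setting to a single report without sending extra SQL
from Jaspersoft: point that report at a dedicated role or database and attach
the setting there. The role name and the random_page_cost value are just
examples.)

psql -d mydb -c "CREATE ROLE report_ro LOGIN PASSWORD 'change_me';"
psql -d mydb -c "ALTER ROLE report_ro SET random_page_cost = 1.1;"
# or, to scope it to a whole database instead of a role:
psql -c "ALTER DATABASE reportdb SET random_page_cost = 1.1;"

Settings attached this way are applied automatically at connection time, so the
reporting tool doesn't need to issue any SET statements itself.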
On Sun, Mar 15, 2015 at 8:07 AM, Robert Kaye wrote:
>
> what does free -m show on your db server?
>
>
>              total       used       free     shared    buffers     cached
> Mem:         48295      31673      16622          0          5      12670
> -/+ buffers/cache:      18997      29298
>
It sounds like you've hit the Postgres basics; what about some of the Linux
checklist items?
What does free -m show on your db server?
If the load problem really is being caused by swapping when things really
shouldn't be swapping, it could be a matter of adjusting your swappiness -
what does ca
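(If it does turn out to be swappiness, a minimal sketch of checking and
lowering it; the value 10 is only an example, not a recommendation from this
thread.)

# Current value (0-100; lower means the kernel is less eager to swap):
cat /proc/sys/vm/swappiness

# Try a lower value at runtime:
sudo sysctl -w vm.swappiness=10

# Make it stick across reboots:
echo "vm.swappiness = 10" | sudo tee -a /etc/sysctl.conf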
Johnny,
Sure thing, here's the SystemTap script:
#! /usr/bin/env stap
global pauses, counts
probe begin {
printf("%s\n", ctime(gettimeofday_s()))
}
probe kernel.function("compaction_alloc@mm/compaction.c").return {
elapsed_time = gettimeofday_us() - @entry(gettimeofday_us())
key = spri
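(The archive cuts the script off here. For completeness: a probe script like
this is normally saved to a file and run with stap, assuming the systemtap
package and the matching kernel debuginfo are installed; the file name below is
just an example.)

sudo stap -v compaction.stp
# Ctrl-C stops the collection; if the script defines a "probe end" that prints
# the aggregated pauses/counts, that report appears on exit.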
Just as an update from my angle on the THP side... I put together a
systemtap script last night and so far it's confirming my theory (at least
in our environment). I want to go through some more data and make some
changes on our test box to see if we can make it go away before declaring
success -
I originally got started down that trail because running perf top while
having some of the slow query issues showed compaction_alloc at the top of
the list. That function is part of THP page compaction, which led me to some
pages like:
http://www.olivierdoucet.info/blog/2012/05/19/debugging-a-mysql-st
similar results from turning THP off.
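(For reference, turning THP off on a running box usually looks something like
the sketch below. The sysfs path and the available values vary by kernel and
distro: RHEL 6 uses /sys/kernel/mm/redhat_transparent_hugepage, mainline
kernels use /sys/kernel/mm/transparent_hugepage, so adjust accordingly.)

THP=/sys/kernel/mm/transparent_hugepage

# Show current settings; the bracketed entry is the active one:
cat $THP/enabled $THP/defrag

# Disable THP and its defrag at runtime:
echo never | sudo tee $THP/enabled
echo never | sudo tee $THP/defrag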
>
>
> On Tue, Feb 5, 2013 at 11:23 PM, Josh Krupka wrote:
>
>> I've been looking into something on our system that sounds similar to
>> what you're seeing. I'm still researching it, but I'm suspecting the
>> me
action happening but that doesn't
necessarily mean it's impacting your running processes.
On Tue, Feb 5, 2013 at 6:46 PM, Johnny Tan wrote:
> # cat /sys/kernel/mm/redhat_transparent_hugepage/defrag
> [always] never
>
>
> On Tue, Feb 5, 2013 at 5:37 PM, Josh Krupka w
Just out of curiosity, are you using transparent huge pages?
On Feb 5, 2013 5:03 PM, "Johnny Tan" wrote:
> Server specs:
> Dell R610
> dual E5645 hex-core 2.4GHz
> 192GB RAM
> RAID 1: 2x400GB SSD (OS + WAL logs)
> RAID 10: 4x400GB SSD (/var/lib/pgsql)
>
>
> /etc/sysctl.conf:
> kernel.msgmnb = 655