[PERFORM] Triggers or code?

2010-08-23 Thread DM
Hello There,

I have a table x and a history table x_hist. What is the best way to keep the
history table up to date?

Should I use a trigger, or embed code in my script to update the history
table?

What is the performance impact of a trigger versus embedding the code in the
script?
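
For concreteness, the two options I am weighing would look roughly like the
sketch below (the columns id, val, and changed_at on x and x_hist are made-up
examples, not my real schema):

-- Option 1: a row-level trigger that archives the old row on every UPDATE of x.
CREATE OR REPLACE FUNCTION x_to_hist() RETURNS trigger AS $$
BEGIN
    -- write the pre-update version of the row into the history table
    INSERT INTO x_hist (id, val, changed_at)
    VALUES (OLD.id, OLD.val, now());
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER x_hist_trg
    BEFORE UPDATE ON x
    FOR EACH ROW EXECUTE PROCEDURE x_to_hist();

-- Option 2: the script archives the row itself, in the same transaction as
-- the update (42 and 'new value' are placeholders).
BEGIN;
INSERT INTO x_hist (id, val, changed_at)
    SELECT id, val, now() FROM x WHERE id = 42;
UPDATE x SET val = 'new value' WHERE id = 42;
COMMIT;

My understanding is that either way the history row is written in the same
transaction; the trigger adds a PL/pgSQL call per updated row but cannot be
forgotten by a caller, while the in-script version avoids that per-row call
but every code path that updates x has to remember to do it.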

Thanks for your time.

- Deepak


Re: [PERFORM] pgbench - tps for Postgresql-9.0.2 is more than tps for Postgresql-8.4.1

2011-01-27 Thread DM
PG 9.0.2 is performing better than PG 8.4.1: it delivers more transactions
per second than 8.4.1, which is good news.

The kernel parameters I used are below.


-- Shared Memory Limits 
max number of segments = 4096
max seg size (kbytes) = 15099492
max total shared memory (kbytes) = 15099492
min seg size (bytes) = 1

-- Semaphore Limits 
max number of arrays = 8192
max semaphores per array = 250
max semaphores system wide = 2048000
max ops per semop call = 32
semaphore max value = 32767

-- Messages: Limits 
max queues system wide = 16
max size of message (bytes) = 65536
default max size of queue (bytes) = 65536


Is there anything I can do to improve 9.0.2 performance further? The tps gain
I got is only about 10%; is that about right, or should I expect more?

Thanks
Deepak

On Wed, Jan 26, 2011 at 7:12 PM, DM  wrote:

> Hello All,
>
> I ran a pgbench benchmark, and comparing the tps of both versions it looks
> like 8.4.1 is performing better than 9.0.2.
>
> Let me know if I need to make any changes to the postgresql.conf of 9.0.2 to
> improve its performance.
>
>
> =
> Server Information:
> OS - CentOS - version 4.1.2
> CPU - Intel(R) Xeon(R) CPU   X5550  @ 2.67GHz
> 16 CPUS total
> RAM - 16GB
> ===
>
> Postgresql 8.4.1
> shared_buffers = 4GB
> checkpoint_segments = 3
> checkpoint_completion_target = 0.5
> wal_buffers = 64kB
> max_connections = 4096
>
> Postgresql 9.0.2
> shared_buffers = 4GB
> checkpoint_segments = 3
> checkpoint_completion_target = 0.5
> wal_buffers = 64kB
> max_connections = 4096
>
> (rest parameters are default)
> =
> 8.4.1 Analysis
>
> Iterations, Trans_type, Scale, Query_Mode, Clients, no.trans/client, no. trans processed, tps (with connections estab), tps (without connections estab), DB Size
> 1/1, SELECT, 1, simple, 32, 2000, 64000/64000, 66501.728525, 70463.861398, 21 MB
> 1/2, SELECT, 1, simple, 32, 2000, 64000/64000, 66743.003977, 70702.841481, 21 MB
> 1/3, SELECT, 1, simple, 32, 2000, 64000/64000, 67547.172201, 71925.063075, 21 MB
> 5/1, SELECT, 5, simple, 32, 2000, 64000/64000, 56964.639200, 60009.939146, 80 MB
> 5/2, SELECT, 5, simple, 32, 2000, 64000/64000, 62999.318820, 67349.775799, 80 MB
> 5/3, SELECT, 5, simple, 32, 2000, 64000/64000, 64178.222925, 68242.135894, 80 MB
> 10/1, SELECT, 10, simple, 32, 2000, 64000/64000, 63754.926064, 67760.289506, 155 MB
> 10/2, SELECT, 10, simple, 32, 2000, 64000/64000, 62776.794166, 66902.637846, 155 MB
> 10/3, SELECT, 10, simple, 32, 2000, 64000/64000, 63354.794770, 67239.957345, 155 MB
> 20/1, SELECT, 20, simple, 32, 2000, 64000/64000, 63525.843107, 66996.134114, 305 MB
> 20/2, SELECT, 20, simple, 32, 2000, 64000/64000, 62432.263433, 66401.613559, 305 MB
> 20/3, SELECT, 20, simple, 32, 2000, 64000/64000, 63381.083717, 67308.339503, 305 MB
> 30/1, SELECT, 30, simple, 32, 2000, 64000/64000, 61896.090005, 65923.244742, 454 MB
> 30/2, SELECT, 30, simple, 32, 2000, 64000/64000, 62743.314161, 66192.699359, 454 MB
> 30/3, SELECT, 30, simple, 32, 2000, 64000/64000, 62526.378316, 66496.546336, 454 MB
> 40/1, SELECT, 40, simple, 32, 2000, 64000/64000, 61668.201948, 65381.511334, 604 MB
> 40/2, SELECT, 40, simple, 32, 2000, 64000/64000, 60185.106819, 64128.449284, 604 MB
> 40/3, SELECT, 40, simple, 32, 2000, 64000/64000, 60613.292874, 64453.754431, 604 MB
> 50/1, SELECT, 50, simple, 32, 2000, 64000/64000, 60863.172930, 64428.319468, 753 MB
> 50/2, SELECT, 50, simple, 32, 2000, 64000/64000, 61051.691704, 64447.977894, 753 MB
> 50/3, SELECT, 50, simple, 32, 2000, 64000/64000, 61442.988587, 65375.166630, 753 MB
> 75/1, SELECT, 75, simple, 32, 2000, 64000/64000, 59635.904169, 62949.189185, 1127 MB
> 75/2, SELECT, 75, simple, 32, 2000, 64000/64000, 60065.133129, 63538.645892, 1127 MB
> 75/3, SELECT, 75, simple, 32, 2000, 64000/64000, 61838.497170, 65818.634695, 1127 MB
> 100/1, SELECT, 100, simple, 32, 2000, 64000/64000, 57373.940935, 60575.027377, 1501 MB
> 100/2, SELECT, 100, simple, 32, 2000, 64000/64000, 58197.108149, 61314.721760, 1501 MB
> 100/3, SELECT, 100, simple, 32, 2000, 64000/64000, 57523.281200, 60991.938581, 1501 MB
> 200/1, SELECT, 200, simple, 32, 2000, 64000/64000, 52143.250545, 54823.997834, 2996 MB
> 200/2, SELECT, 200, simple, 32, 2000, 64000/64000, 51014.063940, 53368.779097, 2996 MB
> 200/3, SELECT, 200, simple, 32, 2000, 64000/64000, 56898.700754, 59677.499065, 2996 MB
> 500/1, SELECT, 500, simple, 32, 2000, 64000/64000, 53167.009206,

[PERFORM] pg9.0.3 explain analyze running very slow compared to a different box with much less configuration

2011-03-23 Thread DM
Hi All,

EXPLAIN ANALYZE on pg 9.0.3 is running very slowly on the new box compared to
the old box, which has a much smaller configuration.

The actual query itself, however, performs much better than on the old server.

Old Server ===
OS: CentOS release 5.4 (Final)
Linux Server 2.6.18-164.6.1.el5 #1 SMP Tue Nov 3 16:12:36 EST 2009 x86_64
x86_64 x86_64 GNU/Linux

RAM - 16GB
CPU - 8 Core
disk - 300GB
RAID10 on the disk

Postgresql 9.0.3

Postgres Config:
shared_buffers = 6GB
work_mem = 32MB
maintenance_work_mem = 512MB
effective_cache_size = 12GB

#explain analyze select * from photo;
 QUERY PLAN

 Seq Scan on photo  (cost=0.00..8326849.24 rows=395405824 width=168) (actual
time=5.632..157757.284 rows=395785382 loops=1)
 Total runtime: 187443.850 ms
(2 rows)

New Server ===

CentOS release 5.4 (Final)
Linux Server 2.6.18-164.6.1.el5 #1 SMP Tue Nov 3 16:12:36 EST 2009 x86_64
x86_64 x86_64 GNU/Linux

RAM - 64GB
CPU - 12 Core
disk - 1TB
RAID10 on the disk

Postgresql 9.0.3
Postgres Config:
shared_buffers = 16GB
work_mem = 32MB
maintenance_work_mem = 1024MB
effective_cache_size = 12GB


# explain analyze select * from photo;
QUERY PLAN

 Seq Scan on photo  (cost=0.00..8326810.24 rows=395579424 width=165) (actual
time=0.051..316879.928 rows=395648020 loops=1)
 Total runtime: 605703.206 ms
(2 rows)
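
A quick way to separate the cost of the query itself from the cost of the
EXPLAIN ANALYZE instrumentation is a comparison along these lines (a sketch
only; count(*) stands in for select * so the client does not have to fetch
and print ~395 million rows):

\timing on
-- bare query: no per-row instrumentation
SELECT count(*) FROM photo;
-- same scan, but EXPLAIN ANALYZE takes clock readings around every row it
-- processes, so a slow gettimeofday() shows up as a large gap between the
-- two timings
EXPLAIN ANALYZE SELECT count(*) FROM photo;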


I read other articles about the same issue but could not find the exact
solution.


I timed a gettimeofday() test program (/tmp/gtod) on both machines and got the results below:

Results:

[Old Server]# time /tmp/gtod
real  0m0.915s
user  0m0.914s
sys   0m0.001s

[New Server]# time /tmp/gtod
real  0m7.542s
user  0m7.540s
sys   0m0.001s


I am not sure how to fix this issue; any help would be greatly appreciated.


Thanks

Deepak


Re: [PERFORM] pg9.0.3 explain analyze running very slow compared to a different box with much less configuration

2011-03-24 Thread DM
Thank you for your research and for posting it. When I first ran into this
issue I came across your posting on it, and it gave me great insight.

gettimeofday() on my new box is slow. After further research we found that
when we set ACPI=off we got good clock performance, and EXPLAIN ANALYZE
reported approximately the right values, but hyperthreading ends up disabled.

Could you guide me on how to set current_clocksource to TSC?


Thanks
Deepak

On Thu, Mar 24, 2011 at 5:07 AM, Achilleas Mantzios <
ach...@matrix.gatewaynet.com> wrote:

> On Thursday 24 March 2011 13:39:19, Marti Raudsepp wrote:
> > On Thu, Mar 24, 2011 at 11:11, Achilleas Mantzios
> >  wrote:
> > > My problem had to do with the speed of gettimeofday. You might want to do
> > > some special tuning of how your box reads the time from the hardware clock.
> >
> > Just for extra info, on x86, TSC is usually the "fast" timeofday
> > implementation. On recent CPUs in single-socket configurations, TSC
> > should always be available, regardless of any power management. I
> > don't know about multi-socket. If you want to know whether your kernel
> > is using tsc, run:
> >
>
> That's what I am experiencing as well; on two of my FreeBSD boxes
> (work/home) I get:
>
> phenom ii X4 :
> ==
> % sysctl -a | grep -i timecounter
> kern.timecounter.tick: 1
> kern.timecounter.choice: TSC(-100) HPET(900) ACPI-fast(1000) i8254(0)
> dummy(-100)
> kern.timecounter.hardware: TSC
> kern.timecounter.stepwarnings: 0
> kern.timecounter.tc.i8254.mask: 65535
> kern.timecounter.tc.i8254.counter: 1960
> kern.timecounter.tc.i8254.frequency: 1193182
> kern.timecounter.tc.i8254.quality: 0
> kern.timecounter.tc.ACPI-fast.mask: 4294967295
> kern.timecounter.tc.ACPI-fast.counter: 3642319843
> kern.timecounter.tc.ACPI-fast.frequency: 3579545
> kern.timecounter.tc.ACPI-fast.quality: 1000
> kern.timecounter.tc.HPET.mask: 4294967295
> kern.timecounter.tc.HPET.counter: 1160619197
> kern.timecounter.tc.HPET.frequency: 14318180
> kern.timecounter.tc.HPET.quality: 900
> kern.timecounter.tc.TSC.mask: 4294967295
> kern.timecounter.tc.TSC.counter: 2788277817
> kern.timecounter.tc.TSC.frequency: 3400155810
> kern.timecounter.tc.TSC.quality: -100
> kern.timecounter.smp_tsc: 0
> kern.timecounter.invariant_tsc: 1
>
> Pentium 4
> ==
> % sysctl -a | grep -i timecounter
> kern.timecounter.tick: 1
> kern.timecounter.choice: TSC(800) ACPI-fast(1000) i8254(0) dummy(-100)
> kern.timecounter.hardware: ACPI-fast
> kern.timecounter.stepwarnings: 0
> kern.timecounter.tc.i8254.mask: 65535
> kern.timecounter.tc.i8254.counter: 13682
> kern.timecounter.tc.i8254.frequency: 1193182
> kern.timecounter.tc.i8254.quality: 0
> kern.timecounter.tc.ACPI-fast.mask: 16777215
> kern.timecounter.tc.ACPI-fast.counter: 6708142
> kern.timecounter.tc.ACPI-fast.frequency: 3579545
> kern.timecounter.tc.ACPI-fast.quality: 1000
> kern.timecounter.tc.TSC.mask: 4294967295
> kern.timecounter.tc.TSC.counter: 3109326068
> kern.timecounter.tc.TSC.frequency: 2663194296
> kern.timecounter.tc.TSC.quality: 800
> kern.timecounter.smp_tsc: 0
> kern.timecounter.invariant_tsc: 0
>
> TSC, it seems, outperforms the rest of the clocks in terms of frequency.
>
> > cat /sys/devices/system/clocksource/clocksource0/current_clocksource
> >
> > On older CPUs, you often had to disable some sort of power management
> > in order to get a stable TSC -- the "ondemand" scaling governor is the
> > top suspect. Disabling this is distro-specific. You have to reboot to
> > get the kernel to re-test TSC. Unfortunately disabling power
> > management later at boot doesn't help you, you have to prevent it from
> > activating at all.
> >
> > For debugging, grepping dmesg for tsc or clocksource is often helpful.
> > On machines with unstable TSC you'll see output like this:
> >
> > [0.00] Fast TSC calibration using PIT
> > [0.164068] checking TSC synchronization [CPU#0 -> CPU#1]: passed.
> > [0.196730] Switching to clocksource tsc
> > [0.261347] Marking TSC unstable due to TSC halts in idle
> > [0.261536] Switching to clocksource acpi_pm
> >
> > If you just want to get repeatable timings, you can force both
> > machines to use the hpet clocksource:
> > echo hpet >
> /sys/devices/system/clocksource/clocksource0/current_clocksource
> >
> > Marti
> >
>
>
>
> --
> Achilleas Mantzios
>
>


[PERFORM] Fragmentation/Vacuum, Analyze, Re-Index

2010-01-22 Thread DM
Hello All,

How do I identify whether a table requires a full vacuum? How do I identify
when to re-index an existing index on a table?

Is there a tool for either of these?

Thanks
Deepak Murthy


Re: [PERFORM] Fragmentation/Vacuum, Analyze, Re-Index

2010-01-22 Thread DM
Is there any script or tool to identify whether a table requires a full
vacuum, or whether an existing index needs to be rebuilt with REINDEX?
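
For context, what I have in mind is something along the lines of the sketch
below (dead-row ratio per table from pg_stat_user_tables; any percentage
threshold for deciding on a VACUUM FULL would still have to be picked by
hand), unless a ready-made tool already does this. For index bloat it looks
like something such as the contrib pgstattuple module would be needed, as far
as I can tell.

-- rough sketch: tables with the highest share of dead rows float to the top
SELECT schemaname, relname, n_live_tup, n_dead_tup,
       round(100.0 * n_dead_tup / nullif(n_live_tup + n_dead_tup, 0), 1) AS dead_pct
FROM pg_stat_user_tables
ORDER BY dead_pct DESC NULLS LAST;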

Thanks
Deepak

On Fri, Jan 22, 2010 at 12:11 AM, DM  wrote:

> Hello All,
>
> How do I identify whether a table requires a full vacuum? How do I identify
> when to re-index an existing index on a table?
>
> Is there a tool for either of these?
>
> Thanks
> Deepak Murthy
>
>
>