);
Regards
John
From: pgsql-performance-ow...@postgresql.org
[mailto:pgsql-performance-ow...@postgresql.org] On Behalf Of Frits Jalvingh
Sent: Friday, June 09, 2017 7:55 AM
To: Sunkara, Amrutha; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Improving PostgreSQL insert performance
I am not
Sequences are stored as a separate object in PostgreSQL.
Here is an example table; you can see that rec_id is backed by a sequence and
that the sequence object's name is whiteboards_rec_id_seq:
mydb=> \d whiteboards
Table "public.whiteboards"
Column |
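For illustration, a serial column creates such a sequence automatically; a minimal sketch (the board_name column is made up — only rec_id appears in the original):

```sql
-- A serial column implicitly creates a sequence named
-- <table>_<column>_seq and uses it as the column default.
CREATE TABLE whiteboards (
    rec_id     serial PRIMARY KEY,
    board_name text              -- hypothetical column for illustration
);

-- The sequence is a separate object that can be queried directly:
SELECT last_value FROM whiteboards_rec_id_seq;
```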
Even though it's not listed in any of the documentation or in "pg_dump --help", you
can check the return code of the process. A return code greater than 0 (zero)
usually indicates a failure:
./bin> pg_dump -U dummy_user dummy_database; echo $?
1
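That check can be scripted. A minimal sketch, using a generic wrapper rather than anything pg_dump-specific (the function name is made up):

```shell
#!/bin/sh
# Run a command and report a non-zero exit code (i.e. a likely failure).
check_exit() {
    "$@"
    rc=$?
    if [ "$rc" -gt 0 ]; then
        echo "command failed with exit code $rc" >&2
    fi
    return "$rc"
}

# Hypothetical usage:
# check_exit pg_dump -U dummy_user dummy_database > dump.sql
```

In a cron job you would typically also capture stderr to a log, since pg_dump writes its error messages there.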
From: pgsql-performance-ow...@postgresql.org
[mailto
Yes, of course; that's all verified and taken into account during code
initialization.
From: Vitalii Tymchyshyn [mailto:v...@tym.im]
Sent: Wednesday, February 22, 2017 8:14 PM
To: John Gorman; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Correct use of cursors for very large result sets
the time.
Regards
John
From: pgsql-performance-ow...@postgresql.org
[mailto:pgsql-performance-ow...@postgresql.org] On Behalf Of Mike Beaton
Sent: Tuesday, February 21, 2017 6:49 AM
To: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Correct use of cursors for very large result sets in
?
Also, the queries are not exactly the same: with psql you use "select *", while the
application specifies which columns it wants returned and the order to return
them. Try running the exact same query on both.
Regards
John
-Original Message-
From: pgsql-performance-ow...@post
should know more after tonight.
Thanks again for your feedback and responses
Regards
John
-Original Message-
From: Gerardo Herzig [mailto:gher...@fmed.uba.ar]
Sent: Friday, May 13, 2016 4:11 PM
To: John Gorman
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Database transaction
ocomp.com | 54263 | 2016-05-11 07:38:08.464797-07
| 2016-05-11 07:47:42.982944-07 | 2016-05-11 07:51:08.335854-07 | f |
SELECT * FROM Employee WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID
LIMIT 1001
3709009 | p306|8671 |16387 | p306| |
| 1
vacuum_cost_page_miss | 10
vacuum_defer_cleanup_age | 0
vacuum_freeze_min_age | 5000
vacuum_freeze_table_age | 15000
John Gorman | Manager of Production Support, Architecture, Release Engineering
| Eldorado | a Division of MPHASIS | www.eldoinc.com/ |
5353 North 16th Street, Suite 400, Phoenix, Arizona 85016-3228 | Tel
602.604.3100 | Fax: 602.604.3115
32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 12288K
NUMA node0 CPU(s): 0,1
2 users, load average: 0.00, 0.12, 0.37
Please see the following for the explain analysis: http://explain.depesz.com/s/I3SL
I'm trying to understand why I'm getting the yellow, orange, and red on the
inclusive, and the yellow on the exclusive (referring to the
explain.depesz.com/s/I3SL page). I'm relatively new to PostgreSQL, but I've been
an Oracle DBA for some time. I suspect the I/O may be dragging but I don't know
how to dig that information out from here. Please point out anything else you can
decipher from this.
Thanks,
John
d to
allow for client-side caches. NFSv4 added all sorts of stateful behavior
which allows for much more aggressive caching.
Where did you read that you could not use caching with NFS?
--
John Melesky | Sr Database Administrator
503.284.7581 x204 | john.mele...@rentrak.com
RENTRAK | www.rentra
rming
many random reads initially to fill the caches with random blocks. That
might allow for minimal assistance from the cache and be more realistic.
On Thu, May 30, 2013 at 11:13 AM, Merlin Moncure wrote:
> On Mon, May 27, 2013 at 9:02 AM, John Mudd wrote:
> > Postgres 9.1.2 on Ubunt
t (cost=0.00..8.36 rows=1
width=21) (actual time=21.070..21.072 rows=1 loops=1)
Index Cond: (id = 50)
Total runtime: 21.178 ms
On Mon, May 27, 2013 at 10:59 AM, Evgeny Shishkin wrote:
>
> On May 27, 2013, at 6:35 PM, John Mudd wrote:
>
> Thanks, that's easy e
test_select (cost=0.00..8.36 rows=1
width=21) (actual time=23.072..23.074 rows=1 loops=1)
Index Cond: (id = 50)
Total runtime: 23.192 ms
On Mon, May 27, 2013 at 10:21 AM, Evgeny Shishkin wrote:
>
> On May 27, 2013, at 6:02 PM, John Mudd wrote:
>
> > Postgres 9.1.2 on Ubu
Postgres 9.1.2 on Ubuntu 12.04
Any reason why a select by primary key would be slower than a select that
includes an ORDER BY? I was really hoping using the primary key would give
me a boost.
I stopped the server and cleared the O/S cache using "sync; echo 3 >
/proc/sys/vm/drop_caches" between th
On 13/03/2013 19:23, Steve Crawford wrote:
On 03/13/2013 09:15 AM, John Lister wrote:
On 13/03/2013 15:50, Greg Jaskiewicz wrote:
SSDs have a much shorter life than spinning drives, so what do you do
when one inevitably fails in your system?
Define much shorter? I accept they have a limited no
he box up I did some
performance testing against the drives but with using different
combinations for each test - the aim here is to pre-load each drive
differently, to prevent them from failing simultaneously when full.
If you do go for RAID 10, make sure to have power-fail endurance, i.e. a
capacitor or battery.
pport enhanced performance with just SSDs
(CTIO) by playing with the cache settings, but to be honest I haven't
noticed any difference, and I'm not entirely sure it is enabled, as there
is no indication that CTIO is actually active and working.
John
--
ware
to a non-striped (raid 1) volume on the new hardware. That could
explain the speed drop. Are the disks the same speed for the two
systems?
--
-- rouilj
John Rouillard System Administrator
Renesys Corporation 603-244-9084 (cell) 603-643-9300 x 111
--
S
optasc
(cost=0.00..0.40 rows=20 width=584) (actual time=0.001..21.651 rows=111308
loops=1)
John
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
It is impossible to suppress such nested-loop joins entirely,
but turning this variable off discourages the planner from using one
if there are other methods available, such as sorting the CTE for merge-join
or hashing it for hash-join.
The default is on.
John
version on W7.
Same results.
So this looks like I've got something messed up on W7.
Any ideas?
TIA.
--
John Kasarda, CFPIM, Jonah
Valid & Robust, Inc
your other queries running outside the transaction.
http://wiki.postgresql.org/wiki/Transactional_DDL_in_PostgreSQL:_A_Competitive_Analysis
and
http://www.postgresql.org/docs/9.2/static/sql-dropindex.html
--
-- rouilj
John Rouillard System Administrator
Renesys
protection against power failure, etc
John
* Load average was between 1.7 and 2.5.
* With 3.2:
* Max TPS was 4639.
* CPU was between 88 and 92% idle.
* Load average was between 3 and 4.
Time to see what a 3.4 kernel does to my setup, I think?
Thanks
John
10k on an
xfs system, which is much lower than I'd expect for that setup and isn't
significantly greater than your reported results, so something must be
very wrong.
Thanks
John
- unless there is some
weird caching going on and it gets dumped at a later date?
John
--
Get the PriceGoblin Browser Addon
www.pricegoblin.co.uk
results for comparative
purposes, so if I get a minute or two I'll create a site and stick mine
on there.
John
On 28/11/2012 19:21, Merlin Moncure wrote:
On Wed, Nov 28, 2012 at 12:37 PM, John Lister wrote:
Hi, I've just been benchmarking a new box I've got and running pgbench
yields what I thought was a slow tps count. It is difficult to find
comparisons online of other benchmark results, I
ing the tps values
are way off the expected.
Thanks
John
ps. the number of "safe" ssds available in the uk seems to be rather
limited, hence the intel 320s, which probably aren't as fast as modern
drives.
If you are inserting a lot of data into the same table, table extension locks
are a problem: the table is extended in only 8k increments, so if you have a
lot of clients hitting/expanding the same table you are going to have a lot of
overhead.
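One standard mitigation, for what it's worth, is to batch many rows per statement, so less per-row overhead (including extension-lock traffic) is incurred; a sketch with made-up table and column names:

```sql
-- Multi-row VALUES: one statement, one round trip, many rows.
INSERT INTO measurements (sensor_id, reading)
VALUES (1, 20.1),
       (1, 20.4),
       (2, 19.8);

-- For very large loads, COPY is typically faster still:
-- COPY measurements (sensor_id, reading) FROM '/path/to/data.csv' CSV;
```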
-Original Message-
From: pgsql-performance
Sorry I forgot to attach the mentioned file with performance results.
Please find it enclosed now.
Regards
2012/9/7 John Nash :
> Hi,
>
> We have investigated further and have observed the following:
>
> We have another host with postgres installed in another IP. Called host 190.
e postgres database in host174 just presents
this behaviour when java is launched locally. Please can you help us?
Could dirty pages have some influence on this?
Thanks and regards,
John
2012/9/6 Greg Smith :
> On 09/03/2012 01:27 PM, Jeff Janes wrote:
>>
>> In any case, the behavior yo
2012 INSTALL
-rwxrwxrwx 1 postgsql gpostgre 1489 Feb 23 2012 Makefile
-rwxrwxrwx 1 postgsql gpostgre 1284 Feb 23 2012 README
drwxrwxrwx 14 postgsql gpostgre 4096 Jun 4 10:30 src
2012/8/31 Jeff Janes :
> On Fri, Aug 31, 2012 at 5:27 AM, John Nash
> wrote:
>
>> -Pos
Hello,
We are doing some testing and are seeing very strange behaviour in the
performance obtained with postgres while executing an
insert/select/delete transaction.
Software and Hardware details:
- O.S = Red hat 6.2
$ uname -a
Linux localhost.localdomain 2.6.32-220.el6.x86_64 #1 SMP Wed Nov 9
08:03
On 24/07/2012 21:12, Claudio Freire wrote:
On Tue, Jul 24, 2012 at 3:41 PM, Claudio Freire wrote:
On Tue, Jul 24, 2012 at 3:36 PM, John Lister wrote:
Do you have a suggestion about how to do that? I'm running Ubuntu 12.04 and
PG 9.1, I've modified pg_ctlcluster to cause pg_ct
On Tue, Jul 18, 2012 at 2:38 AM, Claudio Freire wrote:
>It must have been said already, but I'll repeat it just in case:
>I think postgres has an easy solution. Spawn the postmaster with
>"interleave", to allocate shared memory, and then switch to "local" on
>the backends.
Do you have a suggesti
posed speeds of the various components but wondered if anyone uses
more accurate methods?
Thanks
John
Hammerora is a good start but does have some issues when trying to get it
started. You can also try PGBench. As someone said, there is a plethora of
choices. It all depends on what you want to measure or accomplish.
John Jones
On Sat, Jul 14, 2012 at 1:48 AM, Craig Ringer wrote:
> On 07
Given a baseline postgresql.conf config and a couple of DL580 40-core/256GB-memory
boxes, I noticed a large overhead for pgbouncer; has anyone seen this before?
$ pgbench -h `hostname -i` -j 32 -p 4320 -U asgprod -s 500 -c 32 -S -T 60
pgbench_500
Scale option ignored, using pgbench_branches table count
se drive for this - the choice
of enterprise drives is limited :(
I was thinking that if we have a sudden power failure we could mark the consumer
drive as bad and rebuild it from the other one,
or is this highly risky?
Does any one do anything different?
Thanks
John
On 03/05/2012 16:46, Craig James wrote:
On Thu, May 3, 2012 at 6:42 AM, Jan Nielsen wrote:
Hi John,
On Thu, May 3, 2012 at 12:54 AM, John Lister
wrote:
I was wondering if it would be better to put the xlog on the same disk as
the OS? Apart from the occasional log writes I'd have thought
On 03/05/2012 03:10, Jan Nielsen wrote:
300GB RAID10 2x15k drive for OS on local storage
/dev/sda1 RA: 4096
/dev/sda1 FS: ext4
/dev/sda1 MO:
600GB RAID 10 8x15k drive for $PGDATA on SAN
IO Scheduler sda: n
I'm getting the most out of my setup and any
additional guidance would be very helpful.
Thanks
John
--
www.pricegoblin.co.uk
Check your pagecache settings, when doing heavy io writes of a large file you
can basically force a linux box to completely stall. At some point, once the
pagecache has reached its limit, it'll force all IO to go sync, basically, from
my understanding. We are still fighting with this but lots of
Are your stats updated on the table after you added the index?
- run the bad query with explain verbose on (you should send this anyway)
- check to see what the difference is in expected rows vs. actual rows
- make sure that your work_mem is high enough if you are sorting, if not you'll
see it w
the server
seemed to have stopped the issue for now.
- John
-Original Message-
From: pgsql-performance-ow...@postgresql.org
[mailto:pgsql-performance-ow...@postgresql.org] On Behalf Of Scott Marlowe
Sent: Friday, December 16, 2011 3:16 PM
To: Greg Smith
Cc: pgsql-performance@postgresql.org
a bit counterproductive.
Thoughts?
___
| John W. Strange | Vice President | Global Commodities Technology
| J.P. Morgan | 700 Louisiana, 11th Floor | T: 713-236-4122 | C: 281-744-6476 |
F: 713 236-
| john.w
ifferent game. So I can't help but feel I must be missing
something really important, either in how I'm setting up my data or how
I'm processing it.
Thanks,
John
---
John Williams
42nd Design
Email: jwilli...@42nddesign.com
Skype: druidjaidan
Phone: (520) 440-7239
On Wed, Jun 8, 2
ms to work. The queries always spend a large amount of
time in the same place. Is there something I missing that could
improve these or even a way to rework my schema to speed things up.
Thanks,
John
SELECT "logparser_entry"."id",
On Wed, May 25, 2011 at 03:19:59PM -0500, Kevin Grittner wrote:
> John Rouillard wrote:
> > On Mon, May 23, 2011 at 05:21:04PM -0500, Kevin Grittner wrote:
> >> John Rouillard wrote:
> >>
> >> > I seem to be able to provoke this error:
> >> >
On Mon, May 23, 2011 at 05:21:04PM -0500, Kevin Grittner wrote:
> John Rouillard wrote:
>
> > I seem to be able to provoke this error:
> >
> >vacuum...ERROR: invalid page header in
> > block 2128910 of relation base/16385/21476
>
>
disk wd2003; controller LSI MegaRAID SAS 9260-4i
Could it be an ext4 issue? It seems that ext4 may still be at the
bleeding edge for postgres use.
Thanks for any thoughts even if it's go to the admin list.
--
-- rouilj
John Rouillard System Administ
=142) (actual
time=6000.728..12037.492 rows=315635 loops=1)"
" Output: sq.tag, sq.instrument, s.d1, s.d2, s.d3, s.d4, s.d5,
s.d6, s.d7, s.d8, s.d9, s.d10, sq.v"
" Hash Cond: (s.scenarioid = sq.scenarioid)"
" Buffers: shared hit=9763"
ops
Maybe the completion target percentage is off because of the threads?
--
-- rouilj
John Rouillard System Administrator
Renesys Corporation 603-244-9084 (cell) 603-643-9300 x 111
--
Sent via pgsql-performance mailing list (pgsql-performa
On Mon, May 16, 2011 at 12:23:13PM -0400, Jeff wrote:
> On May 16, 2011, at 9:17 AM, John Rouillard wrote:
> >However, in my case I have an 8 disk raid 10 with a read only load (in
> >this testing configuration). Shouldn't I expect more iops than a
> >single disk can
On Sat, May 14, 2011 at 12:07:02PM -0500, k...@rice.edu wrote:
> On Fri, May 13, 2011 at 09:09:41PM +0000, John Rouillard wrote:
> > I am adding pgiosim to our testing for new database hardware and I am
> > seeing something I don't quite get and I think it's becau
adata although that
seems like a lot of data
If I stop the pgiosim, the iostat drops to 0 write and reads as
expected.
So does anybody have any comments on how to test with pgiosim and how
to correlate the iostat and pgiosim outputs?
Thanks for your feedback.
--
t storage as fast as
the data is coming in. (I.E. make it a writeback with a longer delay
so it's more likely to drop data.)
Does anybody have any comments or testing methodologies that don't
involve using an actual postgres instance?
Thanks for your help.
--
tch/cursor change the steps I should take in
tuning it compared to somebody just issuing a normal select?
Thanks for any ideas.
--
-- rouilj
John Rouillard System Administrator
Renesys Corporation 603-244-9084 (cell) 603-643-9300 x 111
u.com/dmsp/docs/wp-nehalem-ex-memory-performance-ww-en.pdf
- John
-Original Message-
From: pgsql-performance-ow...@postgresql.org
[mailto:pgsql-performance-ow...@postgresql.org] On Behalf Of Merlin Moncure
Sent: Tuesday, April 12, 2011 12:14 PM
To: Greg Smith
Cc: Kevin Grittner; da...@la
Just some information on our setup:
- HP DL585 G6
- 4 x AMD Opteron 8435 (24 cores)
- 256GB RAM
- 2 FusionIO 640GB PCI-SSD (RAID0)
- dual 10GB ethernet.
- we have several tables that we store calculated values in.
- these are inserted by a compute farm that calculates the results and stores
the
This can be resolved by partitioning the disk with a larger write spare area so
that the cells don't have to be recycled so often. There is a lot of
"misinformation" about SSDs; there are some great articles on anandtech that
really explain how the technology works and some of the differences b
If it's a HP box you can also turn this off via the bios via your RBSU:
Starting with HP ProLiant G6 servers that utilize Intel® Xeon® processors,
setting the HP Power Profile
Option in RBSU to Maximum Performance Mode sets these recommended additional
low-latency options
for minimum BIOS late
> >> When we 'EXPLAIN' this query, PostgreSQL says it is using the index
> >> idx_receiveddatetime. The way the application is designed means that
> >> in virtually all cases the query will have to scan a very long way
> >> into idx_receiveddatetime to find the first record where userid =
> 311369
much of the table
and our application will not work. What are we doing wrong?
Cheers now,
John
Scott, are you really moving that much data through memory? 70-80GB/sec is the
limit of the new intel 7500 series in a best-case scenario.
- John
-Original Message-
From: pgsql-performance-ow...@postgresql.org
[mailto:pgsql-performance-ow...@postgresql.org] On Behalf Of Scott Marlowe
from pg_stat_activity, and check what the largest objects are based on
relpages from pg_class.
- check index scans/table scans from pg_statio tables if you have
track_activities on in the .conf file.
- John
From: pgsql-performance-ow...@postgresql.org
[mailto:pgsql-performance-ow...@postgresql.org] On Beha
segments
temporarily during bulk data loads, the number of checkpoints that are required
can be reduced.
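As a sketch of that suggestion (the value is illustrative; checkpoint_segments was the relevant parameter in this era, before it was replaced by max_wal_size):

```
# postgresql.conf -- raise temporarily for the bulk load, then restore
checkpoint_segments = 64    # default at the time was 3
```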
- John
[ 2011-02-15 09:34:30.549 GMT ] :4d404b6e.47ef LOG: checkpoint starting: xlog
[ 2011-02-15 09:34:43.656 GMT ] :4d404b6e.47ef LOG: checkpoint complete:
wrote 36135 buffers (0.4%); 0 t
eed to test your raid card batteries (nothing like having a
battery with only a 6 hour runtime when it takes you a couple of days
MTTR), can your database app survive with that low a commit rate? As
you said, you are expecting something almost 4-5x faster with 7200 rpm
disks.
--
d the word fusionIO :)
- John
-Original Message-
From: pgsql-performance-ow...@postgresql.org
[mailto:pgsql-performance-ow...@postgresql.org] On Behalf Of Lars
Sent: Tuesday, January 18, 2011 4:57 AM
To: pgsql-performance@postgresql.org
Subject: [PERFORM] Migrating to Postgresql an
Test,
Sorry, trying to figure out why my email is getting formatted to bits when
posting to the list.
- John
-Original Message-
From: pgsql-performance-ow...@postgresql.org
[mailto:pgsql-performance-ow...@postgresql.org] On Behalf Of Strange, John W
Sent: Tuesday, January 04, 2011 1:01 PM
To
This has gotten a lot better with the 2.x drivers as well.
I'm completely aware of the FusionIO and its advantages/disadvantages. I'm
currently getting the following pgbench results but still only hitting the
array for about 800MB/sec, short of the 3GB/sec that it's capable of. This is
simpl
ve attached the randwrite performance test
below; as you can see, going from 8K -> 32K -> 1M blocksize the drives really
start to move.
Thanks,
- John
[v025...@athenaash05 /]$ fio --filename=/fusionIO/export1/test1 --size=25G
--bs=8k --direct=1 --rw=randwrite --numjobs=8 --runtime=30 --gr
expected power failure.)
When a write() to a Fusion-io device has been acknowledged, the data is
guaranteed to be stored safely. This is a strict requirement for any
enterprise-ready storage device.
Thanks,
John Cagle
Fusion-io, Inc.
they would like to see done please let me
know.
- John
-Original Message-
From: pgsql-performance-ow...@postgresql.org
[mailto:pgsql-performance-ow...@postgresql.org] On Behalf Of Przemek Wozniak
Sent: Thursday, December 23, 2010 11:38 AM
To: pgsql-performance@postgresql.org
Subject
How can you tell when your indexes are starting to get bloated and when you
need to rebuild them? I haven't seen a quick way to tell, and I'm not sure if
it's being tracked.
___
| John
over 600MB/sec on these for a whopping cost of $400 and a little
of my time.
SSD's can be very useful, but you have to be aware of the shortcomings and how
to overcome them.
- John
-Original Message-
From: Marti Raudsepp [mailto:ma...@juffo.org]
Sent: Thursday, December 09, 2010
oad:
http://www.fusionio.com/products/iodrive/ - BEST in slot currently IMHO.
http://www.intel.com/design/flash/nand/extreme/index.htm?wapkw=(X25-E) - not a
bad alternative.
There are other SSD controllers on the market but I have experience with both
so I can recommend both as well.
- J
On 12/6/10 10:03 AM, Tom Lane wrote:
> John Papandriopoulos writes:
>> I am still wondering whether the inheritance_planner(...) can be avoided if
>> the rowtypes of children are the same as the parent?
>
> Possibly, but it's far from a trivial change. The difficul
er the inheritance_planner(...) can be avoided if the
rowtypes of children are the same as the parent? (I'm not yet sufficiently
familiar with the source to determine on my own.) If that's the case, is there
a simple test (like cardinality of columns) that can be used to differentiate
you're using INSERT triggers, you'd want to make sure your plpgsql
function is fast: I'm partitioning by power-of-two, so I can use a right shift
by n bits to quickly compute the insertion table name, rather than using an
if-else-if chain.
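The arithmetic behind that trick can be sketched as follows (the 10-bit shift and the child-table naming are illustrative; the original uses a plpgsql trigger function):

```shell
# With 2^10 = 1024 rows per partition, a right shift by 10 bits maps
# an id straight to its partition index -- no if-else-if chain needed.
id=123456
part=$(( id >> 10 ))           # integer partition index
echo "ptest_${part}"           # prints ptest_120
```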
John
s at any level
in the inheritance hierarchy. You've been a great help in improving my
understanding of PostgreSQL inheritance.
Best,
John
you use the "ONLY" keyword,
they work again: see my original post of this thread. In that case, the
application SQL still retains some simplicity. On this topic, I think there's
quite a bit of confusion and updates to the documentation would help greatly.
John
On 12/4/10 8:42 AM, Tom Lane wrote:
John Papandriopoulos writes:
I've recreated the same example with just one parent table, and 4096 child
tables.
SELECT query planning is lightning fast as before; DELETE and UPDATE cause my
machine to swap.
What's different about DELETE
On 12/3/10 10:20 PM, Tom Lane wrote:
> John Papandriopoulos writes:
>> I've found that a k-ary table inheritance tree works quite well to
>> reduce the O(n) CHECK constraint overhead [1] in the query planner
>> when enabling partition constraint exclusion.
>
>
ne plpgsql functions to
UPDATE or DELETE the leaf tables directly, but using such an interface isn't
terribly elegant.
I therefore tried writing the plpgsql functions for UPDATE and DELETE anyway,
with the idea of linking to a TRIGGER on the parent ptest table. This didn't
On 07/29/10 2:58 PM, Dino Vliet wrote:
Dear postgresql list,
I have some troubles generating data for a analysis task at hand.
I have a table (table A) containing 5 million records and 28
attributes. This table is 461MB big if I copy it to a csv file.
I want to create another
output to make sure the bbu is enabled and the cache is turned on
for the raid array u0 or u1 ...?
--
-- rouilj
John Rouillard System Administrator
Renesys Corporation 603-244-9084 (cell) 603-643-9300 x 111
tions I get:
postgres=# show debug_assertions;
debug_assertions
--
on
(1 row)
Can you let us know when the corrected packages have become available?
Regards,
John
tirely to local memory before printing to
standard out.
I think it grabs the whole result set to calculate the display column
widths. I think there is an option to tweak this, but I don't remember which;
have a look at the psql commands (\?), formatting section.
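For the record, the variable being half-remembered here is presumably FETCH_COUNT, which makes psql fetch the result in batches through a cursor instead of buffering it all (table name illustrative):

```
-- in psql:
\set FETCH_COUNT 1000
SELECT * FROM big_table;    -- now retrieved 1000 rows at a time
```

A side effect is that psql can no longer compute column widths across the whole result, which is consistent with the display-width explanation above.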
--
John E. B
Ah, you're right. Thanks Hannu, that's a good solution.
Hannu Krosing wrote:
On Fri, 2010-04-02 at 16:28 -0400, Beaver, John E wrote:
...
I know that the query used here could have been a COPY statement, which I assume would
be better-behaved, but I'm more conc
Does the psql executable have any ability to do a "fetch many", using a
server-side named cursor, when returning results? It seems like it tries to
retrieve the query entirely to local memory before printing to standard out.
Specifically, I've tried running the following command on my desktop, w
Dear PostgreSQL Creators, I frequently use the PostgreSQL server to manage
data, but I am now stuck with a problem of deleting large objects, namely that
it works too slowly. E.g., deleting 900 large objects of 1 MB each takes around
2.31 minutes. This dataset is not the largest one which I am w
they are less than a 5-day capacity (failure
over a long weekend: 3 days, plus 1-2 day(s) to fix the issue (replace
power supply, mobo, etc.)).
--
-- rouilj
John Rouillard System Administrator
Renesys Corporation 603-244-9084 (cell) 603-643-9300 x 111
hat makes it any faster.
>
> Secondly, the first time you run this query you are reading the 1.8G
> table sequentially, and at about 55MB/s, which isn't gonna get faster
> without more / faster drives under your machine.
>
> On Thu, Aug 6, 2009 at 12:50 AM, Ip Wing Kin John w
ote:
>> Sorry post again.
>
> Nope, still mangled. Can you attach it?
>
--
John
/hdd2/indexes/699221/732287
2379 postgres 832472
/export/space/pg_data/pg_data/data/base/699221/707710.1
2379 postgres 1048576
/export/space/pg_data/pg_data/data/base/699221/707710
--
John
Thanks for your help, Tom. I am certainly amused and pleased that my
exact use case is handled in the very next PostgreSQL release.
Take care,
John L. Clark
On Tue, Apr 21, 2009 at 12:05 PM, John L. Clark wrote:
> On Tue, Apr 21, 2009 at 10:50 AM, Tom Lane wrote:
>> In that case you're going to need to provide a reproducible test case,
>> 'cause it worksforme.
>
> Ok. I scaled back my example by just selecting 1000