[PERFORM] How to force PostgreSQL to work faster?
Hi PostgreSQL team,

My PostgreSQL is installed as part of Cygwin (Windows XP). I compared the performance of PostgreSQL against MS SQL using a small Java program that runs a number of inserts into a table. MS SQL is about 12 times faster :-( These results seem very strange. To the developers of this server: what can you say about this - what settings need to be changed to improve throughput? How can I force PostgreSQL to work faster?

                             Speed (inserts/sec)   Elapsed time (ms)
  MS SQL (Average):                  295                39,869
    testInsert 5000                  263                18,977
                                     255                19,619
                                     306                16,334
    testInsert 1                     315                31,716
                                     324                30,905
                                     319                31,325
    testInsert 2                     241                82,919
                                     313                63,922
                                     317                63,101
  PostgreSQL (Average):               24               520,160
    testInsert 5000                   26               191,434
                                      26               191,264
                                      26               192,295
    testInsert 1                      22               463,669
                                      25               393,510
                                      24               409,528
    testInsert 2                      24               834,911
                                      17             1,184,613
                                      24               820,218
  MS SQL is faster (times):           12                    13

With regards,
Serge.
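Serge's benchmark code is not included in his message, but numbers in this range are usually the result of running every INSERT in its own transaction, so each row pays for a full commit (and, with fsync on, a disk sync). Below is a minimal JDBC sketch of the usual first fix - one transaction and one batch for the whole run; the connection string, table, and column names are hypothetical and would need to match the actual benchmark schema.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class BatchInsertSketch {
        public static void main(String[] args) throws Exception {
            Class.forName("org.postgresql.Driver");
            // Hypothetical connection details; adjust host, database, user, password.
            Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost:5432/testdb", "test", "test");
            try {
                // Turn off autocommit so all 5000 inserts share one transaction
                // (one commit) instead of paying for 5000 separate commits.
                conn.setAutoCommit(false);
                PreparedStatement ps = conn.prepareStatement(
                        "INSERT INTO test_table (id, payload) VALUES (?, ?)");
                for (int i = 0; i < 5000; i++) {
                    ps.setInt(1, i);
                    ps.setString(2, "row " + i);
                    ps.addBatch();         // queue the row client-side
                }
                ps.executeBatch();         // send the whole batch to the server
                conn.commit();             // single commit for all rows
                ps.close();
            } finally {
                conn.close();
            }
        }
    }

Dropping per-row commits alone typically closes most of a gap like this; disabling fsync in postgresql.conf is sometimes suggested as well, but that trades away crash safety and is not a fair comparison.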
[PERFORM] Performance of Update
Hi,

We are currently using PostgreSQL 7.3 on Red Hat Linux 9. We find that when we execute 200,000 UPDATE statements through JDBC, performance degrades noticeably for each statement compared with a smaller run (e.g. 5,000 statements). Is there any suggestion for improving the performance of UPDATE statements in PostgreSQL?

Thanks.

Regards,
Ricky.
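Ricky's JDBC code is not shown, but two things commonly dominate a 200,000-statement UPDATE run on 7.3: sending a freshly parsed SQL string for every row, and the dead row versions every UPDATE leaves behind until VACUUM reclaims them. The sketch below shows the usual shape of the loop - one reusable PreparedStatement, batched and committed in chunks; the table, columns, and connection details are hypothetical.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class BatchUpdateSketch {
        public static void main(String[] args) throws Exception {
            Class.forName("org.postgresql.Driver");
            // Hypothetical connection details and table/column names.
            Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost:5432/appdb", "app", "secret");
            try {
                conn.setAutoCommit(false);
                // One reusable PreparedStatement instead of 200,000 ad hoc strings.
                PreparedStatement ps = conn.prepareStatement(
                        "UPDATE accounts SET balance = ? WHERE id = ?");
                for (int id = 1; id <= 200000; id++) {
                    ps.setDouble(1, id * 1.05);
                    ps.setInt(2, id);
                    ps.addBatch();
                    if (id % 5000 == 0) {
                        ps.executeBatch();   // push a chunk to the server
                        conn.commit();       // keep each transaction a manageable size
                    }
                }
                ps.executeBatch();           // flush any rows not yet sent
                conn.commit();
                ps.close();
            } finally {
                conn.close();
            }
        }
    }

On 7.3 it also helps to run VACUUM ANALYZE on the updated table between large runs, since the dead tuples left by earlier updates are the usual reason a run that is fine at 5,000 statements degrades badly at 200,000.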
Re: [PERFORM] PostgreSQL performance problem -> tuning
On Thu, 7 Aug 2003, Yaroslav Mazurak wrote:

> Hi All!
>
> Richard Huxton wrote:
>
> > On Wednesday 06 August 2003 08:34, Yaroslav Mazurak wrote:
> > > sort_mem = 131072
> >
> > This sort_mem value is *very* large - that's 131MB for *each sort* that ...
>
> It's not TOO large *for PostgreSQL*. When I'm inserting a large amount
> of data into tables, sort_mem helps. A value of 192MB speeds up inserting
> significantly (verified :))!

If I remember right, this is on a PII-400 with 384 MB of RAM. On a machine
that small, 128 MB is probably too big to ensure there are no swap storms.
Once you force the box to swap, you lose.

> > > effective_cache_size = 65536
> >
> > So you typically get about 256MB cache usage in top/free?
>
> > No, top shows 12-20 MB.
> > I reduced effective_cache_size to 4K blocks (16 MB?).
>
> > Cache size is in blocks of 8KB (usually) - it's a way of telling PG what
> > the chances are of disk blocks being already cached by Linux.
>
> PostgreSQL is running on FreeBSD; the memory block is actually 4 KB, but in
> most cases the documentation says 8 KB... I don't know the real disk block
> size exactly, but I suspect that it's 4 KB. :)

FYI, effective_cache_size and shared_buffers are both measured in
PostgreSQL-sized blocks, which default to 8 KB but can be changed at compile
time. So effective_cache_size for a machine that shows 128 MB kernel cache
and 20 MB buffers would be (148*2^20)/(8*2^10) -> (148*2^10)/8 -> 18944.

> I'm afraid that this may be too long. :-(((
> Yesterday I re-executed my query with all changes... after 700 (!) minutes
> the query failed with: "ERROR: Memory exhausted in AllocSetAlloc(104)".
> I don't understand: the result is actually only 8K rows long, but
> PostgreSQL failed! Why?!! The function showcalc is recursive, but in my
> query it is used with level 1 depth only (I know exactly).
> Again: I think that this is PostgreSQL's lack of quality memory
> management. :-(

Can you run top while this is happening and see postgresql's memory usage
climb, or df the disks to see if they're filling up? It could even be that
swap is filling. How much swap space do you have allocated, by the way?

Also, you have to restart postgresql to get the changes to postgresql.conf to
take effect, just in case you haven't. Do a "show all;" in psql to see if the
settings are what they should be.
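For reference, here is a minimal sketch of the block arithmetic described above, assuming the default 8 KB PostgreSQL block size; the cache and buffer figures are hypothetical values of the kind top or free would report.

    public class EffectiveCacheSize {
        public static void main(String[] args) {
            long cachedMb   = 128;        // hypothetical kernel "cached" figure, in MB
            long buffersMb  = 20;         // hypothetical kernel "buffers" figure, in MB
            long blockBytes = 8 * 1024;   // default PostgreSQL block size (8 KB)

            long totalBytes = (cachedMb + buffersMb) * 1024L * 1024L;
            long blocks = totalBytes / blockBytes;   // (148 * 2^20) / (8 * 2^10) = 18944
            System.out.println("effective_cache_size = " + blocks);
        }
    }

The result is the value to put in postgresql.conf, followed by a restart (as noted above) and a "show all;" in psql to confirm it took effect.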
Re: [PERFORM] ext3 block size
On Wed, 6 Aug 2003, Wilson A. Galafassi Jr. wrote:

> hello.
> my database size is 5GB. what block size is recommended?

Well, the biggest block size currently supported by stock Linux distros is 4k,
so I'd go with that. PostgreSQL's default block size of 8k is fine also. Note
that Linux page/system/file block sizes are NOT related to PostgreSQL block
sizes.
Re: [PERFORM] PostgreSQL performance problem -> tuning
On 7 Aug 2003 at 10:05, Yaroslav Mazurak wrote:

> > It needs to reflect how much cache the system is using - try the "free"
> > command to see figures.
>
> I can't find the "free" utility on FreeBSD 4.7. :(

Grr.. I don't like FreeBSD for its top output. Active/inactive/wired.. Grr..
why can't it be shared/buffered/cached? Same goes for HP-UX top. Looking at it
one gets hardly any real information.. Anyway, that's just me. Top on FreeBSD
seems pretty unintuitive, but if you find any documentation on it, that would
help you. (Haven't booted into FreeBSD in ages, so no data out of my head..)

You can try various sysctls on FreeBSD. Basically the idea is to find out how
much memory is used and how much is cached. FreeBSD must be providing that in
some form..

IIRC there is a limit on the filesystem cache on FreeBSD, 300MB by default. If
that is the case, you might have to raise it to make effective_cache_size
really effective..

HTH

Bye
 Shridhar

--
Another war ... must it always be so? How many comrades have we lost in this
way? ... Obedience. Duty. Death, and more death ...
-- Romulan Commander, "Balance of Terror", stardate 1709.2
Re: [PERFORM] Performance Tuning
On Mon, 2003-08-11 at 15:16, Bruce Momjian wrote:

> That _would_ work if ext2 was a reliable file system --- it is not.

Bruce -

I'd like to know your evidence for this. I'm not refuting it, but I'm a >7
year Linux user (including several clusters, all of which have run ext2 or
ext3) and keep a fairly close ear to kernel newsgroups, announcements, and
changelogs. I am aware that there have very occasionally been corruption
problems, but my understanding is that these are fixed (and quickly). In any
case, I'd say that your assertion is not widely known, and I'd appreciate
some data or references.

As for PostgreSQL on ext2 and ext3, I recently switched from ext3 to ext2
(Stephen Tweedie was insightful to facilitate this backward compatibility). I
did this because I had a 45M-row update on one table that was taking
inordinate time (killed after 10 hours), even though creating the database
from backup takes ~4 hours including indexing (see the pgsql-performance post
on 2003/07/22). CPU usage was ~2% on an otherwise unloaded, fast, SCSI160
machine. vmstat io suggested that PostgreSQL was writing something on the
order of 100x as many blocks as it was reading. My untested interpretation
was that the update bookkeeping as well as the data update were all getting
journalled, the journal space would fill, get sync'd, then repeat. In effect,
all blocks were being written TWICE just for the journalling, never mind the
overhead for PostgreSQL transactions. This emphasizes that journals probably
work best with short burst writes and syncing during lulls rather than
sustained writes.

I ended up solving the update issue without really updating, so ext2 timings
aren't known. So, you may want to test this yourself if you're concerned.

-Reece

--
Reece Hart, Ph.D.                       [EMAIL PROTECTED], http://www.gene.com/
Genentech, Inc.                         650/225-6133 (voice), -5389 (fax)
Bioinformatics and Protein Engineering  http://www.in-machina.com/~reece/
1 DNA Way, MS-93                        [EMAIL PROTECTED], GPG: 0x25EC91A0
South San Francisco, CA 94080-4990
Re: [PERFORM] Odd problem with performance in duplicate database
Peter,

> I'm sure that you've thought of this, but it sounds like you may not have
> done an analyze in your new DB.

Yes. Also a VACUUM. Also forcing a REINDEX on the major involved tables. Also
running counts on the pg_* system tables to see if any objects did not get
restored from the backup as compared with the live database.

By everything I can measure, the live database and the test are identical;
yet the test does not think that idx_caseclients_case is very accessible, and
the live database knows it is.

Is this perhaps a bug with ANALYZE statistics in 7.2.4? I know that in that
version I don't have the option of increasing the statistics sampling ...

--
-Josh Berkus
 Aglio Database Solutions
 San Francisco
Re: [PERFORM] Performance Tuning
Neil Conway wrote:
> On Mon, Aug 11, 2003 at 06:59:30PM -0400, Bruce Momjian wrote:
> > Uh, the ext2 developers say it isn't 100% reliable --- at least that is
> > what I was told. I don't know any personally, but I mentioned it while I
> > was visiting Red Hat, and they didn't refute it.
>
> IMHO, if we're going to say "don't use X on production PostgreSQL
> systems", we need to have some better evidence than "no one has
> said anything to the contrary, and I heard X is bad". If we can't
> produce such evidence, we shouldn't say anything at all, and users
> can decide what to use for themselves.
>
> (Not that I'm agreeing or disagreeing about ext2 in particular...)

I don't use Linux and was just repeating what I had heard from others and
read in postings. I don't have any first-hand experience with ext2 (except
for a laptop I borrowed that wouldn't boot after being shut off), but others
on this mailing list have said the same thing. Here is another email talking
about corrupted ext2 file systems:

http://groups.google.com/groups?q=ext2+corrupt+%22power+failure%22&start=10&hl=en&lr=&ie=UTF-8&selm=20021128061318.GE18980%40ursine&rnum=11

From his wording, I assume he is not talking about fsck-correctable
corruption. From what I remember, the ext2 failure cases were quite small,
but known by the ext2 developers, and considered too large a performance hit
to correct.

> > > My untested interpretation was that the update bookkeeping as well as
> > > data update were all getting journalled, the journal space would fill,
> > > get sync'd, then repeat. In effect, all blocks were being written TWICE
> > > just for the journalling, never mind the overhead for PostgreSQL
> > > transactions.
>
> Journalling may or may not have been the culprit, but I doubt everything
> was being written to disk twice:
>
> (a) ext3 does metadata-only journalling by default

If that is true, why was I told people have to mount their ext3 file systems
with metadata-only journalling? Again, I have no experience myself, but why
are people telling me this?

> (b) PostgreSQL only fsyncs WAL records to disk, not the data itself

Right. WAL recovers the data.

--
  Bruce Momjian                     |  http://candle.pha.pa.us
  [EMAIL PROTECTED]                 |  (610) 359-1001
  +  If your life is a hard drive,  |  13 Roberts Road
  +  Christ can be your backup.     |  Newtown Square, Pennsylvania 19073