Re: [PERFORM] Weird context-switching issue on Xeon
Same problem with dual 1GHz P3s running PostgreSQL 7.4.2, Linux 2.4.x, and 2GB RAM, under load, with long transactions (i.e. one "cannot serialize" rollback per minute). 200K context switches per second was the worst observed with vmstat. Finally moved the DB to a single-Xeon box.

---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
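The 200K figure above is the kind of number vmstat's "cs" column reports. As a minimal sketch (Linux only, assuming the standard `/proc/stat` layout), the same rate can be computed by sampling the cumulative `ctxt` counter twice:

```python
import time

def ctxt_count(stat_text: str) -> int:
    """Extract the cumulative context-switch counter from /proc/stat text."""
    for line in stat_text.splitlines():
        if line.startswith("ctxt "):
            return int(line.split()[1])
    raise ValueError("no ctxt line found")

def cs_per_second(interval: float = 1.0) -> float:
    """Sample /proc/stat twice; return context switches per second
    (the same kernel counter vmstat's 'cs' column is derived from)."""
    with open("/proc/stat") as f:
        before = ctxt_count(f.read())
    time.sleep(interval)
    with open("/proc/stat") as f:
        after = ctxt_count(f.read())
    return (after - before) / interval
```

A sustained six-figure result from `cs_per_second()` under load would match the symptom described in this thread.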
Re: [PERFORM] Context Switching issue: Spinlock doesn't fix.
Hello,

Is the context-switching problem resolved in 8.0? Can I drop in another Xeon?

Thanks,
Jelle

On Wed, 2 Jun 2004, Josh Berkus wrote:
> Folks,
>
> I've been testing varying SPINS_PER_DELAY in a client's installation of
> PostgreSQL against a copy of a production database, to test varying this
> statistic as a way of fixing the issue.
>
> It does not seem to work.
>
> I've tested all of the following graduated levels:
>
> 100 (the original)
> 250
> 500
> 1000
> 2000
> 5000
> 10000
> 20000
> 30000
> 50000
>
> None of these quantities seems to make any difference at all in the number of
> context switches -- neither down nor up. Seems to me like this is a dead
> end. Does anyone have test results that show otherwise?
>
> -- http://www.jibjab.com
[PERFORM] iostat question
Hello All,

I have an iostat question: one of the RAID arrays seems to act differently than the other three. Is this reasonable behavior for the database, or should I suspect a hardware or configuration problem?

But first some background: PostgreSQL 7.4.2, Linux 2.4.20, 2GB RAM, one Xeon 2.4GHz with HT turned off, 3Ware SATA RAID controller with 8 identical drives configured as 4 RAID-1 spindles, 64MB RAM disk.

postgresql.conf differences to postgresql.conf.sample:

tcpip_socket = true
max_connections = 128
shared_buffers = 2048
vacuum_mem = 16384
max_fsm_pages = 5
wal_buffers = 128
checkpoint_segments = 64
effective_cache_size = 196000
random_page_cost = 1
default_statistics_target = 100
stats_command_string = true
stats_block_level = true
stats_row_level = true

The database is spread over 5 spindles:

/ram0 holds the busiest insert/update/delete table and associated indexes for temporary session data
/sda5 holds the OS and most of the tables and indexes
/sdb2 holds the WAL
/sdc1 holds the 2nd busiest i/u/d table (70% of the writes)
/sdd1 holds the single index for that busy table on /sdc1

Lately we have 45 connections open from a python/psycopg connection pool. 99% of the reads are cached. No swapping. And finally iostat reports:

Device:    rrqm/s wrqm/s  r/s  w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz  await svctm %util
/dev/sda5    0.01   3.32 0.01 0.68   0.16  32.96  0.08 16.48    48.61     0.09  12.16  2.01  0.14
/dev/sdb2    0.00   6.38 0.00 3.54   0.01  79.36  0.00 39.68    22.39     0.12   3.52  1.02  0.36
/dev/sdc1    0.03   0.13 0.00 0.08   0.27   1.69  0.13  0.84    24.06     0.13 163.28 13.75  0.11
/dev/sdd1    0.01   8.67 0.00 0.77   0.06  82.35  0.03 41.18   107.54     0.09  10.51  2.76  0.21

The /sdc1's await seems awfully long compared to the rest of the stats.

Jelle

-- http://www.sv650.org/audiovisual/loading_a_bike.mpeg Osama-in-October office pool.
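The anomaly in the report above can be spotted mechanically. Here is a minimal sketch (column names and the 3x-outlier threshold are my own choices, not anything iostat provides) that parses extended iostat output and flags devices whose await dwarfs the others:

```python
def parse_iostat(text: str) -> dict:
    """Parse `iostat -x`-style extended output into {device: {column: value}}."""
    lines = [l for l in text.strip().splitlines() if l.strip()]
    cols = lines[0].split()[1:]          # drop the "Device:" label itself
    stats = {}
    for line in lines[1:]:
        fields = line.split()
        stats[fields[0]] = dict(zip(cols, map(float, fields[1:])))
    return stats

def await_outliers(stats: dict, factor: float = 3.0) -> list:
    """Return devices whose await exceeds `factor` times the mean of the rest."""
    flagged = []
    for dev, row in stats.items():
        others = [r["await"] for d, r in stats.items() if d != dev]
        if others and row["await"] > factor * (sum(others) / len(others)):
            flagged.append(dev)
    return flagged
```

Fed the four rows above, only /dev/sdc1 (await 163.28 ms versus a mean of under 9 ms for the other three) would be flagged, which matches the eyeball diagnosis in the post.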
[PERFORM] Questions about 2 databases.
Hello All,

I have a couple of questions about running 2 databases:

1) On a single 7.4.6 postgres instance, does each database have its own WAL file, or is that shared? Is it the same on 8.0.x?

2) What's the high-performance way of moving 200 rows between similar tables on different databases? Does it matter if the databases are on the same or separate postgres instances?

Background: My web app does lots of inserts that aren't read until a session is complete. The plan is to put the heavy insert session onto a ramdisk-based pg-db and transfer the relevant data to the master pg-db upon session completion. Currently running 7.4.6. Individual session data is not as critical as the master pg-db, so the risk associated with running the session pg-db on a ramdisk is acceptable.

All this is to get past the I/O bottleneck; already tweaked the config files, run on multiple RAID-1 spindles, profiled the queries, maxed the CPU/RAM. Migrating to 64-bit Fedora soon.

Thanks, this mailing list has been invaluable.

Jelle
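For question 2, the usual answer is to stream the rows with COPY rather than row-by-row INSERTs. A minimal sketch of that pattern, assuming the modern psycopg2 API (the DSNs and table name `mytable` are placeholders, not anything from the post):

```python
import io

def copy_sql(table: str) -> tuple:
    """Build the paired COPY statements for streaming a table between servers."""
    return ("COPY %s TO STDOUT" % table, "COPY %s FROM STDIN" % table)

def transfer_table(src_dsn: str, dst_dsn: str, table: str) -> None:
    """Stream `table` from the session db to the master db via COPY.
    DSNs and table name are illustrative placeholders."""
    import psycopg2  # deferred so copy_sql() works without the driver installed
    out_sql, in_sql = copy_sql(table)
    buf = io.StringIO()
    with psycopg2.connect(src_dsn) as src, psycopg2.connect(dst_dsn) as dst:
        src.cursor().copy_expert(out_sql, buf)   # dump rows into the buffer
        buf.seek(0)
        dst.cursor().copy_expert(in_sql, buf)    # load them in one statement
```

One COPY per side means one statement each way instead of 200 INSERTs, and it works the same whether the two databases live in one postgres instance or two.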
Re: [PERFORM] Questions about 2 databases.
On Fri, 11 Mar 2005, Tom Lane wrote:

> [ snip ]
>
> COPY would be my recommendation. For a no-programming-effort solution
> you could just pipe the output of pg_dump --data-only -t mytable into
> psql. Not sure if it's worth developing a custom application to
> replace that.

I'm a programming-effort kind of guy, so I'll try COPY.

>> My web app does lots of inserts that aren't read until a session is
>> complete. The plan is to put the heavy insert session onto a ramdisk
>> based pg-db and transfer the relevant data to the master pg-db upon
>> session completion. Currently running 7.4.6.
>
> Unless you have a large proportion of sessions that are abandoned and
> hence never need be transferred to the main database at all, this seems
> like a dead waste of effort :-(. The work to put the data into the main
> database isn't lessened at all; you've just added extra work to manage
> the buffer database.

The insert-heavy sessions average 175 page hits generating XML and 1000 insert/updates, which comprise 90% of the insert/update load, of which 200 inserts need to be transferred to the master db. The other sessions are read/cache bound. I'm hoping to get a speed-up from moving the temporary stuff off the master db and using 1 transaction instead of 175 to the disk-based master db.

Thanks,
Jelle
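The "1 transaction instead of 175" reasoning is the core of the plan: each commit forces a WAL flush, so batching the buffered rows into one transaction amortizes that cost. A minimal sketch of the flush step (sqlite3 stands in for PostgreSQL here purely to keep the example self-contained; the table name and schema are invented):

```python
import sqlite3

def flush_session(rows, db_path=":memory:"):
    """Move buffered session rows to the master db in ONE transaction,
    instead of one commit per page hit. Returns the row count afterwards."""
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS session_data (k TEXT, v TEXT)")
    with conn:  # a single BEGIN/COMMIT wraps the whole batch
        conn.executemany("INSERT INTO session_data VALUES (?, ?)", rows)
    n = conn.execute("SELECT count(*) FROM session_data").fetchone()[0]
    conn.close()
    return n
```

With PostgreSQL the same shape applies: open one transaction, run the batch (or a single COPY), commit once, and the per-commit fsync overhead is paid once rather than 175 times.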