Re: [GENERAL] UPDATE on two large datasets is very slow

2007-04-04 Thread Listmail
Any suggestions for finding out where all the time is being spent? I'm...
- time spent updating indexes? Do you have a lot of them?
- try with fsync off (not in production, but it will give you an idea)
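A quick way to try the fsync suggestion on a disposable test instance (the setting is reloadable, but with fsync off any crash can corrupt the whole cluster, so this is strictly for throwaway benchmarking):

    # postgresql.conf -- throwaway test instance only; with fsync = off,
    # a crash can leave the cluster unrecoverable
    fsync = off

Reload the server (e.g. pg_ctl reload) and re-run the batch; if the runtime drops sharply, commit/WAL-flush overhead is the likely bottleneck.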

Re: [GENERAL] UPDATE on two large datasets is very slow

2007-04-04 Thread Steve Gerhardt
The problem with this approach is really that the tracker + website combination is designed to show really detailed statistics for every user on every torrent, which (as you mentioned) complicates things a great deal. It's also able to store all the history information for all the clients, which...

Re: [GENERAL] UPDATE on two large datasets is very slow

2007-04-04 Thread Steve Gerhardt
Tom Lane wrote:
> You're focusing on the wrong thing --- there's nothing wrong with the
> plan. It's only taking 9 seconds to perform the merge join. The other
> 183 seconds are going somewhere else; you need to find out where.
One thing that came to mind was triggers, which would be shown in the EXPLAIN...
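Since EXPLAIN ANALYZE actually executes the statement, one way to check for per-row trigger overhead without keeping the changes is to wrap it in a transaction and roll back. A sketch against the thread's peers2 table (the SET and WHERE clauses are made up for illustration):

    BEGIN;
    -- runs the update for real and prints per-node timings; time spent
    -- in triggers, if any, is reported separately after the plan
    EXPLAIN ANALYZE
    UPDATE peers2 SET uploaded = uploaded + 1024 WHERE torrent_id = 42;
    ROLLBACK;  -- discard the test update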

Re: [GENERAL] UPDATE on two large datasets is very slow

2007-04-04 Thread Martijn van Oosterhout
On Wed, Apr 04, 2007 at 11:34:22AM +0200, Tommy Gildseth wrote:
> > For the record, this is what the SQL MERGE command is for... I don't
> > think anyone is working on implementing that though...
>
> This will possibly provide a solution to this question:
> http://www.postgresql.org/docs/current/
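For reference, the SQL-standard MERGE folds "update the row if it exists, insert it if not" into a single statement. PostgreSQL did not support it at the time (it eventually landed in PostgreSQL 15); a sketch with illustrative table and column names:

    MERGE INTO peers p
    USING incoming i ON (p.peer_id = i.peer_id)
    WHEN MATCHED THEN
        UPDATE SET uploaded = i.uploaded, downloaded = i.downloaded
    WHEN NOT MATCHED THEN
        INSERT (peer_id, uploaded, downloaded)
        VALUES (i.peer_id, i.uploaded, i.downloaded);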

Re: [GENERAL] UPDATE on two large datasets is very slow

2007-04-04 Thread Tommy Gildseth
Martijn van Oosterhout wrote:
> On Mon, Apr 02, 2007 at 08:24:46PM -0700, Steve Gerhardt wrote:
> > I've been working for the past few weeks on porting a closed source
> > BitTorrent tracker to use PostgreSQL instead of MySQL for storing
> > statistical data, but I've run into a rather large snag. The tracker...

Re: [GENERAL] UPDATE on two large datasets is very slow

2007-04-03 Thread Martijn van Oosterhout
On Mon, Apr 02, 2007 at 08:24:46PM -0700, Steve Gerhardt wrote:
> I've been working for the past few weeks on porting a closed source
> BitTorrent tracker to use PostgreSQL instead of MySQL for storing
> statistical data, but I've run into a rather large snag. The tracker in
> question buffers its...

Re: [GENERAL] UPDATE on two large datasets is very slow

2007-04-03 Thread Tom Lane
Steve Gerhardt <[EMAIL PROTECTED]> writes:
> # EXPLAIN ANALYZE UPDATE peers2...etc etc
>                         QUERY PLAN
> -
> Merge Join  (cost=262518.76..271950.65 rows=14933 width=153)
>   (actual time=8477.422..9216.893 rows=26917 loops=1)

Re: [GENERAL] UPDATE on two large datasets is very slow

2007-04-03 Thread Listmail
I can't help but think that the way this application writes data is optimized for MySQL's transactionless table type, where lots of simultaneous input streams writing at the same time to the same table would be death. Can you step back and work on how the app writes out data, so that it opens a...

Re: [GENERAL] UPDATE on two large datasets is very slow

2007-04-03 Thread Jonathan Vanasco
On Apr 3, 2007, at 11:44 AM, Scott Marlowe wrote:
> I can't help but think that the way this application writes data is
> optimized for MySQL's transactionless table type, where lots of
> simultaneous input streams writing at the same time to the same table
> would be death. Can you step back and work...
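On the PostgreSQL side, the first-order consequence of that write pattern is that every standalone statement commits (and flushes WAL) on its own. Simply wrapping each buffered batch in one explicit transaction amortizes that cost; a sketch with illustrative names:

    BEGIN;
    UPDATE peers SET uploaded = 100 WHERE peer_id = 1;
    UPDATE peers SET uploaded = 250 WHERE peer_id = 2;
    -- ...the rest of the buffered batch...
    COMMIT;  -- one durable commit for the whole batch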

Re: [GENERAL] UPDATE on two large datasets is very slow

2007-04-03 Thread Scott Marlowe
On Mon, 2007-04-02 at 22:24, Steve Gerhardt wrote:
> I've been working for the past few weeks on porting a closed source
> BitTorrent tracker to use PostgreSQL instead of MySQL for storing
> statistical data, but I've run into a rather large snag. The tracker in
> question buffers its updates to the...

[GENERAL] UPDATE on two large datasets is very slow

2007-04-03 Thread Steve Gerhardt
I've been working for the past few weeks on porting a closed source BitTorrent tracker to use PostgreSQL instead of MySQL for storing statistical data, but I've run into a rather large snag. The tracker in question buffers its updates to the database, then makes them all at once, sending anywhere...
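A common PostgreSQL pattern for exactly this kind of buffered bulk update (not necessarily what the tracker ended up doing) is to COPY the batch into a temporary table and apply it with one set-based UPDATE ... FROM; table and column names below are illustrative:

    BEGIN;
    CREATE TEMP TABLE peers_in (
        peer_id    int,
        uploaded   bigint,
        downloaded bigint
    ) ON COMMIT DROP;
    COPY peers_in FROM STDIN;            -- bulk-load the buffered stats
    UPDATE peers p
       SET uploaded   = i.uploaded,
           downloaded = i.downloaded
      FROM peers_in i
     WHERE p.peer_id = i.peer_id;        -- one set-based update per batch
    COMMIT;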