Any suggestions for finding out where all the time is being spent? I'm…
- time spent updating indexes? Do you have a lot of them?
- try with fsync off (not in production, but it will give you an idea)
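The fsync-off test above is a server-level setting change; a sketch of how it might be done on a throwaway test instance (never on production data, since a crash with fsync off can corrupt the database):

```
# postgresql.conf -- test instance only
fsync = off
```

followed by a configuration reload (e.g. `pg_ctl reload` or `SELECT pg_reload_conf();`). If the batch then runs dramatically faster, the time is going into commit-time disk flushes rather than into the query plan itself.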
The problem with this approach is really that the tracker + website
combination is designed to show really detailed statistics for every
user on every torrent, which (as you mentioned) complicates things a
great deal. It's also able to store all the history information for all
the clients, which…
Tom Lane wrote:
You're focusing on the wrong thing --- there's nothing wrong with the plan.
It's only taking 9 seconds to perform the merge join. The other 183
seconds are going somewhere else; you need to find out where.
One thing that came to mind was triggers, which would be shown in the
EXPLAIN ANALYZE output…
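This is straightforward to check: PostgreSQL's EXPLAIN ANALYZE prints per-trigger timing after the plan tree, so trigger overhead is visible even though it belongs to no plan node. A sketch against a hypothetical table (names are illustrative, not from the thread):

```sql
-- Table and column names are assumptions for illustration only.
EXPLAIN ANALYZE
UPDATE peers2
   SET uploaded = uploaded + 1024
 WHERE last_seen < now() - interval '1 hour';
-- Any ON UPDATE triggers are reported below the plan as lines like:
--   Trigger <trigger name>: time=... calls=...
```

If those trigger lines account for most of the 183 "missing" seconds, the triggers, not the merge join, are the problem.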
On Wed, Apr 04, 2007 at 11:34:22AM +0200, Tommy Gildseth wrote:
> >For the record, this is what the SQL MERGE command is for... I don't
> >think anyone is working on implementing that though...
> >
> This will possibly provide a solution to this question:
> http://www.postgresql.org/docs/current/
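For reference, the SQL-standard MERGE mentioned above folds the insert-or-update decision into a single statement (PostgreSQL eventually gained it in version 15, long after this thread). A sketch with hypothetical table and column names:

```sql
-- incoming_stats holds the tracker's buffered rows; names are assumed.
MERGE INTO peers2 AS p
USING incoming_stats AS i
   ON p.peer_id = i.peer_id
WHEN MATCHED THEN
    UPDATE SET uploaded = i.uploaded, downloaded = i.downloaded
WHEN NOT MATCHED THEN
    INSERT (peer_id, uploaded, downloaded)
    VALUES (i.peer_id, i.uploaded, i.downloaded);
```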
Martijn van Oosterhout wrote:
On Mon, Apr 02, 2007 at 08:24:46PM -0700, Steve Gerhardt wrote:
I've been working for the past few weeks on porting a closed source
BitTorrent tracker to use PostgreSQL instead of MySQL for storing
statistical data, but I've run into a rather large snag. The tracker…
On Mon, Apr 02, 2007 at 08:24:46PM -0700, Steve Gerhardt wrote:
> I've been working for the past few weeks on porting a closed source
> BitTorrent tracker to use PostgreSQL instead of MySQL for storing
> statistical data, but I've run into a rather large snag. The tracker in
> question buffers its…
Steve Gerhardt <[EMAIL PROTECTED]> writes:
> # EXPLAIN ANALYZE UPDATE peers2...etc etc
> QUERY PLAN
> ------------------------------------------------------------------
> Merge Join (cost=262518.76..271950.65 rows=14933 width=153) (actual
> time=8477.422..9216.893 rows=26917 loops=1)
>
I can't help but think that the way this application writes data is
optimized for MySQL's transactionless table type, where lots of
simultaneous input streams writing at the same time to the same table
would be death.
Can you step back and work on how the app writes out data, so that it
opens a…
On Apr 3, 2007, at 11:44 AM, Scott Marlowe wrote:
I can't help but think that the way this application writes data is
optimized for MySQL's transactionless table type, where lots of
simultaneous input streams writing at the same time to the same table
would be death.
Can you step back and work…
On Mon, 2007-04-02 at 22:24, Steve Gerhardt wrote:
> I've been working for the past few weeks on porting a closed source
> BitTorrent tracker to use PostgreSQL instead of MySQL for storing
> statistical data, but I've run into a rather large snag. The tracker in
> question buffers its updates to the…
I've been working for the past few weeks on porting a closed source
BitTorrent tracker to use PostgreSQL instead of MySQL for storing
statistical data, but I've run into a rather large snag. The tracker in
question buffers its updates to the database, then makes them all at
once, sending anywhere…
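One common shape for this kind of buffered, all-at-once write pattern (a sketch; the table and column names are assumptions, not taken from the post) is to bulk-load the buffer into a temporary table and apply it with one set-based UPDATE, so the server performs a single join instead of thousands of individual statements:

```sql
BEGIN;
CREATE TEMP TABLE peer_updates
    (LIKE peers2 INCLUDING DEFAULTS) ON COMMIT DROP;
-- Stream the buffered rows in one round trip instead of row-by-row UPDATEs.
COPY peer_updates FROM STDIN;
UPDATE peers2 AS p
   SET uploaded   = u.uploaded,
       downloaded = u.downloaded
  FROM peer_updates AS u
 WHERE p.peer_id = u.peer_id;
COMMIT;
```

The merge join the planner already produces would then run once per batch against the staged rows, which is usually far cheaper than the per-row alternative.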