On Mar 31, 2008, at 8:23 PM, Ravi Chemudugunta wrote:
In general I would recommend that you benchmark them using
as-close-to-real load as possible against as-real-as-possible data.
I am running a benchmark with around 900,000-odd records (real load on
the live machine :o ) ... should hopefully show some good benchmark results.
Robins Tharakan wrote:
I think James was talking about Sybase. PostgreSQL, on the other hand,
has a slightly better way to do this.
SELECT ... FOR UPDATE allows you to lock a given row (based on the
SELECT ... WHERE clause) and update it... without worrying about a
concurrent modification. Of
I think James was talking about Sybase. PostgreSQL, on the other hand, has a
slightly better way to do this.
SELECT ... FOR UPDATE allows you to lock a given row (based on the SELECT
... WHERE clause) and update it... without worrying about a concurrent
modification. Of course, if the SELECT ... WHERE
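For illustration, the pattern looks roughly like this (the accounts table
and its columns are invented here, not taken from the thread):

    BEGIN;

    -- Lock the matching row; any other transaction trying to lock or
    -- update the same row will block until this transaction ends.
    SELECT balance
      FROM accounts
     WHERE account_id = 42
       FOR UPDATE;

    -- The row cannot change underneath us before the UPDATE runs.
    UPDATE accounts
       SET balance = balance - 100
     WHERE account_id = 42;

    COMMIT;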
> I find myself having to do this in Sybase, but it sucks because there's
> a race - if there's no row updated then there's no lock and you race
> another thread doing the same thing. So you grab a row lock on a
> sacrificial row used as a mutex, or just a table lock. Or you just
> accept that some
Stephen Denne wrote:
A third option is to update and, if no row was found, insert (a sketch
follows the quoted text below).
I find myself having to do this in Sybase, but it sucks because there's
a race - if there's no row updated then there's no lock and you race
another thread doing the same thing. So you grab a row lock on a
sacrificial row used as a mutex, or just a table lock.
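A minimal PL/pgSQL sketch of that third option (the counters table and the
bump_counter name are made up for illustration). Note that without extra
locking it still has the race described above if two sessions both see that
nothing was updated:

    CREATE OR REPLACE FUNCTION bump_counter(p_name text) RETURNS void AS $$
    BEGIN
        -- Try the update first.
        UPDATE counters SET hits = hits + 1 WHERE name = p_name;

        -- FOUND is false when the UPDATE matched no rows, so insert instead.
        IF NOT FOUND THEN
            INSERT INTO counters (name, hits) VALUES (p_name, 1);
        END IF;
    END;
    $$ LANGUAGE plpgsql;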
"Robins Tharakan" <[EMAIL PROTECTED]> writes:
> Would it be fine to consider that an UPDATE query that found no records to
> update is (performance-wise) the same as a SELECT query with the same WHERE
> clause?
> As in, does an UPDATE query incur additional overhead even before it finds
> the record to work on?
Come to think of it:
Would it be fine to consider that an UPDATE query that found no records to
update is (performance-wise) the same as a SELECT query with the same WHERE
clause?
As in, does an UPDATE query incur additional overhead even before it finds
the record to work on? (A quick way to check empirically is sketched below.)
*Robins*
On T
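One way to check this empirically (table and column names invented, not from
the thread) is to EXPLAIN ANALYZE both statements with the same WHERE clause
and compare the reported times. EXPLAIN ANALYZE really executes the UPDATE,
so wrap it in a transaction and roll back:

    BEGIN;

    EXPLAIN ANALYZE
    SELECT * FROM accounts WHERE account_id = 42;

    EXPLAIN ANALYZE
    UPDATE accounts SET balance = balance WHERE account_id = 42;

    -- EXPLAIN ANALYZE actually ran the UPDATE, so undo it.
    ROLLBACK;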
Hi, thanks for the quick reply.
> In general I would recommend that you benchmark them using
> as-close-to-real load as possible against as-real-as-possible data.
I am running a benchmark with around 900,000-odd records (real load on
the live machine :o ) ... should hopefully show some good benchmark results.
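For anyone reproducing this, a simple way to time the candidate statements
against a copy of real data is psql's \timing; the statement below is only a
placeholder for whichever variant is being measured:

    -- in psql, connected to a copy of the production data
    \timing on

    -- run each variant many times and compare the reported times, e.g.
    UPDATE accounts SET balance = balance + 1 WHERE account_id = 42;

For concurrent load, pgbench with a custom script (-f) against the same data
gets closer to production behaviour.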
Stephen Frost wrote:
> * Ravi Chemudugunta ([EMAIL PROTECTED]) wrote:
> > Which version is faster?
>
> In general I would recommend that you benchmark them using
> as-close-to-real load as possible against as-real-as-possible data.
>
> > Does the exception mechanism add any overhead?
>
> Yes, using exceptions adds a fair bit of overhead
* Ravi Chemudugunta ([EMAIL PROTECTED]) wrote:
> Which version is faster?
In general I would recommend that you benchmark them using
as-close-to-real load as possible against as-real-as-possible data.
> Does the exception mechanism add any overhead?
Yes, using exceptions adds a fair bit of overhead
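For context, the overhead comes from the subtransaction that every entry into
a BEGIN ... EXCEPTION block sets up. A minimal sketch of the exception-based
variant (the counters table and function name are invented for illustration,
and a unique constraint on counters(name) is assumed):

    CREATE OR REPLACE FUNCTION bump_counter_exc(p_name text) RETURNS void AS $$
    BEGIN
        -- The inner BEGIN ... EXCEPTION block starts a subtransaction on
        -- every call; that is where the extra cost comes from.
        BEGIN
            INSERT INTO counters (name, hits) VALUES (p_name, 1);
        EXCEPTION WHEN unique_violation THEN
            UPDATE counters SET hits = hits + 1 WHERE name = p_name;
        END;
    END;
    $$ LANGUAGE plpgsql;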