Thanks. That's the best reason I can think of too.
"Less expensive" feels like taking shortcuts to speed things up, and not a
good reason to base a decision on.
Although maintaining backward compatibility makes sense (to some extent), if
old PostgreSQL doesn't behave properly, it should be fixed, shouldn't it?
Simon Riggs wrote:
>
> The real question is why anyone would actually perform that kind of
> UPDATE. It doesn't really make much sense to increment a PK value.
>
> PostgreSQL is good at supporting things people want and need, so
> differences do exist in places that are fairly low priority.
>
On Wed, Jul 20, 2011 at 9:58 AM, Rob Richardson wrote:
> It seems to me that it is inherently wrong to perform any operation on a
> database that depends on the order in which records are retrieved,
> without specifying that order in an ORDER BY clause. The "update t1 set
> f1 = f1 + 1" assumes that the operation will be performed in an order
> that guarantees no intermediate duplicates.
It seems to me that it is inherently wrong to perform any operation on a
database that depends on the order in which records are retrieved,
without specifying that order in an ORDER BY clause. The "update t1 set
f1 = f1 + 1" assumes that the operation will be performed in an order
that guarantees no intermediate duplicates.
On Wed, Jul 20, 2011 at 4:18 PM, Samuel Hwang wrote:
> The real question is why PostgreSQL behaves differently from other
> major DBMSs. IMHO, doing the check at the set-operation boundary is
> more appropriate than at the row boundary.
The real question is why anyone would actually perform that kind of
UPDATE. It doesn't really make much sense to increment a PK value.
Samuel Hwang writes:
> Thanks. But I am not looking for how to avoid the problem at this
> moment,
> I am more interested in why PostgreSQL is designed to work this way.
Performance --- it's significantly more expensive to do a deferred
uniqueness check, since you have to visit the index a second time.
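To illustrate where that second visit happens (a sketch, not tested here; it
assumes the unique constraint is declared deferrable and named t1_f1_key,
which the setup in this thread does not do):

begin;
set constraints t1_f1_key deferred;
update t1 set f1 = f1 + 1;  -- index entries written now (first visit)
commit;                     -- uniqueness re-verified here (second visit)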
Thanks for the reply.
You are right, the result is all or nothing, so it's still atomic. I
found my mistake and posted a clarification for my question.
I know that in PostgreSQL 9.0 a unique constraint can be declared
deferrable. However, there is still no such option for unique indexes.
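For example (an untested sketch; the constraint name t1_f1_key is my own),
replacing the unique index with a deferrable unique constraint moves the
check from per-row to end of statement, and the problematic update then goes
through:

drop table if exists t1;
create table t1 (f1 int);
alter table t1 add constraint t1_f1_key unique (f1) deferrable initially immediate;
insert into t1(f1) values (1), (2), (3);
update t1 set f1 = f1 + 1;  -- checked at end of statement, no error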
The real question is why PostgreSQL behaves differently from other
major DBMSs. IMHO, doing the check at the set-operation boundary is
more appropriate than at the row boundary.
Thanks. But I am not looking for how to avoid the problem at this moment;
I am more interested in why PostgreSQL is designed to work this way.
To make the problem more obvious,
drop table if exists t1;
create table t1 (f1 int);
create unique index uix_t1 on t1(f1) ;
insert into t1(f1) values (1), (2), (3);
On Tue, Jul 19, 2011 at 10:41 PM, wrote:
> On Mon, Jul 18, 2011 at 03:18:43PM -0700, Shianmiin wrote:
>> setup:
>>
>> drop table if exists t1;
>> create table t1 (f1 int);
>> create unique index uix_t1 on t1(f1) ;
>> insert into t1(f1) values (1), (2), (3);
On Mon, Jul 18, 2011 at 03:18:43PM -0700, Shianmiin wrote:
> setup:
>
> drop table if exists t1;
> create table t1 (f1 int);
> create unique index uix_t1 on t1(f1) ;
> insert into t1(f1) values (1), (2), (3);
> select * from t1;
>
> f1
> ---
> 1
> 2
> 3
Shianmiin wrote:
>
> I thought atomicity should be at the statement level. Could anyone tell me
> why PostgreSQL behaves differently?
>
Re-stating for clarification:
Since SQL is a set language, there is no concept of row order. I thought the
check should be done per set operation (i.e., per statement), not per row.
setup:
drop table if exists t1;
create table t1 (f1 int);
create unique index uix_t1 on t1(f1) ;
insert into t1(f1) values (1), (2), (3);
select * from t1;
f1
---
1
2
3
test statement:
update t1 set f1 = f1 + 1;
In PostgreSQL I got:
ERROR: duplicate key value violates unique constraint "uix_t1"
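(One common workaround when the constraint can't be made deferrable -- just a
sketch, untested here -- is to pass through a value range that can't collide
with the existing keys, then flip back:

update t1 set f1 = -(f1 + 1);
update t1 set f1 = -f1;

Each statement only ever writes values disjoint from the rows it hasn't
touched yet, so the per-row check never fires.)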