Re: [GENERAL] trigger Before or After

2014-11-11 Thread Albe Laurenz
avpro avpro wrote:
> in the pgsql documentation
> (http://www.postgresql.org/docs/9.1/static/sql-createtrigger.html)
> 
> 
> i haven't seen anything referring to: how is the data inserted into the
> table affected by a trigger Before Insert compared with a trigger After
> Insert? and anything related to performance

In your example (the trigger updates a second table) it should make
no difference if the trigger is BEFORE or AFTER INSERT.

The difference is that in a BEFORE trigger you can modify the values that
will be inserted before the INSERT actually happens.
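For example, a minimal BEFORE INSERT trigger of this kind might look like the following sketch; the table and column names (mytable, created_at) are invented for illustration:

```sql
-- Hypothetical: stamp a timestamp column before the row is written.
CREATE OR REPLACE FUNCTION set_created_at() RETURNS trigger AS $$
BEGIN
    NEW.created_at := now();  -- a BEFORE trigger may modify NEW
    RETURN NEW;               -- the modified row is what gets inserted
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER mytable_set_created_at
    BEFORE INSERT ON mytable
    FOR EACH ROW EXECUTE PROCEDURE set_created_at();
```

An AFTER trigger cannot change the row this way; returning a modified NEW there has no effect on the stored data.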

> I read somewhere (I don't find the link anymore) that if the trigger is After 
> Insert, the data available in the table LOG might not be available anymore to 
> run the trigger. is that correct? or might I have understood it wrong?

I don't quite understand.
You will have access to the OLD and NEW values in both BEFORE and AFTER 
triggers.
In an AFTER trigger, the table row has already been modified.

> what's the difference related to performance concerning a trigger Before 
> Insert compared with a
> trigger After Insert?

I don't think that there is a big difference, but you can easily test it:
Insert 10 rows with a BEFORE trigger on the table and compare the
time it takes to insert 10 rows with an AFTER trigger.
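In psql, such a test could be sketched like this (the test table and its triggers are hypothetical):

```sql
\timing on

-- with the BEFORE trigger attached:
INSERT INTO mytable (val) SELECT g FROM generate_series(1, 100000) g;

-- then drop it, attach the AFTER variant, and repeat:
INSERT INTO mytable (val) SELECT g FROM generate_series(1, 100000) g;
```

and compare the reported times.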

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] reindex table deadlock

2014-11-11 Thread jaime soler
El vie, 07-11-2014 a las 10:02 -0500, Dan H escribió:
> Hi,
> 
> I encountered a deadlock while running 'reindex table TABLE1' in
> postgresql version 9.2.4
> The postgresql logs shows the two offending processes.
> 
> 1st process was running reindex table TABLE1
> waiting for AccessExclusiveLock on primary key index of TABLE1
> 
> 2nd process was running stored procedure that executes selects and
> deletes to TABLE1
> waiting for RowExclusiveLock on TABLE1.
> 
> Is this the same lock upgrade deadlock issue that someone has
> previously mentioned with reindex?

Why don't you use CREATE INDEX CONCURRENTLY, which avoids the ACCESS
EXCLUSIVE lock and takes a SHARE UPDATE EXCLUSIVE lock instead?
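One way that rebuild could look; all index and table names here are placeholders, DROP INDEX CONCURRENTLY needs 9.2+, and a primary key index additionally needs its constraint swapped over:

```sql
-- ordinary index: build a replacement without blocking writers
CREATE INDEX CONCURRENTLY table1_col_idx_new ON table1 (col);
DROP INDEX CONCURRENTLY table1_col_idx;
ALTER INDEX table1_col_idx_new RENAME TO table1_col_idx;

-- primary key: build the unique index, then attach it as the constraint
CREATE UNIQUE INDEX CONCURRENTLY table1_pkey_new ON table1 (id);
ALTER TABLE table1
    DROP CONSTRAINT table1_pkey,
    ADD CONSTRAINT table1_pkey PRIMARY KEY USING INDEX table1_pkey_new;
```

The ALTER TABLE step still takes a brief ACCESS EXCLUSIVE lock, but only for the catalog swap, not for the whole index build.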


[GENERAL] pg_get_expr() hangs with uncommitted DDL transaction

2014-11-11 Thread Thomas Kellerer
Hello,

I noticed the following behaviour in the JDBC driver:

In one transaction, run an ALTER TABLE ADD COLUMN ... statement with auto-commit 
off, but don't commit the transaction.

In another transaction, call DatabaseMetaData.getColumns() for this table - 
this call will wait until the first transaction commits or rolls back.

I could narrow this down to the usage of pg_catalog.pg_get_expr() in the 
statement that the JDBC driver uses. 

A stripped down version of that statement is this:

SELECT a.attname, .
       pg_catalog.pg_get_expr(def.adbin, def.adrelid) AS adsrc -- << this is the "problem"
FROM pg_catalog.pg_namespace n 
  JOIN pg_catalog.pg_class c ON (c.relnamespace = n.oid) 
  JOIN pg_catalog.pg_attribute a ON (a.attrelid=c.oid) 
  LEFT JOIN pg_catalog.pg_attrdef def ON (a.attrelid=def.adrelid AND 
a.attnum = def.adnum) 
WHERE a.attnum > 0 
 AND NOT a.attisdropped 
 AND c.relname LIKE 'foo'
 AND n.nspname LIKE 'public';

When the call to pg_catalog.pg_get_expr() is removed from this statement, it 
will return even if the ALTER TABLE has not been committed.
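The blocking can be reproduced without JDBC; a hypothetical two-session psql sketch:

```sql
-- session 1: take (and hold) the ACCESS EXCLUSIVE lock
BEGIN;
ALTER TABLE foo ADD COLUMN extra integer;
-- ... no COMMIT yet

-- session 2: this blocks until session 1 commits or rolls back
SELECT pg_catalog.pg_get_expr(def.adbin, def.adrelid)
FROM pg_catalog.pg_attrdef def
WHERE def.adrelid = 'foo'::regclass;
```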

Is there a reason why pg_catalog.pg_get_expr() will wait until the exclusive 
lock on the table is released? 
The value from pg_attrdef.adbin can be selected without any problems, so it 
appears to be something inside the function.

Versions used:

Postgres: PostgreSQL 9.3.5, compiled by Visual C++ build 1600, 64-bit
JDBC Driver: PostgreSQL 9.3 JDBC4 (build 1102)

Regards
Thomas





Re: [GENERAL] trigger Before or After

2014-11-11 Thread Adrian Klaver

On 11/10/2014 10:38 PM, avpro avpro wrote:

hi,

in the pgsql documentation
(http://www.postgresql.org/docs/9.1/static/sql-createtrigger.html)

i haven't seen anything referring to: how is the data inserted into
the table affected by a trigger Before Insert compared with a trigger
After Insert? and anything related to performance


See bottom of above page and here:

http://www.postgresql.org/docs/9.1/static/trigger-definition.html




thank you
have a sunny day




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] pg_get_expr() hangs with uncommitted DDL transaction

2014-11-11 Thread Tom Lane
Thomas Kellerer  writes:
> Is there a reason why pg_catalog.pg_get_expr() will wait until the exclusive 
> lock on the table is released? 

Yes.  It needs to extract attribute names, data types, etc for the target
table and it would also like to be sure that that data is self-consistent.
So it takes out AccessShareLock the same as a query on the table would.
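While the reading session is blocked, the ungranted lock request can be observed from a third session; for example (table name hypothetical):

```sql
-- shows one granted ACCESS EXCLUSIVE lock and one waiting ACCESS SHARE request
SELECT l.pid, l.mode, l.granted
FROM pg_locks l
JOIN pg_class c ON c.oid = l.relation
WHERE c.relname = 'foo';
```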

There have been periodic debates about how it would be nice if this and
related ruleutils.c functions would work based on the calling query's
snapshot instead of trying to provide up-to-date info.  However, short
of a rather massive rewrite (and, probably, a lot of duplicative code)
that's not going to happen :-(

regards, tom lane




Re: [GENERAL] pg_get_expr() hangs with uncommitted DDL transaction

2014-11-11 Thread Thomas Kellerer
Tom Lane schrieb am 11.11.2014 um 16:35:
>> Is there a reason why pg_catalog.pg_get_expr() will wait until the exclusive 
>> lock on the table is released? 
> 
> Yes.  It needs to extract attribute names, data types, etc for the target
> table and it would also like to be sure that that data is self-consistent.
> So it takes out AccessShareLock the same as a query on the table would.

Not sure I understand this. 

Extracting the attribute names and data types from the various pg_catalog tables 
works _without_ the query being blocked. Even when selecting all arguments that 
are passed to pg_get_expr(). 

It's only when adding the function call to an otherwise perfectly working query
that this wait occurs.

Does this mean that pg_get_expr() selects additional data from the table, 
instead of just using the values that are provided by the caller?

Thomas


Re: [GENERAL] pg_get_expr() hangs with uncommitted DDL transaction

2014-11-11 Thread Tom Lane
Thomas Kellerer  writes:
> Does this mean that pg_get_expr() selects additional data from the table, 
> instead of just using the values that are provided by the caller?

No, it means it depends on backend code that is also used when accessing
the table "for real", and that code doesn't work safely without a lock.
(In particular, a lot of what ruleutils.c does ultimately reduces to
catalog cache lookups, and we can *not* allow stale data in those caches.)

In principle we could reimplement pg_get_expr and sibling routines as code
that just looks at the contents of the catalogs as they stood at the time
of the surrounding query's snapshot.  But that's not what they do today,
and getting from point A to point B would be a lot of work.

regards, tom lane




Re: [GENERAL] pg_get_expr() hangs with uncommitted DDL transaction

2014-11-11 Thread Thomas Kellerer
Tom Lane schrieb am 11.11.2014 um 17:08:
>> Does this mean that pg_get_expr() selects additional data from the table, 
>> instead of just using the values that are provided by the caller?
> 
> No, it means it depends on backend code that is also used when accessing
> the table "for real", and that code doesn't work safely without a lock.
> (In particular, a lot of what ruleutils.c does ultimately reduces to
> catalog cache lookups, and we can *not* allow stale data in those caches.)

Ah! Understood. Thanks for the explanation.

Thomas


[GENERAL] Modeling Friendship Relationships

2014-11-11 Thread Robert DiFalco
I have a question about modeling a mutual relationship. It seems basic but
I can't decide; maybe it is six of one, half a dozen of the other.

In my system any user might be friends with another user, that means they
have a reciprocal friend relationship.

It seems I have two choices for modeling it.

1. I have a table with two columns userOne and userTwo. If John is friends
with Jane there will be one row for both of them.
2. I have a table with two columns owner and friend. If John is friends
with Jane there will be two rows, one that is {John, Jane} and another
{Jane, John}.

The first option has the advantage of saving table size. But queries are
more complex because to get John's friends I have to JOIN friends f ON
 f.userA = "John" OR f.userB = "John" (not the real query, these would be
id's but you get the idea).

In the second option the table rows would be 2x but the queries would be
simpler -- JOIN friends f ON f.owner = "John".

There could be >1M users. Each user would have <200 friends.

Thoughts? Do I just choose one or is there a clear winner? TIA!


[GENERAL] Autovacuum on partitioned tables in version 9.1

2014-11-11 Thread Nestor A. Diaz
Hello People,

Before the question, this is the scenario:

I have a postgresql 9.1 cluster with a size of 1.5 TB and composed of 70
databases.

In every database I have 50 tables (master partition), each one have an
associated trigger that insert the record into a child table of its own.

The partition is based on a week period, so every newly created table
is of the form: tablename_wWW

The above configuration works great under postgres version 8.4 and
postgres version 9.1, except for one thing in the case of 9.1:

The autovacuum process on version 9.1 keeps vacuuming the master tables,
and that takes a lot of time considering the master table has no
records of its own.

The trigger itself inserts into the master table, then into the child, and
then removes the record from the master; we do it that way because we need
to get the inserted row info.
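The actual trigger was not posted; this is a rough guess at the pattern described, with invented names (the row is routed to the weekly child while still being returned from the master, and later deleted from the master again):

```sql
-- Hypothetical sketch only: route to tablename_wWW by ISO week.
CREATE OR REPLACE FUNCTION route_to_partition() RETURNS trigger AS $$
BEGIN
    EXECUTE format('INSERT INTO %I SELECT ($1).*',
                   TG_TABLE_NAME || '_w' || to_char(now(), 'IW'))
    USING NEW;
    RETURN NEW;  -- the row also lands in the master, so RETURNING works
END;
$$ LANGUAGE plpgsql;
-- ...and the application (or a second trigger) deletes the master copy,
-- which is what leaves dead tuples behind for autovacuum to clean up.
```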

Actually we don't use any features of version 9.1 that are not available
under version 8.4; however, I don't want to downgrade to version 8.4, as I
consider that I still have not completely understood how the autovacuum
process works, and I need to improve on this.

Earlier in this list a user experienced the same behavior:
http://www.postgresql.org/message-id/flat/cabrmo8revvbbfhy-nxw2aknr+3awdzxepgwkgrnflhvtips...@mail.gmail.com
 
On the same thread another user wrote it could be an issue that is to be
resolved at minor version 9.1.8
http://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=5454344b968d6a189219cfd49af609a3e7d6af33

I currently have  PostgreSQL 9.1.9 on x86_64-unknown-linux-gnu, compiled
by gcc-4.4.real (Debian 4.4.5-8) 4.4.5, 64-bit running on an eight-core
processor and 24 GB RAM with the following options:

maintenance_work_mem = 1GB # pgtune wizard 2014-08-29
checkpoint_completion_target = 0.7 # pgtune wizard 2014-08-29
effective_cache_size = 15GB # pgtune wizard 2014-08-29
work_mem = 40MB # pgtune wizard 2014-08-29
wal_buffers = 4MB # pgtune wizard 2014-08-29
checkpoint_segments = 8 # pgtune wizard 2014-08-29
shared_buffers = 5GB # pgtune wizard 2014-08-29
max_connections = 500 # pgtune wizard 2014-08-29

In order to alleviate the I/O problem I disabled autovacuum on all the
master tables like so:

ALTER TABLE public.tablename SET ( autovacuum_enabled = false,
toast.autovacuum_enabled = false);

But I know I can't leave the database without vacuuming enabled because
of transaction ID wraparound.

So I need to set up a cron script for this; for every master table there
is a new child table every week, so I can start a vacuum process via
cron for the table before the newly created one; these tables are only used
for reading after a week.

But I need some clarification on this:

Why is postgresql starting a vacuum on master tables so often ? Why does it
take so much time on version 9.1 ? I guess because it needs to reclaim
unused space due to the insert/remove process, but in version 8.4 that
is unnoticeable.

How do I know which tables need to be vacuumed ? Any SQL recipe ?

How do I check when I am near the limit of the transaction ID ?

Do I need to vacuum tables that haven't changed in a long time ? 

Has anybody experienced the same behavior and would like to comment on
this ?

Slds.

-- 
Nestor A. Diaz





Re: [GENERAL] Modeling Friendship Relationships

2014-11-11 Thread Rob Sargent

On 11/11/2014 03:38 PM, Robert DiFalco wrote:
I have a question about modeling a mutual relationship. It seems basic 
but I can't decide, maybe it is 6 of one a half dozen of the other.


In my system any user might be friends with another user, that means 
they have a reciprocal friend relationship.


It seems I have two choices for modeling it.

1. I have a table with two columns userOne and userTwo. If John is 
friends with Jane there will be one row for both of them.
2. I have a table with two columns owner and friend. If John is 
friends with Jane there will be two rows, one that is {John, Jane} and 
another {Jane, John}.


The first option has the advantage of saving table size. But queries 
are more complex because to get John's friends I have to JOIN friends 
f ON  f.userA = "John" OR f.userB = "John" (not the real query, these 
would be id's but you get the idea).


In the second option the table rows would be 2x but the queries would 
be simpler -- JOIN friends f ON f.owner = "John".


There could be >1M users. Each user would have <200 friends.

Thoughts? Do I just choose one or is there a clear winner? TIA!
did you consider a table with id-of-friendship/friend columns, unique on the 
pair, where the id-of-friendship groups all members of the friendship? (Easy 
to aggregate to one line per friendship.)





Re: [GENERAL] Modeling Friendship Relationships

2014-11-11 Thread Steve Crawford

On 11/11/2014 02:38 PM, Robert DiFalco wrote:
I have a question about modeling a mutual relationship. It seems basic 
but I can't decide, maybe it is 6 of one a half dozen of the other.


In my system any user might be friends with another user, that means 
they have a reciprocal friend relationship.


It seems I have two choices for modeling it.

1. I have a table with two columns userOne and userTwo. If John is 
friends with Jane there will be one row for both of them.
2. I have a table with two columns owner and friend. If John is 
friends with Jane there will be two rows, one that is {John, Jane} and 
another {Jane, John}.


The first option has the advantage of saving table size. But queries 
are more complex because to get John's friends I have to JOIN friends 
f ON  f.userA = "John" OR f.userB = "John" (not the real query, these 
would be id's but you get the idea).


In the second option the table rows would be 2x but the queries would 
be simpler -- JOIN friends f ON f.owner = "John".


There could be >1M users. Each user would have <200 friends.

Thoughts? Do I just choose one or is there a clear winner? TIA!


What you are describing is basically an adjacency-list without any 
hierarchy information, i.e. there isn't a John reports to Dick reports 
to Jane type of tree.


One-million-users at 200 friends each would (order-of-magnitudeish) be 
200-million rows which tends to argue for saving space. It also reduces 
the number of rows impacted by deletes and avoids the risk of ending up 
with John,Jane without a corresponding Jane,John.


Getting John's friends isn't too complicated but I suspect the form of 
the query you gave won't lead to optimal query plans. For a two-column 
format I would imagine that ...userB as friend where userA='John' union 
userA as friend where userB='John'... would yield a more optimal plan 
assuming an index on each column.


I'm guessing that you will also want to study common-table-expressions 
and recursive queries (description and examples are available in the 
PostgreSQL docs) so you can start to write single queries to answer 
things like "list everyone who is a friend of a friend of John."


Cheers,
Steve





Re: [GENERAL] repmgr

2014-11-11 Thread Robin Ranjit Singh Chauhan
I set up repmgr and it's working.  I confirmed that using:

select * from pg_stat_replication;
select pg_is_in_recovery();

However, on both master and slave there is only one entry
in repmgr_cane.repl_nodes : the master.

Is that expected?



On Mon, Nov 10, 2014 at 4:29 PM, John R Pierce  wrote:

> On 11/10/2014 4:10 PM, Robin Ranjit Singh Chauhan wrote:
>
>> I haven't been able to find much about repmgr on postgres 9.3
>>
>> Is repmgr still a significant value add given the newer replication
>> features built in?
>>
>
> repmgr is a management tool for setting up and controlling the built in
> replication features.
>
>
>
> --
> john r pierce  37N 122W
> somewhere on the middle of the left coast
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>



-- 
Robin R Chauhan
CEO, Pathway Intelligence Inc
ro...@pathwayi.com
Office: 778-588-6217 Ext. 201
Cell: 604-865-0517
http://pathwayi.com/


Re: [GENERAL] Autovacuum on partitioned tables in version 9.1

2014-11-11 Thread David G Johnston
TL;DR - the interaction of ANALYZE and inheritance hierarchies seems to be
broken for the uncommon use case where the inserts temporarily remain on the
master table in order to allow RETURNING to work.

Note - I have not played with this scenario personally but 

http://www.postgresql.org/message-id/flat/cabrmo8revvbbfhy-nxw2aknr+3awdzxepgwkgrnflhvtips...@mail.gmail.com#cabrmo8revvbbfhy-nxw2aknr+3awdzxepgwkgrnflhvtips...@mail.gmail.com
 

combined with this post leads me to that conclusion.


Nestor A. Diaz wrote
> The autovacuum process on version 9.1 keeps vacuuming the master tables
> and that takes a lot of time considering the master table have no
> records of its own.
> 
> The trigger itself insert into the master table, then into the child and
> then remove the record from the master, we do that way because we need
> to get the inserted row info.

If you say so...but the second paragraph makes your conclusion in the first
paragraph false.

You may wish to read up on Multi-Version Concurrency Control (MVCC)


> Actually we don't use any features of version 9.1 that are not available
> under version 8.4, however I don't want to downgrade to version 8.4 as I
> consider that I still have not understood completely how auto vacuuming
> process works, and I need to improve on this.

Well, you are actually using one right now - community support :)


> Earlier in this list a user experienced the same behavior:
> http://www.postgresql.org/message-id/flat/

> CABrmO8rEvvbBfhY-NxW2AkNr+3aWdzXEPgWkgrNFLHvTipSHyw@.gmail

>  
> On the same thread another user wrote it could be an issue that is to be
> resolved at minor version 9.1.8
> http://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=5454344b968d6a189219cfd49af609a3e7d6af33
> 
> I currently have  PostgreSQL 9.1.9 on x86_64-unknown-linux-gnu, compiled
> by gcc-4.4.real (Debian 4.4.5-8) 4.4.5, 64-bit running on an eight-core
> processor and 24 GB RAM with the following options:

Given that 9.1.14 is current, being on 9.1.9 is going to be a problem.

The referenced thread never came to a conclusion - the OP never affirmed that
the patch fixed their problem - but the patch mentioned affected vacuum, while
the preceding conclusion in the post was that ANALYZE was the probable
culprit - specifically, that an ANALYZE on the master table cascaded to all
of the children and so took however long it takes to analyze the entire
partition hierarchy.


> In order to alleviate the I/O problem I disable autovacuum on all the
> master tables like so:
> 
> ALTER TABLE public.tablename SET ( autovacuum_enabled = false,
> toast.autovacuum_enabled = false);
> 
> But I know I can't leave the database without vacuuming enabled because
> of transaction ID wraparound.

If things get to a point where this is required it will run regardless of
your configuration.


> So I need to set up a cron script for this; for every master table there
> is a new child table every week then I can start a vacuum process via
> cron for the table before the newly created, these tables are only used
> for reading after a week.
> 
> But I need some clarification on this:
> 
> Why is postgresql starting a vacuum on master tables so often ? Why does it
> take so much time on version 9.1 ? I guess because it needs to reclaim
> unused space due to the insert/remove process, but in version 8.4 that
> is unnoticeable.

I'd be more concerned with the difference between 8.4 and 9.1, but if you are
indeed physically inserting into and then deleting from the master table, you
need some kind of vacuum if you want to reclaim that wasted space.

As noted above, the ANALYZE is a probable culprit here - and its interaction
with inheritance seems under-documented and incompletely implemented.  I
think this would be more obvious, but apparently most people do not write
their trigger sets to leave the inserted record in the master table (so as
not to break RETURNING) and then delete the record shortly thereafter. 
Someone from -hackers needs to comment on this use case and whether
something can and should be done to accommodate it.


> How do I know which tables need to be vacuumed ? Any SQL recipe ?
> 
> How do I check when I am near the limit of the transaction ID ?
> Do I need to vacuum tables that haven't changed in a long time ?

If a vacuum freeze has been run on a table then, in the absence of subsequent
updates, it will not require vacuuming.
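A couple of hedged catalog queries that address the questions above (standard statistics views, nothing installation-specific):

```sql
-- tables with the most dead rows, i.e. likely VACUUM candidates:
SELECT relname, n_dead_tup, last_vacuum, last_autovacuum
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 20;

-- distance to transaction ID wraparound, per database
-- (autovacuum forces freezing long before age reaches ~2 billion):
SELECT datname, age(datfrozenxid) AS xid_age
FROM pg_database
ORDER BY xid_age DESC;
```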
 

> Anybody have experienced the same behavior and would like to comment on
> this ?

David J.




--
View this message in context: 
http://postgresql.nabble.com/Autovacuum-on-partitioned-tables-in-version-9-1-tp5826595p5826603.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




Re: [GENERAL] Modeling Friendship Relationships

2014-11-11 Thread Bill Moran
On Tue, 11 Nov 2014 14:38:16 -0800
Robert DiFalco  wrote:

> I have a question about modeling a mutual relationship. It seems basic but
> I can't decide, maybe it is 6 of one a half dozen of the other.
> 
> In my system any user might be friends with another user, that means they
> have a reciprocal friend relationship.
> 
> It seems I have two choices for modeling it.
> 
> 1. I have a table with two columns userOne and userTwo. If John is friends
> with Jane there will be one row for both of them.
> 2. I have a table with two columns owner and friend. If John is friends
> with Jane there will be two rows, one that is {John, Jane} and another
> {Jane, John}.
> 
> The first option has the advantage of saving table size. But queries are
> more complex because to get John's friends I have to JOIN friends f ON
>  f.userA = "John" OR f.userB = "John" (not the real query, these would be
> id's but you get the idea).
> 
> In the second option the table rows would be 2x but the queries would be
> simpler -- JOIN friends f ON f.owner = "John".
> 
> There could be >1M users. Each user would have <200 friends.
> 
> Thoughts? Do I just choose one or is there a clear winner? TIA!

I recommend a single row per relationship, because your estimates suggest that
the size might be big enough to be worth optimizing on the basis of size.

As far as optimizing queries and what not, I did this recently, and here's what
worked well for me.

Take this example table definition:

CREATE TABLE friendship (
 person1 INT NOT NULL,
 person2 INT NOT NULL,
 PRIMARY KEY (person1, person2),
 CHECK (person1 < person2)
);
CREATE INDEX friendship_person2 ON friendship(person2);

The check constraint guarantees the data will always be stored in a certain
order, which allows you to optimize many queries (i.e., when figuring out
whether person 57 and person 86 are friends, the WHERE clause is simplified
because you know that person1 can't be 86).

If you'll need to do queries of the "list all person 57's friends" variety,
then the queries are still pretty simple, but you could create a stored
procedure to make them even easier on the part of application developers.
It's basically "WHERE person1 = 57 or person2 = 57" which will be able to
use the indexes to provide quick results.  Or something more like:

SELECT person1 AS friend FROM friendship WHERE person2 = 57
UNION
SELECT person2 AS friend FROM friendship WHERE person1 = 57;

A view should work very well:
CREATE VIEW friendship_view AS
SELECT person1 AS person, person2 AS friend FROM friendship
UNION
SELECT person2 AS person, person1 AS friend FROM friendship;

That should be a very performant view when a WHERE clause on person is
specified.

Those types of queries weren't a requirement in the implementation I did,
as the code only ever asked "is person x a friend of person y" and never
wanted the entire list.

-- 
Bill Moran
I need your help to succeed:
http://gamesbybill.com




Re: [GENERAL] Modeling Friendship Relationships

2014-11-11 Thread John R Pierce
a difficulty of the single entry for joe<->bob is that it's hard to have 
a unique constraint across two fields: you'd want to ensure that 
joe+bob is unique and that there's no bob+joe




--
john r pierce  37N 122W
somewhere on the middle of the left coast





Re: [GENERAL] Modeling Friendship Relationships

2014-11-11 Thread David G Johnston
John R Pierce wrote
> a difficulty of the single entry for joe<->bob  is that its hard to have 
> a unique constraint across two fields. you'd want to ensure that 
> joe+bob is unique and that there's no bob+joe

Bill's solution:

 PRIMARY KEY (person1, person2), 
 CHECK (person1 < person2) 

seems to make this constraint fairly simple...am I missing something?

Usage needs permitting, how difficult would it be to set up materialized views
that maintain arrays of "friend_of" and "friends_are", and simply unnest those
arrays if record-oriented access to the contained ids is required?  More
basically, how performant (generally) are ~200-element arrays?
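One possible shape for that, assuming the two-row-per-pair friendship_view from the earlier reply; MATERIALIZED VIEW needs 9.3+, and on older versions a regular table maintained by a trigger would play the same role:

```sql
-- one row per person, friends collapsed into an array
CREATE MATERIALIZED VIEW friends_agg AS
SELECT person, array_agg(friend) AS friends
FROM friendship_view
GROUP BY person;

-- back to record-oriented access when needed:
SELECT unnest(friends) AS friend
FROM friends_agg
WHERE person = 57;
```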

David J.




--
View this message in context: 
http://postgresql.nabble.com/Modeling-Friendship-Relationships-tp5826592p5826608.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




Re: [GENERAL] Modeling Friendship Relationships

2014-11-11 Thread John R Pierce

On 11/11/2014 5:32 PM, David G Johnston wrote:

Bill's solution:

  PRIMARY KEY (person1, person2),
  CHECK (person1 < person2)

seems to make this constraint fairly simple...am I missing something?


oh, I guess I missed that part.   of course, you'll have to make sure 
you swap any relation into (lesser, greater) order before inserting into this 
table, but such is life.    I suppose an insert trigger could force this.





--
john r pierce  37N 122W
somewhere on the middle of the left coast


