Re: How to implement expiration in PostgreSQL?

2021-04-01 Thread Tim Clarke
On 01/04/2021 02:51, Glen Huang wrote:
> Hi,
>
> I guess this question has been asked a million times, but all solutions I can 
> find online don’t really work well for my case. I’ll list them here and hope 
> someone can shed some light.
>
> My use case is to implement joining clubs that require entrance fee:
>
> 1. Each club only allows a maximum number of members.
> 2. When a person decides to join a club, the seat is reserved for a limited 
> amount of time. If that person fails to pay within that period, the seat will 
> be open again
>
> I want to write a query that can quickly list all clubs that still have open 
> seats and #2 is where I want expiration to happen.
>
> The solutions I find so far:
>
> 1. Exclude closed clubs in queries and periodically delete expired members
>
> I can’t come up with a query that can accomplish this in an efficient way.
>
> WITH seated_member AS (
> SELECT
> club_id,
> count(member_id) AS num_seated_member
> FROM member
> WHERE paid OR join_time > now() - '1h'::interval
> GROUP BY club_id
> ),
> open_member AS (
> SELECT
> club_id,
> max_num_member - coalesce(num_seated_member, 0) AS num_open_member
> FROM club LEFT JOIN seated_member USING(club_id)
> )
> SELECT club_id AS open_club
> FROM open_member
> WHERE num_open_member > 0
>
> This requires going through all seated members, which can potentially be 
> large and takes a long time.
>
> I can of course add a num_open_member column to the club table and index it, 
> but the problem becomes how to automatically update it when a member expires, 
> which takes us back to square one.
>
> All following solutions assume I add this column and seek to find a way to 
> update it automatically.
>
> 2. Run a cron job
>
> This won’t work because the number is updated only after the cron job is run, 
> which only happens at intervals.
>
> 3. Update the column before running any related queries
>
> This requires I execute DELETE and UPDATE queries before all seat related 
> queries. It’s hard to manage and seems to slow down all such queries.
>
> 4. pg_cron
>
> My environment wouldn’t allow me to install 3rd-party extensions, but even if 
> I could, it seems pg_cron runs cron jobs sequentially. I’m not sure it works 
> well when I need to add a cron job for each newly joined member.
>
> —
>
> I’m not aware of any other solutions. But the problem seems banal, and I 
> believe it has been solved for a long time. I would really appreciate it if 
> someone could at least point me in the right direction.
>
> Regards,
> Glen
>

Possibly keep your count of members updated via a trigger?
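A minimal sketch of such a trigger, reusing the member table and adding the num_open_member counter on club discussed earlier in the thread (all names are the original poster's hypothetical schema):

```sql
-- Keep club.num_open_member in sync as members join and leave.
CREATE OR REPLACE FUNCTION adjust_open_seats() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        UPDATE club SET num_open_member = num_open_member - 1
         WHERE club_id = NEW.club_id;
    ELSIF TG_OP = 'DELETE' THEN
        UPDATE club SET num_open_member = num_open_member + 1
         WHERE club_id = OLD.club_id;
    END IF;
    RETURN NULL;  -- return value is ignored for AFTER row triggers
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER member_seats
    AFTER INSERT OR DELETE ON member
    FOR EACH ROW EXECUTE FUNCTION adjust_open_seats();
```

Note this only fires on writes; it cannot fire by itself when a reservation silently times out.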

Tim Clarke


Telephone: Witham: +44(0)1376 503500 | London: +44 (0)20 3009 0853 | Frankfurt: 
+49 (0)69 7191 6000 | Hong Kong: +852 5803 1687 | Toronto: +1 647 503 2848
Web: https://www.manifest.co.uk/



Minerva Analytics Ltd - A Solactive Company
9 Freebournes Court | Newland Street | Witham | Essex | CM8 2BL | United Kingdom



Copyright: This e-mail may contain confidential or legally privileged 
information. If you are not the named addressee you must not use or disclose 
such information, instead please report it to 
ad...@minerva.info
Legal: Minerva Analytics is the trading name of: Minerva Analytics Ltd: 
Registered in England Number 11260966 & The Manifest Voting Agency Ltd: 
Registered in England Number 2920820 Registered Office at above address. Please 
Click Here https://www.manifest.co.uk/legal/ for further information.


Re: accessing cross-schema materialized views

2021-04-01 Thread Tim Clarke
On 31/03/2021 23:42, Tom Lane wrote:
> The owner of the schema2.usingview is the one who must have
> privilege to read the underlying schema1.matview.  Our
> permissions messages are, I fear, frequently not very good
> about saying whose privileges were checked.
>
> regards, tom lane


That nailed it Tom, thanks.

Tim Clarke MBCS
IT Director
Direct: +44 (0)1376 504510 | Mobile: +44 (0)7887 563420




concurrent creation of sequences

2021-04-01 Thread Marc-Olaf Jaschke
Hi,

 
I have a use case where I need to create sequences concurrently. I use 
sequences instead of a table with counters to keep lock contention for 
incrementing the id low. The creation of new sequences rarely occurs during 
normal application operation.

I check via information_schema.sequences whether a sequence already exists, and
I catch the errors that occur under concurrent access when the corresponding
sequence was created in the meantime.
 
I wrote a stress test that simulates the behavior with heavy concurrency. I get 
the following three different error codes:
42P07
42710
23505

I would have expected only 42710 (duplicate_object).
 
Is this behavior intentional?
Are there possibly other error codes that the stress test did not uncover and 
that I should consider?
 
"create sequence if not exists ..." does not prevent errors with concurrency.
 
 
Tested with "PostgreSQL 12.6 on x86_64-apple-darwin16.7.0, compiled by Apple 
LLVM version 8.1.0 (clang-802.0.42), 64-bit"
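One way to avoid catching those errors entirely, not tried above, is to serialize the creation itself with an advisory lock (a sketch; the sequence name is hypothetical):

```sql
-- Only one session at a time can hold the lock for this key, so the
-- existence check and the CREATE become atomic with respect to other
-- sessions that take the same lock before creating.
BEGIN;
SELECT pg_advisory_xact_lock(hashtext('counter_seq'));
CREATE SEQUENCE IF NOT EXISTS counter_seq;
COMMIT;  -- the transaction-scoped advisory lock is released here
```

Since sequence creation is rare in normal operation, the serialization cost should be negligible.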
 
 
Best regards,
Marc-Olaf Jaschke



Re: Looking for some help with HA / Log Log-Shipping

2021-04-01 Thread Adalberto Caccia
Hi,
PostgreSQL 12 changed the position of the replication parameters; they are now
all hosted in the main postgresql.conf file.
So on the standby node, instead of a recovery.conf file, just an empty
standby.signal file is needed in the $PGDATA directory to start PostgreSQL
as a standby replica on that node.

However, the official documentation you've mentioned (section 26.2,
https://www.postgresql.org/docs/12/warm-standby.html) really advises that
we set up log shipping to better support the standby recovery process, and
make sure we can easily recover from any transient failure.
I'm a very happy user of wal-g for this; I'm currently running 4 different
PostgreSQL clusters, some pg10 and some pg12, and wal-g works well for all of
them. I run them in Kubernetes, so I configure wal-g directly via the
environment of the PostgreSQL containers, making sure wal-g is installed in
my custom PostgreSQL image, of course. Then it is configured just like this:

> archive_mode = always
>
> archive_command = 'wal-g wal-push %p'
>
> archive_timeout = 60
>
> restore_command = 'wal-g wal-fetch %f %p'
>
>
Adalberto



Il giorno mer 31 mar 2021 alle ore 17:39 Laurent FAILLIE <
l_fail...@yahoo.com> ha scritto:

> Replying to myself :)
>
> It seems pg_basebackup did all the tricks, even restarting the slave. And
> it is in standby mode.
> Do I have anything to do in addition ?
>
> Thanks
>
>
> Le mercredi 31 mars 2021 à 12:51:29 UTC+2, Laurent FAILLIE <
> l_fail...@yahoo.com> a écrit :
>
>
> Hello,
>
> We are running Postgresql 12 and I'm trying to put in place streaming wal
> replication.
>
> I followed the official documentation (
> https://www.postgresql.org/docs/12/warm-standby.html ) as well as this
> tutorial :
>
> https://scalegrid.io/blog/managing-high-availability-in-postgresql-part-1/
>
> What I did is :
>
> * created the master db
> * created the replication role
> * updated postgresql.conf
> * finally replicated everything on the slave node using pg_basebackup (with
> the -R option)
>
> But now I'm lost with §26.2.4 Setting up a standby server.
>
> In which file do I have to put the
>
> primary_conninfo
>
> and other options ?
>
> Thanks
>
> Laurent
>
> ps: my goal is to install the PAF
>


Re: How to implement expiration in PostgreSQL?

2021-04-01 Thread Glen Huang
> Possibly keep your count of members updated via a trigger?

But how do I fire the trigger when the count of members should be updated by a 
timeout, i.e., when the person’s payment is due?



Re: How to implement expiration in PostgreSQL?

2021-04-01 Thread Tim Clarke
On 01/04/2021 14:28, Glen Huang wrote:
>> Possibly keep your count of members updated via a trigger?
> But how do I fire the trigger when the count of members should be updated by a 
> timeout, i.e., when the person’s payment is due?


I'd run a cron job that triggers a function call which would make the
necessary expiry tests and set the status accordingly. Maybe run the
cron once an hour or once a day depending on the granularity of your needs?
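As a sketch, that periodic expiry job can be a single statement (column names borrowed from the query in the first post):

```sql
-- Drop reservations that were never paid for within the one-hour hold.
DELETE FROM member
 WHERE NOT paid
   AND join_time <= now() - interval '1 hour';
```

If a counter column is maintained by a trigger, this DELETE would update it as a side effect.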

Tim Clarke MBCS
IT Director
Direct: +44 (0)1376 504510 | Mobile: +44 (0)7887 563420




Re: How to implement expiration in PostgreSQL?

2021-04-01 Thread Glen Huang
Good suggestion, thanks.

I ended up running the cron jobs in my app, one for each newly joined member, 
to get the smallest granularity possible.

> On Apr 1, 2021, at 9:43 PM, Tim Clarke  wrote:
> 
> On 01/04/2021 14:28, Glen Huang wrote:
>>> Possibly keep your count of members updated via a trigger?
>> But how to evoke the trigger when the count of members should be updated by 
>> a timeout, i.e., the person’s pay is due?
> 
> 
> I'd run a cron job that triggers a function call which would make the
> necessary expiry tests and set the status accordingly. Maybe run the
> cron once an hour or once a day depending on the granularity of your needs?
> 
> Tim Clarke MBCS
> IT Director
> Direct: +44 (0)1376 504510 | Mobile: +44 (0)7887 563420
> 
> 





Re: How to implement expiration in PostgreSQL?

2021-04-01 Thread Tim Clarke
On 01/04/2021 14:47, Glen Huang wrote:
> Good suggestion, thanks.
>
> I ended up running the cron jobs in my app, one for each newly joined member, 
> to get the smallest granularity possible.


(This list bottom-posts by convention)

I'd say that was onerous, and you could get the same effect with a
well-crafted query that targeted only those that might possibly expire.
Then you'd have only one cron job to manage.
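A partial index is one way to keep such a targeted job cheap, since the periodic query then only ever touches unpaid reservations (schema names as in the first post):

```sql
-- The expiry scan's predicate matches this index exactly, so it reads
-- only the (presumably small) unpaid subset of member rows.
CREATE INDEX member_unpaid_join_time_idx
    ON member (join_time)
 WHERE NOT paid;
```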

Tim Clarke MBCS
IT Director
Direct: +44 (0)1376 504510 | Mobile: +44 (0)7887 563420




Re: How to implement expiration in PostgreSQL?

2021-04-01 Thread Glen Huang
> I'd say that was onerous and you could get the same effect with a
well-crafted query that targeted only those that might possibly expire.

I wish one cron job could rule them all, but since a person can decide to join 
at any time, her expiration (e.g., after 1 hour) can also happen at any time. 
So one cron job won’t cut it if a member’s expiration has to be accurate.



Re: How to implement expiration in PostgreSQL?

2021-04-01 Thread Tim Clarke

On 01/04/2021 15:23, Glen Huang wrote:
> I wish one cron job could rule them all, but since a person can decide to 
> join at any time, her expiration (e.g., after 1 hour) can also happen at any 
> time. So one cron job won’t cut it if a member’s expiration has to be 
> accurate.


One cron job running every 5 minutes should do?


Tim Clarke MBCS
IT Director
Direct: +44 (0)1376 504510 | Mobile: +44 (0)7887 563420




Re: How to implement expiration in PostgreSQL?

2021-04-01 Thread Glen Huang
Hmm, maybe letting people linger on for a couple more minutes is acceptable. 
And it simplifies the code a lot.

Good idea, thanks.

> On Apr 1, 2021, at 10:31 PM, Tim Clarke  wrote:
> 
> 
> On 01/04/2021 15:23, Glen Huang wrote:
>> I wish one cron job could rule them all, but since a person can decide to 
>> join at any time, her expiration (e.g., after 1 hour) can also happen at any 
>> time. So one cron job won’t cut it if a member’s expiration has to be 
>> accurate.
> 
> 
> One cron job running every 5 minutes should do?
> 
> 
> Tim Clarke MBCS
> IT Director
> Direct: +44 (0)1376 504510 | Mobile: +44 (0)7887 563420
> 
> 





Is replacing transactions with CTE a good idea?

2021-04-01 Thread Glen Huang
Hi all,

From an application’s standpoint, it seems using CTEs saves a lot of work. You 
no longer need to parse values out only to pass them back in, and there is only 
one round-trip to the db server.

If I’m not wrong, a CTE is equivalent to a serializable transaction? So I guess 
the downside is that queries can’t be run in parallel?

If I decide to replace all my transaction code with CTEs, will I shoot myself in 
the foot down the road?



Re: Is replacing transactions with CTE a good idea?

2021-04-01 Thread Brian Dunavant
On Thu, Apr 1, 2021 at 10:49 AM Glen Huang  wrote:

> If I decide to replace all my transaction code with CTE, will I shoot
> myself in the foot down the road?
>

I do this all the time and it makes code way cleaner.  It's very
straightforward with insert queries.  When you deal with updates/deletes,
things can be trickier.  I usually leave these in a transaction if there is
any concern.

They can also be hard for future programmers that may not understand SQL.
Make sure you comment your queries for maintainability long term.

I have yet to regret replacing a transaction with a CTE over the past
decade.  YMMV
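A typical example of the pattern under discussion: two inserts that would otherwise need a transaction and an extra round-trip, collapsed into one statement (tables are hypothetical):

```sql
-- The child row reuses the generated key without a second round-trip
-- to the application; both inserts succeed or fail together.
WITH new_order AS (
    INSERT INTO orders (customer_id)
    VALUES (42)
    RETURNING order_id
)
INSERT INTO order_items (order_id, sku)
SELECT order_id, 'ABC-1'
  FROM new_order;
```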


Re: Is replacing transactions with CTE a good idea?

2021-04-01 Thread Dave Cramer
On Thu, 1 Apr 2021 at 10:50, Glen Huang  wrote:

> Hi all,
>
> From an application’s standpoint, it seems using CTEs saves a lot of work. You
> no longer need to parse values out only to pass them back in, and there is
> only one round-trip to the db server.
>
> If I’m not wrong, a CTE is equivalent to a serializable transaction? So I
> guess the downside is that queries can’t be run in parallel?
>

I do not think a CTE changes the isolation level.

>
> If I decide to replace all my transaction code with CTE, will I shoot
> myself in the foot down the road?
>


Dave Cramer
www.postgres.rocks


Re: Is replacing transactions with CTE a good idea?

2021-04-01 Thread Rob Sargent

On 4/1/21 8:58 AM, Brian Dunavant wrote:
On Thu, Apr 1, 2021 at 10:49 AM Glen Huang > wrote:


If I decide to replace all my transaction code with CTE, will I
shoot myself in the foot down the road?


I do this all the time and makes code way cleaner.   It's very 
straightforward with inserts queries.  When you deal with 
updates/deletes, things can be trickier.  I usually leave these in a 
transaction if there is any concern.


They can also be hard for future programmers that may not understand 
SQL.  Make sure you comment your queries for maintainability long term.


I have yet to regret replacing a transaction with a CTE over the past 
decade.  YMMV
This must assume auto-commit mode where every single statement is 
"committed"?


Re: Is replacing transactions with CTE a good idea?

2021-04-01 Thread Glen Huang
> When you deal with updates/deletes, things can be trickier

Care to expand on why they are trickier? I presume they run the risk of being 
referenced more than once?

> On Apr 1, 2021, at 10:58 PM, Brian Dunavant  wrote:
> 
> 
>> On Thu, Apr 1, 2021 at 10:49 AM Glen Huang  wrote:
>> If I decide to replace all my transaction code with CTE, will I shoot myself 
>> in the foot down the road?
> 
> I do this all the time and makes code way cleaner.   It's very 
> straightforward with inserts queries.  When you deal with updates/deletes, 
> things can be trickier.  I usually leave these in a transaction if there is 
> any concern.
> 
> They can also be hard for future programmers that may not understand SQL.  
> Make sure you comment your queries for maintainability long term.  
> 
> I have yet to regret replacing a transaction with a CTE over the past decade. 
>  YMMV
>  


Re: Is replacing transactions with CTE a good idea?

2021-04-01 Thread Glen Huang
No, but are they equivalent to serializable transactions?

> On Apr 1, 2021, at 11:04 PM, Dave Cramer  wrote:
> 
> 
> 
> 
> 
>> On Thu, 1 Apr 2021 at 10:50, Glen Huang  wrote:
>> Hi all,
>> 
>> From application’s standpoint, it seems using CTE saves a lot work. You no 
>> longer need to parse values out only to pass them back in, and only one 
>> round-trip to the db server.
>> 
>> If I’m not wrong, CTE is equivalent to serializable transactions? So I guess 
>> the downsize is that quarries can’t be run in parallel?
> 
> I do not think a CTE changes the isolation level. 
>> 
>> If I decide to replace all my transaction code with CTE, will I shoot myself 
>> in the foot down the road?
> 
> 
> Dave Cramer
> www.postgres.rocks 


Re: Is replacing transactions with CTE a good idea?

2021-04-01 Thread Dave Cramer
On Thu, 1 Apr 2021 at 11:09, Glen Huang  wrote:

> No, but are they equivalent to serializable transactions?
>

No, they are not.



Dave Cramer
www.postgres.rocks

>
> On Apr 1, 2021, at 11:04 PM, Dave Cramer 
> wrote:
>
> 
>
>
>
> On Thu, 1 Apr 2021 at 10:50, Glen Huang  wrote:
>
>> Hi all,
>>
>> From application’s standpoint, it seems using CTE saves a lot work. You
>> no longer need to parse values out only to pass them back in, and only one
>> round-trip to the db server.
>>
>> If I’m not wrong, CTE is equivalent to serializable transactions? So I
>> guess the downsize is that quarries can’t be run in parallel?
>>
>
> I do not think a CTE changes the isolation level.
>
>>
>> If I decide to replace all my transaction code with CTE, will I shoot
>> myself in the foot down the road?
>>
>
>
> Dave Cramer
> www.postgres.rocks
>
>


Re: Is replacing transactions with CTE a good idea?

2021-04-01 Thread Glen Huang
Sorry, my mistake. I misunderstood serializable. Are queries in a CTE 
equivalent to those in a repeatable read transaction?

> On Apr 1, 2021, at 11:10 PM, Dave Cramer  wrote:
> 
> 
> 
> 
>> On Thu, 1 Apr 2021 at 11:09, Glen Huang  wrote:
>> No, but are they equivalent to serializable transactions?
> 
> No, they are not. 
> 
> 
> 
> Dave Cramer
> www.postgres.rocks
>> 
 On Apr 1, 2021, at 11:04 PM, Dave Cramer  wrote:
 
>>> 
>>> 
>>> 
>>> 
 On Thu, 1 Apr 2021 at 10:50, Glen Huang  wrote:
 Hi all,
 
 From application’s standpoint, it seems using CTE saves a lot work. You no 
 longer need to parse values out only to pass them back in, and only one 
 round-trip to the db server.
 
 If I’m not wrong, CTE is equivalent to serializable transactions? So I 
 guess the downsize is that quarries can’t be run in parallel?
>>> 
>>> I do not think a CTE changes the isolation level. 
 
 If I decide to replace all my transaction code with CTE, will I shoot 
 myself in the foot down the road?
>>> 
>>> 
>>> Dave Cramer
>>> www.postgres.rocks 


Re: Is replacing transactions with CTE a good idea?

2021-04-01 Thread Dave Cramer
CTE's don't change the isolation level. I'm not sure what you are getting
at here ?

Dave Cramer
www.postgres.rocks


On Thu, 1 Apr 2021 at 11:20, Glen Huang  wrote:

> Sorry, my mistake. I misunderstood serializable. Are queries in a CTE
> equivalent to those in a repeatable read transaction?
>
> On Apr 1, 2021, at 11:10 PM, Dave Cramer 
> wrote:
>
> 
>
>
> On Thu, 1 Apr 2021 at 11:09, Glen Huang  wrote:
>
>> No, but are they equivalent to serializable transactions?
>>
>
> No, they are not.
>
>
>
> Dave Cramer
> www.postgres.rocks
>
>>
>> On Apr 1, 2021, at 11:04 PM, Dave Cramer 
>> wrote:
>>
>> 
>>
>>
>>
>> On Thu, 1 Apr 2021 at 10:50, Glen Huang  wrote:
>>
>>> Hi all,
>>>
>>> From application’s standpoint, it seems using CTE saves a lot work. You
>>> no longer need to parse values out only to pass them back in, and only one
>>> round-trip to the db server.
>>>
>>> If I’m not wrong, CTE is equivalent to serializable transactions? So I
>>> guess the downsize is that quarries can’t be run in parallel?
>>>
>>
>> I do not think a CTE changes the isolation level.
>>
>>>
>>> If I decide to replace all my transaction code with CTE, will I shoot
>>> myself in the foot down the road?
>>>
>>
>>
>> Dave Cramer
>> www.postgres.rocks
>>
>>


Re: Is replacing transactions with CTE a good idea?

2021-04-01 Thread Glen Huang
Ah, I see what you mean. You still have to wrap a CTE inside a transaction to 
specify the isolation level? By default, queries in a CTE run at the read 
committed isolation level?
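In other words, something like this would be needed to run the CTE at a higher isolation level (tables hypothetical):

```sql
-- The data-modifying CTE is one statement; it runs at whatever
-- isolation level the enclosing transaction declares.
BEGIN ISOLATION LEVEL REPEATABLE READ;
WITH moved AS (
    DELETE FROM queue WHERE id = 1
    RETURNING id, payload
)
INSERT INTO archive (id, payload)
SELECT id, payload FROM moved;
COMMIT;
```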

> On Apr 1, 2021, at 11:10 PM, Dave Cramer  wrote:
> 
> 
> 
> 
>> On Thu, 1 Apr 2021 at 11:09, Glen Huang  wrote:
>> No, but are they equivalent to serializable transactions?
> 
> No, they are not. 
> 
> 
> 
> Dave Cramer
> www.postgres.rocks
>> 
 On Apr 1, 2021, at 11:04 PM, Dave Cramer  wrote:
 
>>> 
>>> 
>>> 
>>> 
 On Thu, 1 Apr 2021 at 10:50, Glen Huang  wrote:
 Hi all,
 
 From application’s standpoint, it seems using CTE saves a lot work. You no 
 longer need to parse values out only to pass them back in, and only one 
 round-trip to the db server.
 
 If I’m not wrong, CTE is equivalent to serializable transactions? So I 
 guess the downsize is that quarries can’t be run in parallel?
>>> 
>>> I do not think a CTE changes the isolation level. 
 
 If I decide to replace all my transaction code with CTE, will I shoot 
 myself in the foot down the road?
>>> 
>>> 
>>> Dave Cramer
>>> www.postgres.rocks 


Re: Is replacing transactions with CTE a good idea?

2021-04-01 Thread Glen Huang
I had the impression that since they are chained together, somehow they run 
“tighter” 😂.

Thanks for pointing out that mistake.

> On Apr 1, 2021, at 11:25 PM, Dave Cramer  wrote:
> 
> 
> CTE's don't change the isolation level. I'm not sure what you are getting at 
> here ?
> 
> Dave Cramer
> www.postgres.rocks
> 
> 
>> On Thu, 1 Apr 2021 at 11:20, Glen Huang  wrote:
>> Sorry, my mistake. I misunderstood serializable. Are queries in a CTE 
>> equivalent to those in a repeatable read transaction?
>> 
 On Apr 1, 2021, at 11:10 PM, Dave Cramer  wrote:
 
>>> 
>>> 
>>> 
 On Thu, 1 Apr 2021 at 11:09, Glen Huang  wrote:
 No, but are they equivalent to serializable transactions?
>>> 
>>> No, they are not. 
>>> 
>>> 
>>> 
>>> Dave Cramer
>>> www.postgres.rocks
 
>> On Apr 1, 2021, at 11:04 PM, Dave Cramer  
>> wrote:
>> 
> 
> 
> 
> 
>> On Thu, 1 Apr 2021 at 10:50, Glen Huang  wrote:
>> Hi all,
>> 
>> From application’s standpoint, it seems using CTE saves a lot work. You 
>> no longer need to parse values out only to pass them back in, and only 
>> one round-trip to the db server.
>> 
>> If I’m not wrong, CTE is equivalent to serializable transactions? So I 
>> guess the downsize is that quarries can’t be run in parallel?
> 
> I do not think a CTE changes the isolation level. 
>> 
>> If I decide to replace all my transaction code with CTE, will I shoot 
>> myself in the foot down the road?
> 
> 
> Dave Cramer
> www.postgres.rocks 


storage in mem

2021-04-01 Thread Marc Millas
To my understanding, Postgres is now architected so that the engine and
storage are "split",
and so one may think that storage code different from the
traditional Postgres thing is possible.
I have heard of Oracle-like redo log things, etc.

I wonder if an in-memory storage thing is on the way, somewhere?

The why (1) of this question:
it's quite easy, nowadays, to find blades with 2 TB of RAM (or even much
more!)

and, to my understanding (correct me if I am wrong!), Postgres shared
buffers are not really efficient with "some" terabytes of allocated space.

The why (2): I may have to do a POC with SAP HANA on one side and Postgres
on the other,
on machines with a lot of RAM.
so...
can someone shed some light?

thanks,

thanks,


Marc MILLAS
Senior Architect
+33607850334
www.mokadb.com


Re: Is replacing transactions with CTE a good idea?

2021-04-01 Thread Brian Dunavant
On Thu, Apr 1, 2021 at 11:06 AM Glen Huang  wrote:

> Care to expand why they are tricker? I presume they run the risk of being
> referenced more than once?
>

There are lots of gotchas. It's also been a few years since I dug deep into
this, so some of this may have changed in more recent versions.

* Changes made in a CTE aren't visible to later CTEs, since they haven't
happened yet.  Often people update a table and then do further things, and
hit situations they weren't expecting.

db=> create table foo ( a integer primary key );
NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index "foo_pkey"
for table "foo"
CREATE TABLE
db=> insert into foo values ( 1 );
INSERT 0 1
db=> with upt as ( update foo set a = 2 ) insert into foo values (1);
ERROR:  duplicate key value violates unique constraint "foo_pkey"
DETAIL:  Key (a)=(1) already exists.

* Unless you reference between the CTEs to force ordering, CTEs can happen
in any order, which can cause things to get out of the order people
expected.

* Just like you can cause deadlocks between threads in a transaction, you
can do the same thing by shoving all those statements into a single CTE
query.
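On the ordering point, referencing one CTE from the next is what forces the order (continuing the foo example above):

```sql
-- The INSERT reads from del, so the DELETE must run first; without
-- that reference, the two statements may execute in any order.
WITH del AS (
    DELETE FROM foo WHERE a = 1
    RETURNING a
)
INSERT INTO foo
SELECT a + 1 FROM del;
```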


Re: How to implement expiration in PostgreSQL?

2021-04-01 Thread David G. Johnston
On Thu, Apr 1, 2021 at 7:23 AM Glen Huang  wrote:

> > I'd say that was onerous and you could get the same effect with a
> well-crafted query that targetted only those that might possibly expire.
>
> I wish one cron job could rule them all, but since a person can decide to
> join at any time, her expiration (e.g., after 1 hour) can also happen at
> any time. So one cron job won’t cut it if a member’s expiration has to bee
> accurate.
>

There are indeed a number of moving parts here but I would suggest that
setting up a more static data model would be a well advised starting
position.  Write queries that take the expiration timestamp into account,
noting whether the time it represents is in the past or future.  The
reduced update burden will be a boon for both complexity of the updates
themselves as well as for your I/O subsystem that has to handle vacuuming
all of those dead tuples.
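Concretely, storing an expiration timestamp once at insert time means the open-seats query never depends on a background update (the expires_at column name is a suggestion, not from the thread):

```sql
-- A seat is held if it was paid for, or if the hold has not yet lapsed;
-- expiry "happens" purely by comparison against the clock.
SELECT club_id, count(*) AS num_seated_member
  FROM member
 WHERE paid OR expires_at > now()
 GROUP BY club_id;
```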

David J.


Slick way to update multiple tables.

2021-04-01 Thread Theodore M Rolle, Jr.
I SELECT items from three tables.

It seems that the only way to put updated fields into their proper table is
to query *each* field, then UPDATE *each* table.

Am I missing something here?
Is there a way to do this automatically?

-- 
 GnuPG/PGP key: 0xDD4276BA
 PI VOBISCUM!


Re: Slick way to update multiple tables.

2021-04-01 Thread Michael Lewis
You can have an updatable view.


Re: Slick way to update multiple tables.

2021-04-01 Thread Theodore M Rolle, Jr.
On Thu, Apr 1, 2021 at 12:43 PM Michael Lewis  wrote:

> You can have an updatable view.
>

*Show-stopper?*
The defining query of the view must have exactly one entry in the FROM clause,
which can be a table or another updatable view.

Or multiple VIEWs? I've never done VIEWs...




Re: Slick way to update multiple tables.

2021-04-01 Thread Michael Lewis
Joins are allowed though.


Re: questions about wraparound

2021-04-01 Thread Jehan-Guillaume de Rorthais
Hi Luca,

On Mon, 22 Mar 2021 08:56:46 +0100
Luca Ferrari  wrote:

> I can confirm that freezing a template database is done by means of setting
> it age to zero.
[...]
> and here it is the situation after a restart:
> 
> testdb=> select datname, age( datfrozenxid ) from pg_database;  
>   datname  |   age
> ---+--
>  postgres  | 1234
>  backupdb  | 5000
>  template1 |0
>  template0 |0
>  testdb|0
>  pgbench   | 5000

The difference between a plain "vacuum" and a "vacuum freeze" is whether the
vacuum process must scan non-frozen blocks as well, according to the visibility
map. The latter is called an "aggressive" vacuum because it scans all blocks,
even the clean ones, as long as they are not already marked as frozen in the
visibility map.

Whichever vacuum you launch, if the process finds a row older than
vacuum_freeze_min_age, it freezes it. Aggressive vacuum is no different and
respects vacuum_freeze_min_age as well. That's why the oldest row in each of
your databases is 5000 after a "vacuum freeze" (aka aggressive vacuum, aka
"vacuum to avoid wraparound").

Try temporarily setting vacuum_freeze_min_age=4500 and
vacuum_freeze_table_age=0, then run a simple vacuum on your database. This will
effectively freeze your database and set its age to that of the oldest row:
your new vacuum_freeze_min_age, 4500.
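A sketch of that experiment (session-level settings, reverted at session end):

```sql
-- Temporarily lower the freeze horizon for this session only.
SET vacuum_freeze_min_age = 4500;   -- freeze rows older than 4500 XIDs
SET vacuum_freeze_table_age = 0;    -- force an aggressive (whole-table) scan

-- A plain VACUUM now behaves like an aggressive freeze.
VACUUM;

-- The database age should drop to the new vacuum_freeze_min_age.
SELECT datname, age(datfrozenxid) FROM pg_database;

-- Restore the defaults explicitly if you keep the session open:
RESET vacuum_freeze_min_age;
RESET vacuum_freeze_table_age;
```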

As for the template databases, the default vacuum_freeze_min_age is forced to
0 in the source code. That's why you see a different age for template
databases than for the others after a freeze.

Regarding the databases' ages moving together even when only one of them is
receiving writes: the XID space is shared between all the databases. In other
words, a transaction ID cannot be used in two different databases, except for
shared relations (those in the pg_global tablespace) and probably relations
cloned from templates.
So if a single database is consuming XIDs, all the others grow older and older
and will eventually need a vacuum.

> I'm not able to find this behavior in the documentation however,

I don't know if it is explained somewhere in the docs; I couldn't find it either.
But you can find this information in function "do_autovacuum()" in
src/backend/postmaster/autovacuum.c:

  /*
   * Find the pg_database entry and select the default freeze ages. We use
   * zero in template and nonconnectable databases, else the system-wide
   * default.
   */
  tuple = SearchSysCache1(DATABASEOID, ObjectIdGetDatum(MyDatabaseId));
  if (!HeapTupleIsValid(tuple))
  elog(ERROR, "cache lookup failed for database %u", MyDatabaseId);
  dbForm = (Form_pg_database) GETSTRUCT(tuple);
  
  if (dbForm->datistemplate || !dbForm->datallowconn)
  {
  default_freeze_min_age = 0;
  default_freeze_table_age = 0;
  default_multixact_freeze_min_age = 0;
  default_multixact_freeze_table_age = 0;
  }
  else
  {
  default_freeze_min_age = vacuum_freeze_min_age;
  default_freeze_table_age = vacuum_freeze_table_age;
  default_multixact_freeze_min_age = vacuum_multixact_freeze_min_age;
  default_multixact_freeze_table_age = vacuum_multixact_freeze_table_age;
  }

> and still don't understand why a template database should have a different
> behavior (at least, I can imagine only to reduce the future workload of
> vacuuming a template database).

I suspect it helps create databases with already-frozen blocks, cloned from a
fully frozen template.

Regards,




Re: Slick way to update multiple tables.

2021-04-01 Thread Michael Lewis
postgresql.org/docs/current/sql-createview.html

My apologies. It seems INSTEAD OF triggers are required to implement
updates across multiple tables. I thought not if all were simple joins. My
mistake.
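A minimal sketch of the INSTEAD OF trigger approach, assuming two hypothetical tables (person and address) joined one-to-one in a view:

```sql
CREATE TABLE person  (person_id int PRIMARY KEY, name text);
CREATE TABLE address (person_id int PRIMARY KEY REFERENCES person, city text);

CREATE VIEW person_with_address AS
SELECT p.person_id, p.name, a.city
FROM person p JOIN address a USING (person_id);

-- The view spans two tables, so it is not automatically updatable;
-- an INSTEAD OF trigger routes each column back to its base table.
CREATE FUNCTION person_with_address_upd() RETURNS trigger AS $$
BEGIN
  UPDATE person  SET name = NEW.name WHERE person_id = OLD.person_id;
  UPDATE address SET city = NEW.city WHERE person_id = OLD.person_id;
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER person_with_address_upd
INSTEAD OF UPDATE ON person_with_address
FOR EACH ROW EXECUTE FUNCTION person_with_address_upd();

-- A single UPDATE against the view now touches both tables:
UPDATE person_with_address SET name = 'Ada', city = 'London'
WHERE person_id = 1;
```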


Re: Slick way to update multiple tables.

2021-04-01 Thread Paul Jungwirth

On 4/1/21 11:54 AM, Michael Lewis wrote:
> postgresql.org/docs/current/sql-createview.html
>
> My apologies. It seems INSTEAD OF triggers are required to implement
> updates across multiple tables. I thought not if all were simple joins.
> My mistake.


Even with INSTEAD OF triggers, if you use a view then I suppose you 
would be forced to update some of the records more often than necessary? 
(Unless your tables are 1-to-1-to-1, of course.) Or if there is some 
trick to avoid that, I'd be curious to know about it.


Here is something I've done in the past:

WITH
update1(ok) AS (
  UPDATE foo SET ... WHERE ...
  RETURNING 'ok'
),
update2(ok) AS (
  UPDATE bar SET ... WHERE ...
  RETURNING 'ok'
),
update3(ok) AS (
  UPDATE baz SET ... WHERE ...
  RETURNING 'ok'
)
SELECT ok FROM update1
UNION ALL
SELECT ok FROM update2
UNION ALL
SELECT ok FROM update3
;

You could even select different messages from each update if you want to 
know how many rows you touched in each table.
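For instance, each CTE could return a label and let the outer query count the rows touched per table (the table, column, and predicate names here are placeholders):

```sql
WITH
update1(tbl) AS (
  UPDATE foo SET x = 1 WHERE id = 42 RETURNING 'foo'
),
update2(tbl) AS (
  UPDATE bar SET y = 2 WHERE id = 42 RETURNING 'bar'
)
-- One row per updated row, grouped into a per-table count:
SELECT tbl, count(*) AS rows_updated
FROM (SELECT tbl FROM update1
      UNION ALL
      SELECT tbl FROM update2) AS u
GROUP BY tbl;
```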


--
Paul  ~{:-)
p...@illuminatedcomputing.com




Re: Is replacing transactions with CTE a good idea?

2021-04-01 Thread Bruce Momjian
On Thu, Apr  1, 2021 at 11:24:48AM -0400, Dave Cramer wrote:
> CTE's don't change the isolation level. I'm not sure what you are getting at
> here ?

I think what he/she means here is that all the queries in a statement
containing CTEs use a single snapshot, meaning you don't see changes from
commits that happen between the queries that make up that statement.  If you
were running the queries separately in read committed mode, you would see
those changes, but you would not see them in repeatable read or serializable
transaction mode.
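A sketch of the difference (two concurrent sessions; counter is a hypothetical one-row table):

```sql
-- Session A, read committed: each statement gets a fresh snapshot,
-- so two separate SELECTs can observe an intervening commit.
SELECT n FROM counter;           -- e.g. 1
-- (session B commits: UPDATE counter SET n = 2)
SELECT n FROM counter;           -- now 2

-- But inside a single statement with CTEs, every part shares one
-- snapshot, so both legs see the same value even if session B
-- commits while the statement is executing:
WITH first  AS (SELECT n FROM counter),
     second AS (SELECT n FROM counter)
SELECT * FROM first, second;
```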

-- 
  Bruce Momjian  https://momjian.us
  EDB  https://enterprisedb.com

  If only the physical world exists, free will is an illusion.





Re: Upgrading from 11 to 13

2021-04-01 Thread Bruce Momjian
On Tue, Mar 30, 2021 at 08:30:00AM -0700, David G. Johnston wrote:
> On Tue, Mar 30, 2021 at 8:25 AM Daniel Westermann (DWE) <
> daniel.westerm...@dbi-services.com> wrote:
> The best place would be the release notes, I guess. Right at the beginning
> here:
> 
> Release notes are probably a good place too but there is already a section in
> the main documentation where this needs to be added.
> 
> https://www.postgresql.org/docs/current/upgrading.html
> 
> Which is linked to from the main website.
> 
> https://www.postgresql.org/support/versioning/
> 
> An explicit sentence stating that major upgrades can skip major versions is
> needed.  The document is written assuming the reading knows this, and just
> makes a few minor notes on the topic:
> e.g., "If you are upgrading across several major versions, be sure to read the
> release notes for each intervening version."

I have two patches --- one for our website, and another for our docs,
though not for the release notes specifically.  The release notes do
reference this doc section though.

My idea was to link the ability to skip upgrading to intervening
versions with the need to read intervening version release notes.

-- 
  Bruce Momjian  https://momjian.us
  EDB  https://enterprisedb.com

  If only the physical world exists, free will is an illusion.

diff --git a/templates/support/versioning.html b/templates/support/versioning.html
index 45c039bf..98894f1d 100644
--- a/templates/support/versioning.html
+++ b/templates/support/versioning.html
@@ -58,7 +58,9 @@ compatibility of all stored data.  A dump/reload of the database or use of the
 pg_upgrade module is required
 for major upgrades. We also recommend reading the
 upgrading section of the major
-version you are planning to upgrade to.
+version you are planning to upgrade to.  Though you can upgrade from
+one major version to another without upgrading to intervening versions,
+you should read the major release notes of all intervening versions.
 
 
 
diff --git a/doc/src/sgml/runtime.sgml b/doc/src/sgml/runtime.sgml
index bf877c0e0c..001d195b8e 100644
--- a/doc/src/sgml/runtime.sgml
+++ b/doc/src/sgml/runtime.sgml
@@ -1692,9 +1692,9 @@ $ kill -INT `head -1 /usr/local/pgsql/data/postmaster.pid`);  pay particular attention to the section
-   labeled "Migration".  If you are upgrading across several major
-   versions, be sure to read the release notes for each intervening
-   version.
+   labeled "Migration".  Though you can upgrade from one major version
+   to another without upgrading to intervening versions, you should read
+   the major release notes of all intervening versions.
   
 
   


Re: Upgrading from 11 to 13

2021-04-01 Thread Jonathan S. Katz
On 4/1/21 6:06 PM, Bruce Momjian wrote:
>> An explicit sentence stating that major upgrades can skip major versions is
>> needed.  The document is written assuming the reading knows this, and just
>> makes a few minor notes on the topic:
>> e.g., "If you are upgrading across several major versions, be sure to read 
>> the
>> release notes for each intervening version."
> 
> I have two patches --- one for our website

Reviewed, made a couple of edits, and pushed to the website[1].

Jonathan

[1] https://www.postgresql.org/support/versioning/





Re: Upgrading from 11 to 13

2021-04-01 Thread Koen De Groote
I seem to recall that going from 11 to 12, a certain configuration file was
removed and the keys are now expected to be set in the regular
configuration file? The logic being there should only ever be 1
configuration file.

I can't find it, but at the same time I don't recall what it's called. I
believe it has to do with streaming replication?

Is this a thing or am I imagining stuff?

On Fri, Apr 2, 2021 at 12:34 AM Bruce Momjian  wrote:

> On Tue, Mar 30, 2021 at 08:30:00AM -0700, David G. Johnston wrote:
> > On Tue, Mar 30, 2021 at 8:25 AM Daniel Westermann (DWE) <
> > daniel.westerm...@dbi-services.com> wrote:
> > The best place would be the release notes, I guess. Right at the
> beginning
> > here:
> >
> > Release notes are probably a good place too but there is already a
> section in
> > the main documentation where this needs to be added.
> >
> > https://www.postgresql.org/docs/current/upgrading.html
> >
> > Which is linked to from the main website.
> >
> > https://www.postgresql.org/support/versioning/
> >
> > An explicit sentence stating that major upgrades can skip major versions
> is
> > needed.  The document is written assuming the reading knows this, and
> just
> > makes a few minor notes on the topic:
> > e.g., "If you are upgrading across several major versions, be sure to
> read the
> > release notes for each intervening version."
>
> I have two patches --- one for our website, and another for our docs,
> though not for the release notes specifically.  The release notes do
> reference this doc section though.
>
> My idea was to link the ability to skip upgrading to intervening
> versions with the need to read intervening version release notes.
>
> --
>   Bruce Momjian  https://momjian.us
>   EDB  https://enterprisedb.com
>
>   If only the physical world exists, free will is an illusion.
>
>


Re: Upgrading from 11 to 13

2021-04-01 Thread Alvaro Herrera
On 2021-Apr-02, Koen De Groote wrote:

> I seem to recall that going from 11 to 12, a certain configuration file was
> removed and the keys are now expected to be set in the regular
> configuration file? The logic being there should only ever be 1
> configuration file.
> 
> I can't find it, but at the same time I don't recall what it's called. I
> believe it has to do with streaming replication?
> 
> Is this a thing or am I imagining stuff?

recovery.conf to postgresql.conf?  Yes, you're right.  (There are more
reasons beyond "just one config file", though.)

-- 
Álvaro Herrera         39°49'30"S 73°17'W
"Someone said that it is at least an order of magnitude more work to do
production software than a prototype. I think he is wrong by at least
an order of magnitude."  (Brian Kernighan)




Re: Upgrading from 11 to 13

2021-04-01 Thread Bruce Momjian
On Thu, Apr  1, 2021 at 07:26:14PM -0400, Jonathan Katz wrote:
> On 4/1/21 6:06 PM, Bruce Momjian wrote:
> >> An explicit sentence stating that major upgrades can skip major versions is
> >> needed.  The document is written assuming the reading knows this, and just
> >> makes a few minor notes on the topic:
> >> e.g., "If you are upgrading across several major versions, be sure to read 
> >> the
> >> release notes for each intervening version."
> > 
> > I have two patches --- one for our website
> 
> Reviewed, made a couple of edits, and pushed to the website[1].
> 
> Jonathan
> 
> [1] https://www.postgresql.org/support/versioning/

Patch applied to the docs.

-- 
  Bruce Momjian  https://momjian.us
  EDB  https://enterprisedb.com

  If only the physical world exists, free will is an illusion.





Re: Upgrading from 11 to 13

2021-04-01 Thread Bruce Momjian


Here it is with descriptions:

Hepatitis A
Dates on file: 09/01/2005, 01/27/2005

IPV (Polio)
Dates on file: 09/01/2005

Influenza Vaccine Quadrivalent 3 Yr And Older
Dates on file: 08/25/2020, 10/19/2015

Influenza Vaccine Quadrivalent Preservative Free 6 Mons and Up
Dates on file: 08/25/2020, 11/26/2018

Influenza Vaccine Quadrivalent Preservative Free 6-35 Months
Dates on file: 10/30/2017, 10/26/2016, 10/07/2013

MMR (Measles, Mumps, and Rubella)
Dates on file: 06/27/2006

Td (Tetanus)
Dates on file: 01/27/2005

Tdap (diphtheria, pertussis, and tetanus)
Dates on file: 10/07/2013

Typhoid Live
Dates on file: 01/24/2017, 03/03/2011

Yellow Fever
Dates on file: 03/03/2011

Zoster Vaccine Recombinant Adjuvanted (Shingrix)
Dates on file: 10/31/2020, 08/25/2020

---

On Thu, Apr  1, 2021 at 09:18:07PM -0400, Bruce Momjian wrote:
> On Thu, Apr  1, 2021 at 07:26:14PM -0400, Jonathan Katz wrote:
> > On 4/1/21 6:06 PM, Bruce Momjian wrote:
> > >> An explicit sentence stating that major upgrades can skip major versions is
> > >> needed.  The document is written assuming the reading knows this, and 
> > >> just
> > >> makes a few minor notes on the topic:
> > >> e.g., "If you are upgrading across several major versions, be sure to 
> > >> read the
> > >> release notes for each intervening version."
> > > 
> > > I have two patches --- one for our website
> > 
> > Reviewed, made a couple of edits, and pushed to the website[1].
> > 
> > Jonathan
> > 
> > [1] https://www.postgresql.org/support/versioning/
> 
> Patch applied to the docs.
> 
> -- 
>   Bruce Momjian  https://momjian.us
>   EDB  https://enterprisedb.com
> 
>   If only the physical world exists, free will is an illusion.
> 
> 
> 

-- 
  Bruce Momjian  https://momjian.us
  EDB  https://enterprisedb.com

  If only the physical world exists, free will is an illusion.





Re: Upgrading from 11 to 13

2021-04-01 Thread Bruce Momjian
On Thu, Apr  1, 2021 at 09:55:28PM -0400, Bruce Momjian wrote:
> 
> Here it is with descriptions:

Sorry, please ignore.

-- 
  Bruce Momjian  https://momjian.us
  EDB  https://enterprisedb.com

  If only the physical world exists, free will is an illusion.





Re: Primary keys and composite unique keys(basic question)

2021-04-01 Thread Merlin Moncure
On Wed, Mar 31, 2021 at 3:36 AM Mohan Radhakrishnan
 wrote:
>
> Hello,
>  We have UUIDs in our tables which are primary keys. But in some 
> cases
> we also identify a composite unique key apart from the primary key.
>
> My assumption is that there should be a unique key index created by us using 
> the composite key. And when we fetch using this composite key instead of the 
> primary key we have a performance boost due to the index.
>
> Is this right ? Are there more details I should read to understand this 
> better ? Please point. If I should use the query planner to look at the 
> statistics I will.

This is one of the great debates in computer science and it is not
settled.  There are various tradeoffs around using a composite key
derived from the data (aka natural key) vs generated identifiers. It's
a complex topic with many facets: performance, organization,
validation, and correctness are all relevant considerations.  I would
never use UUIDS for keys though.
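As to the original question, though: a UNIQUE constraint does create an index that the planner can use, so lookups by the composite key need not involve the primary key at all. A sketch with illustrative table and column names:

```sql
-- gen_random_uuid() is built in from PostgreSQL 13; older versions
-- need the pgcrypto extension for it.
CREATE TABLE account (
    account_id uuid PRIMARY KEY DEFAULT gen_random_uuid(),
    tenant text NOT NULL,
    email  text NOT NULL,
    UNIQUE (tenant, email)   -- creates a composite unique index
);

-- This lookup can use the (tenant, email) index directly;
-- confirm the plan with EXPLAIN:
EXPLAIN SELECT account_id
FROM account
WHERE tenant = 'acme' AND email = 'a@example.com';
```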

merlin




Re: Primary keys and composite unique keys(basic question)

2021-04-01 Thread Rob Sargent

On 4/1/21 8:28 PM, Merlin Moncure wrote:


> This is one of the great debates in computer science and it is not
> settled.  There are various tradeoffs around using a composite key
> derived from the data (aka natural key) vs generated identifiers. It's
> a complex topic with many facets: performance, organization,
> validation, and correctness are all relevant considerations.  I would
> never use UUIDS for keys though.
>
> merlin


And, pray tell, for what exactly would you use universally unique 
identifiers?





Re: Primary keys and composite unique keys(basic question)

2021-04-01 Thread Hemil Ruparel
I used UUID4 for customer IDs because I needed to interface with payment
providers. Is that wrong? Everywhere else except transaction IDs, I have
used serial ints.

On Fri 2 Apr, 2021, 8:56 AM Rob Sargent,  wrote:

> On 4/1/21 8:28 PM, Merlin Moncure wrote:
> >
> > This is one of the great debates in computer science and it is not
> > settled.  There are various tradeoffs around using a composite key
> > derived from the data (aka natural key) vs generated identifiers. It's
> > a complex topic with many facets: performance, organization,
> > validation, and correctness are all relevant considerations.  I would
> > never use UUIDS for keys though.
> >
> > merlin
> >
> >
> And, pray tell, for what exactly would you use universally unique
> identifiers.
>
>
>